NVIDIA Tesla V100: The Best Processor for AI Research
11 May, 2017
News
NVIDIA CEO Jen-Hsun Huang presented a new processor for AI applications called “Tesla V100”. With 21 billion transistors, it is said to be more powerful than the 15-billion-transistor Pascal-based processor presented a year ago. The chip measures 815 square millimeters, packs 5,120 CUDA processing cores and delivers 7.5 teraflops of FP64 performance (three times faster than last year’s product).

Before introducing the product, the CEO talked about the history of AI. Deep learning neural network research started to pay off about five years ago, when researchers began using GPUs. The technology is gaining momentum, with NVIDIA planning to train 100,000 developers to use deep learning.

With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance. The next generation of NVIDIA NVLink connects multiple V100 GPUs at up to 300 GB/s to create the world’s most powerful computing servers. AI models that would consume weeks of computing resources on previous systems can now be trained in a few days. With this dramatic reduction in training time, a whole new world of problems will now be solvable with AI.
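For readers curious what “Tensor Core performance” means in practice, here is a minimal sketch (not part of the announcement) of how mixed-precision matrix math is typically routed to Tensor Cores through cuBLAS on a Volta-class GPU. The matrix sizes and the use of cublasGemmEx here are illustrative assumptions, not NVIDIA’s example.

// Illustrative sketch: FP16 inputs with FP32 accumulation, the pattern
// Tensor Cores accelerate. Compile with: nvcc tensor_gemm.cu -lcublas
#include <cublas_v2.h>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

int main() {
    const int m = 4096, n = 4096, k = 4096;  // assumed sizes for illustration

    __half *A, *B;   // half-precision inputs
    float  *C;       // single-precision output
    cudaMalloc(&A, sizeof(__half) * m * k);
    cudaMalloc(&B, sizeof(__half) * k * n);
    cudaMalloc(&C, sizeof(float)  * m * n);

    cublasHandle_t handle;
    cublasCreate(&handle);
    // Ask cuBLAS (CUDA 9+) to use Tensor Core code paths where available.
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                 &alpha,
                 A, CUDA_R_16F, m,
                 B, CUDA_R_16F, k,
                 &beta,
                 C, CUDA_R_32F, m,
                 CUDA_R_32F, CUBLAS_GEMM_DFALT_TENSOR_OP);

    cudaDeviceSynchronize();
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

Deep learning frameworks perform this kind of half-precision matrix multiply billions of times during training, which is where the quoted teraflops figures come from.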

NVIDIA

The new processor, manufactured by TSMC on a 12 nm process, can handle 120 Tensor teraflops and transfer data at 300 gigabytes per second over NVLink (about 20 times the bandwidth of a standard PCIe connection).

The future of deep learning looks bright. What do you think about the announcement? Make sure to share your thoughts in the comments below. 

Via Venturebeat

Source: NVIDIA
