NVIDIA Tesla V100: The Best Processor for AI Research
11 May, 2017
News
NVIDIA CEO Jen-Hsun Huang presented a new processor for AI applications called the Tesla V100. With 21 billion transistors, it is said to be more powerful than the 15-billion-transistor Pascal-based processor presented a year ago. The 815-square-millimeter chip packs 5,120 CUDA cores and delivers 7.5 teraflops of FP64 performance, which NVIDIA says is three times faster than last year's product.

Before introducing the processor, Huang talked about the history of AI. Deep learning research started to pay off about five years ago, when researchers began using GPUs to train neural networks. The technology is gaining momentum, and NVIDIA plans to train 100,000 developers to use deep learning.

With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance. The next generation of NVIDIA NVLink connects multiple V100 GPUs at up to 300 GB/s to create the world’s most powerful computing servers. AI models that would consume weeks of computing resources on previous systems can now be trained in a few days. With this dramatic reduction in training time, a whole new world of problems will now be solvable with AI.
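The "weeks to days" claim can be sanity-checked with some back-of-envelope arithmetic. The sketch below is illustrative only: it assumes a perfectly compute-bound training job with ideal scaling, and uses the published peak FP32 throughput of the previous-generation Tesla P100 (about 10.6 teraflops) as the baseline; real workloads fall well short of peak on both chips.

```python
# Back-of-envelope speedup estimate. Illustrative assumptions:
# - baseline is Tesla P100 peak FP32 (~10.6 TFLOPS, published spec)
# - the workload is purely compute-bound and scales ideally
PASCAL_FP32_TFLOPS = 10.6
V100_TENSOR_TFLOPS = 120.0  # peak Tensor Core throughput quoted above

speedup = V100_TENSOR_TFLOPS / PASCAL_FP32_TFLOPS

# A hypothetical job that took three weeks on the older hardware:
old_days = 21
new_days = old_days / speedup

print(f"Theoretical speedup: {speedup:.1f}x")
print(f"Three-week job: ~{new_days:.1f} days on V100 (ideal scaling)")
```

Under these idealized assumptions the speedup is roughly 11x, which is consistent with the order of magnitude behind NVIDIA's "weeks to days" framing.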

NVIDIA

The new processor, manufactured by TSMC on a custom 12 nm process, can handle 120 Tensor teraflops and transfer data at 300 gigabytes per second, which NVIDIA says is 20 times faster than other processors.

The future of deep learning looks bright. What do you think about the announcement? Share your thoughts in the comments below.

Via Venturebeat

Source: NVIDIA
