NVIDIA Tesla V100: The Best Processor for AI Research

The company’s CEO Jen-Hsun Huang has unveiled a new processor for AI applications that packs remarkable computing power.

NVIDIA CEO Jen-Hsun Huang presented a new processor for AI applications called “Tesla V100”. With 21 billion transistors, it is said to be considerably more powerful than the 15-billion-transistor Pascal-based processor introduced a year ago. The chip measures 815 square millimeters, packs 5,120 CUDA processing cores, and delivers 7.5 teraflops of FP64 performance (up from the P100’s 5.3 teraflops).

Before introducing the product, the CEO recounted the history of AI. Deep learning research started to pay off about five years ago, when researchers began training neural networks on GPUs. The momentum continues with NVIDIA’s plan to train 100,000 developers in deep learning.

With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance. The next generation of NVIDIA NVLink connects multiple V100 GPUs at up to 300 GB/s to create the world’s most powerful computing servers. AI models that would consume weeks of computing resources on previous systems can now be trained in a few days. With this dramatic reduction in training time, a whole new world of problems will now be solvable with AI.
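As a rough illustration of the “weeks to days” claim (this sketch is ours, not NVIDIA’s, and the 10 TFLOPS baseline is an assumed figure): if training time scaled inversely with sustained throughput, a jump to 120 Tensor teraflops would shrink a multi-week job to a couple of days.

```python
# Back-of-the-envelope sketch: assume training time scales inversely
# with sustained throughput. This is an idealized assumption; real
# speedups depend on memory, interconnect, and software efficiency.

def scaled_training_time(old_days, old_tflops, new_tflops):
    """Estimate new training time from the throughput ratio."""
    return old_days * old_tflops / new_tflops

# Hypothetical example: a 21-day job on an assumed 10 TFLOPS system,
# moved to a 120 TFLOPS system, would ideally finish in 1.75 days.
print(scaled_training_time(21, 10, 120))  # → 1.75
```

In practice the speedup is smaller, since real workloads are rarely compute-bound end to end, but the direction of the claim holds.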


The new processor, manufactured by TSMC, can handle 120 Tensor teraflops and transfer data at 300 gigabytes per second over NVLink (roughly 20 times the bandwidth of a standard PCIe connection).
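To put the 300 GB/s figure in perspective, here is a small sketch (our own, with an assumed ~16 GB/s figure for a PCIe 3.0 x16 link as the point of comparison) of how long it takes to move a hypothetical 10 GB of model data over each link:

```python
# Sketch: time to move data across a 300 GB/s NVLink-class link,
# versus an assumed ~16 GB/s PCIe 3.0 x16 link for contrast.

def transfer_seconds(gigabytes, gb_per_s):
    """Idealized transfer time, ignoring protocol overhead."""
    return gigabytes / gb_per_s

model_gb = 10  # hypothetical 10 GB of parameters/activations
print(transfer_seconds(model_gb, 300))  # NVLink-class link: ~0.033 s
print(transfer_seconds(model_gb, 16))   # PCIe-class link:    0.625 s
```

The gap matters most for multi-GPU servers, where GPUs exchange gradients and activations constantly during training.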

The future of deep learning looks bright. What do you think about the announcement? Make sure to share your thoughts in the comments below. 

Via Venturebeat
