NVIDIA Tesla V100: The Best Processor for AI Research
11 May, 2017
NVIDIA CEO Jen-Hsun Huang presented a new processor for AI applications called “Tesla V100”. With 21 billion transistors, it is said to be more powerful than the 15-billion-transistor Pascal-based processor presented a year ago. The chip measures 815 square millimeters, packs 5,120 CUDA processing cores, and delivers 7.5 teraflops of FP64 performance (three times faster than last year’s product).

Before introducing the processor, Huang talked about the history of AI. Deep learning neural network research started to pay off about five years ago, when researchers began using GPUs to accelerate training. The technology is gaining momentum with NVIDIA’s plans to train 100,000 developers to use deep learning.

With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance. The next generation of NVIDIA NVLink connects multiple V100 GPUs at up to 300 GB/s to create the world’s most powerful computing servers. AI models that would consume weeks of computing resources on previous systems can now be trained in a few days. With this dramatic reduction in training time, a whole new world of problems will now be solvable with AI.
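As a rough sanity check on the quoted numbers, peak Tensor Core throughput can be estimated from the core count: each of the 640 Tensor Cores performs a 4×4×4 mixed-precision matrix fused multiply-add per cycle. This is only a back-of-the-envelope sketch; the boost clock used below is an assumed figure, not stated in the article:

```python
# Back-of-the-envelope peak throughput for Tesla V100 Tensor Cores.
# The 1.53 GHz boost clock is an assumed figure, not from the article.
tensor_cores = 640
fma_per_core_per_cycle = 64   # a 4x4x4 matrix FMA = 64 fused multiply-adds
flops_per_fma = 2             # one multiply + one add
boost_clock_hz = 1.53e9

peak_tflops = (tensor_cores * fma_per_core_per_cycle
               * flops_per_fma * boost_clock_hz) / 1e12
print(f"~{peak_tflops:.0f} TFLOPS peak")  # in line with the "over 100 TFLOPS" claim
```

The result lands at roughly 125 TFLOPS, consistent with the 100-plus-TFLOPS figure cited for deep learning workloads.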


The new processor, manufactured by TSMC, can handle 120 Tensor teraflops and transfer data at 300 gigabytes per second, about 20 times faster than other processors.
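To put the 300 GB/s NVLink figure in context, a quick division against a standard PCIe 3.0 x16 link shows where a roughly 20x claim comes from. The PCIe bandwidth value below is a general spec number, not taken from the article:

```python
# Compare V100's NVLink bandwidth against a PCIe 3.0 x16 link.
# The PCIe figure (~15.75 GB/s) is a general spec value, not from the article.
nvlink_gb_per_s = 300.0
pcie3_x16_gb_per_s = 15.75

speedup = nvlink_gb_per_s / pcie3_x16_gb_per_s
print(f"NVLink is ~{speedup:.0f}x the bandwidth of PCIe 3.0 x16")
```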

The future of deep learning looks bright. What do you think about the announcement? Make sure to share your thoughts in the comments below. 

Via VentureBeat

Source: NVIDIA
