NVIDIA Sets Six Records in AI Performance
13 December, 2018

Another huge milestone for Nvidia. The team has set six new records for how fast an AI model can be trained on a predetermined set of datasets. The records were set under MLPerf, a benchmark suite created by prominent companies in the space to standardize how AI training and inference speed are measured.

Companies that contributed to the creation of MLPerf include Google, Nvidia, Baidu, and supercomputer maker Cray.

Nvidia set records for image classification (ResNet-50 v1.5 on the ImageNet dataset), object instance segmentation, object detection, non-recurrent translation, recurrent translation, and recommendation systems.

“For all of these benchmarks we outperformed the competition by up to 4.7x faster,” Nvidia VP and general manager of accelerated computing Ian Buck stated. “There are certainly faster DGX-2 ResNet-50 renditions out there, but none under MLPerf benchmark guidelines.”

The records were achieved on Nvidia DGX systems, which use NVSwitch interconnects to link up to 16 fully connected V100 Tensor Core GPUs; the platform was presented back in spring 2018. The team submitted results in the single-node category with 16 GPUs, as well as in distributed training across up to 80 nodes totaling 640 GPUs.
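As a quick sanity check on the cluster figures (16 GPUs in the single-node DGX-2 entry, 640 GPUs over 80 nodes at scale), the per-node GPU count in the distributed runs can be worked out from the numbers in the article; this is simple arithmetic, not a detail from Nvidia's submission:

```python
# Figures reported in the article
total_gpus = 640       # GPUs in the largest distributed run
total_nodes = 80       # nodes in that run
single_node_gpus = 16  # V100s in one fully connected DGX-2

# Each node in the at-scale runs carries fewer GPUs than the 16-GPU DGX-2
gpus_per_node = total_gpus // total_nodes
print(gpus_per_node)  # 8
```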

On a single node, the team trained ResNet-50 in 70 minutes; with distributed training, the time dropped to 6.3 minutes.
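Taking the two ResNet-50 times at face value, the scale-out speedup is easy to compute (numbers from the article; the comparison ignores hardware differences between the two submissions):

```python
single_node_minutes = 70.0  # ResNet-50 on one 16-GPU DGX-2
distributed_minutes = 6.3   # ResNet-50 across the 640-GPU cluster

# Ratio of single-node to distributed training time
speedup = single_node_minutes / distributed_minutes
print(f"{speedup:.1f}x")  # 11.1x
```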

