
Nvidia TensorRT 3 speeds up inferencing by 100X

10 October 2017


4.8 FPS on CPU to 550 FPS on Volta

Nvidia has come up with a new optimizing compiler called TensorRT 3 and demonstrated how Volta obliterates a modern server CPU.

The unnamed CPU was able to run a trained flower-recognition network at 4.8 FPS. As CEO Jensen Huang pointed out, humans would not be able to tell most of the distinct flower species apart, while the CPU-based system flawlessly recognized every single one.

Once Nvidia turned to a Volta-based system running the same network optimized with TensorRT 3, it was able to accelerate the recognition to 500 to 550 FPS with the same accuracy.

Conservatively, you can call this a 100X speedup: the rate fluctuated, but it stayed above 100 times the CPU figure for most of the demo (500 / 4.8 is roughly 104X, and 550 / 4.8 is roughly 115X).


This is a great speedup that will save a lot of time, energy and effort, and it can be applied to any image recognition model where inference is needed. It is hard not to see the benefits of parallel computing here: the GPU, as a processor, is simply built for these massively parallel tasks.
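Nvidia did not publish the benchmark code behind the demo, but a minimal sketch of the kind of comparison being described, measuring image-recognition inference throughput (FPS) on CPU versus GPU, might look like the following. It uses PyTorch and a ResNet-50 purely as stand-ins; the model, image size and iteration counts are illustrative assumptions rather than Nvidia's actual setup, and TensorRT itself would add a further optimization step on top of the plain GPU path shown here.

import time
import torch
import torchvision.models as models

def measure_fps(model, device, n_images=200):
    """Run n_images single-image inferences and return frames per second."""
    model = model.to(device).eval()
    x = torch.randn(1, 3, 224, 224, device=device)  # dummy 224x224 RGB image
    with torch.no_grad():
        for _ in range(10):                  # warm-up iterations
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()         # make sure warm-up work has finished
        start = time.time()
        for _ in range(n_images):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()         # wait for all GPU work before stopping the clock
        elapsed = time.time() - start
    return n_images / elapsed

if __name__ == "__main__":
    net = models.resnet50(weights=None)      # untrained weights are fine for a pure speed test
    cpu_fps = measure_fps(net, torch.device("cpu"))
    print(f"CPU: {cpu_fps:.1f} FPS")
    if torch.cuda.is_available():
        gpu_fps = measure_fps(net, torch.device("cuda"))
        print(f"GPU: {gpu_fps:.1f} FPS ({gpu_fps / cpu_fps:.0f}X faster)")

The explicit synchronize calls matter for this kind of measurement: GPU kernels launch asynchronously, so without them the timer would stop before the work has actually completed and the FPS figure would be meaningless.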
