
NVIDIA Uses AI to Create Smaller and Faster Circuits

The latest NVIDIA Hopper GPU architecture has nearly 13,000 instances of AI-designed circuits.

NVIDIA presented a new approach that uses AI to design smaller, faster, and more efficient circuits to deliver more performance with each chip generation. Its paper "PrefixRL: Optimization of Parallel Prefix Circuits using Deep Reinforcement Learning" shows that AI can learn to design these circuits from scratch. The company notes that the latest NVIDIA Hopper GPU architecture has nearly 13,000 instances of AI-designed circuits.

NVIDIA aims to find a design that "effectively trades off" circuit area and delay, producing the minimum-area circuit at every delay target. PrefixRL focuses on a class of arithmetic circuits called parallel prefix circuits. The AI agent learns to design prefix graphs, but it optimizes for the properties of the final circuit generated from the graph.
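A parallel prefix circuit computes every prefix of an associative operation, and different prefix graphs compute the same function with different area/delay trade-offs. A minimal Python illustration of that trade-off, using integer addition as a stand-in for the hardware carry operator (this is a conceptual sketch, not circuit-level code):

```python
def ripple_prefix(xs, op=lambda a, b: a + b):
    """Serial (ripple) prefix graph: n-1 nodes, depth n-1.
    Minimal area proxy (fewest nodes), maximal delay proxy (deepest graph)."""
    out = [xs[0]]
    for x in xs[1:]:
        out.append(op(out[-1], x))
    return out

def sklansky_prefix(xs, op=lambda a, b: a + b):
    """Divide-and-conquer (Sklansky-style) prefix graph: more nodes,
    but depth ~log2(n) -- a lower-delay, higher-area point on the curve."""
    n = len(xs)
    if n == 1:
        return list(xs)
    half = n // 2
    left = sklansky_prefix(xs[:half], op)
    right = sklansky_prefix(xs[half:], op)
    # Combine step: fold the last left prefix into every right prefix.
    return left + [op(left[-1], r) for r in right]

vals = [3, 1, 4, 1, 5, 9, 2, 6]
# Both graphs compute the same prefix sums; only their shape differs.
assert ripple_prefix(vals) == sklansky_prefix(vals)
```

The two functions return identical results; what differs is the structure of the computation, which is exactly what determines area and delay once the graph is mapped to gates.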

The researchers trained an agent to optimize the area and delay properties of arithmetic circuits. For prefix circuits, they designed an environment where the reinforcement learning agent can add or remove a node from the prefix graph; at each step, the agent receives the improvement in the corresponding circuit area and delay as rewards.
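The add/remove-node environment described above can be sketched as follows. This is a toy illustration under stated assumptions, not NVIDIA's implementation: node count and graph depth stand in as proxies for area and delay, whereas the real system rewards improvements measured by physically synthesizing the circuit.

```python
import math

class PrefixGraphEnv:
    """Toy sketch of a PrefixRL-style environment (class name and proxy
    metrics are assumptions for illustration).

    State:  set of internal prefix-graph nodes, each a span (lo, hi)
            meaning "combine inputs lo..hi".
    Action: toggle one span -- add the node if absent, remove it if present.
    Reward: decrease in a weighted area/delay proxy caused by the edit.
    """

    def __init__(self, width, w_area=0.5, w_delay=0.5):
        self.width = width
        self.w_area, self.w_delay = w_area, w_delay
        # Start from the serial (ripple) graph: minimum area, maximum delay.
        self.nodes = {(0, i) for i in range(1, width)}

    def _depth(self, span, memo):
        """Logic depth of a span: inputs have depth 0; a node needs some
        split point k with both halves available in the graph."""
        lo, hi = span
        if lo == hi:
            return 0
        if span in memo:
            return memo[span]
        best = math.inf
        for k in range(lo, hi):
            left, right = (lo, k), (k + 1, hi)
            if (left[0] == left[1] or left in self.nodes) and \
               (right[0] == right[1] or right in self.nodes):
                best = min(best, 1 + max(self._depth(left, memo),
                                         self._depth(right, memo)))
        memo[span] = best
        return best

    def _cost(self):
        """Weighted proxy cost: node count ~ area, graph depth ~ delay."""
        outputs = [(0, i) for i in range(1, self.width)]
        memo = {}
        if any(o not in self.nodes or self._depth(o, memo) == math.inf
               for o in outputs):
            return math.inf  # illegal graph: some prefix output is missing
        delay = max(self._depth(o, memo) for o in outputs)
        return self.w_area * len(self.nodes) + self.w_delay * delay

    def step(self, span):
        """Apply one toggle action; reward is the resulting cost decrease."""
        before = self._cost()
        self.nodes.symmetric_difference_update({span})
        return before - self._cost()
```

For example, with `width=4` and delay weighted more heavily, adding the node `(2, 3)` to the ripple graph cuts the depth from 3 to 2 at the cost of one extra node, so the agent receives a positive reward; deleting a required output node makes the graph illegal and is heavily penalized.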

PrefixRL is compute-demanding: physical simulation required 256 CPUs for each GPU, and training the 64-bit case took over 32,000 GPU hours. To make this kind of industrial reinforcement learning tractable, NVIDIA developed Raptor, an in-house distributed reinforcement learning platform that takes advantage of NVIDIA hardware to improve scalability and training speed.

The results show that the best PrefixRL adder achieved 25% less area than an adder designed by electronic design automation (EDA) tools at the same delay. NVIDIA hopes the method "can be a blueprint for applying AI to real-world circuit design problems: constructing action spaces, state representations, RL agent models, optimizing for multiple competing objectives, and overcoming slow reward computation processes such as physical synthesis."
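The "minimum area at every delay" objective means the interesting designs form a Pareto frontier over (area, delay) pairs: a circuit belongs on the frontier only if no other circuit is at least as good on both metrics. A generic sketch of extracting that frontier from a set of candidate results (a standard filter, not NVIDIA's tooling; the sample points are made up):

```python
def pareto_frontier(points):
    """Return the non-dominated (area, delay) points, sorted by area.
    After sorting, a point survives only if it strictly improves on the
    best delay seen so far -- otherwise a cheaper-or-equal point beats it."""
    frontier = []
    for area, delay in sorted(set(points)):
        if not frontier or delay < frontier[-1][1]:
            frontier.append((area, delay))
    return frontier

# Hypothetical (area, delay) results for five candidate adders:
candidates = [(10, 5), (12, 3), (11, 4), (13, 3), (10, 6)]
print(pareto_frontier(candidates))
```

Here `(13, 3)` and `(10, 6)` are dropped because `(12, 3)` and `(10, 5)` dominate them; the remaining points are the minimum-area circuits at each achievable delay.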

