It explores and enumerates the immediate risks presented by AI.
NVIDIA has introduced its AI red team, meant to explore the risks presented by AI. It is a cross-functional team made up of offensive security professionals and data scientists using their skills to assess NVIDIA's ML systems to identify and help mitigate any risks from the perspective of information security.
Its framework allows the team to address issues in specific parts of the ML pipeline, infrastructure, or technologies. Any subsection can be isolated, expanded, and described within the context of the whole system. Here are some examples:
- Evasion is expanded to include specific algorithms or TTPs that would be relevant for an assessment of a particular model type. Red teams can point to exactly the infrastructure components that are affected.
- Technical vulnerabilities can affect any level of infrastructure or just a specific application. They can be dealt with in the context of their function and risk-rated accordingly.
- Harm-and-abuse scenarios that are foreign to many information security practitioners are not only included but are integrated. In this way, we motivate technical teams to consider harm-and-abuse scenarios as they’re assessing ML systems. Or, they can provide ethics teams access to tools and expertise.
- Requirements handed down can be integrated more quickly, both old and new.
The AI red team is interested in revealing technical, reputational, and compliance risks and uses a high-level structure to achieve its goals, including:
- Address new prompt-injection techniques
- Examine and define security boundaries
- Use privilege tiering
- Conduct tabletop exercises
NVIDIA encourages other companies to adopt and adapt its methodology. You can get a closer look here.
Also, don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.