Assessing Risks and Impacts of AI (ARIA) is a research program by the National Institute of Standards and Technology (NIST) aimed at developing evaluation methods and criteria that assess AI’s risks and impacts in real-world…
As required under the October 30, 2023, Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI), the White House issued a memorandum providing further direction on…
AI “red teaming” involves simulating attacks on AI systems to uncover vulnerabilities and enhance security. It is becoming an increasingly important practice as regulatory frameworks, such as the National Institute of…
Large language models (LLMs) have a well-known propensity to “hallucinate,” or provide false information in response to the user's prompt. Note that the National Institute of Standards and Technology's preferred term for this…