Michael Henninger | Thursday, August 31, 2023
School of Computer Science AI experts were among members of Carnegie Mellon University's Responsible AI Initiative that worked closely with the National Institute of Standards and Technology (NIST) to host a workshop this past July with the goal of operationalizing the NIST AI Risk Management Framework (AI RMF).
The framework, the result of a broad collaboration with the public and private sectors, provides guidelines to better manage the potential risks of AI systems at all levels of society. Its use can help integrate trustworthiness into AI by considering how those systems are designed, developed, used and evaluated.
For nearly 70 years, CMU has advanced artificial intelligence to shape the future of society. To continue that mission, the university's Block Center for Technology and Society will fund CMU faculty teams pursuing research ideas, generated at the workshop, to operationalize the AI RMF.
The event paired government officials and private sector leaders with CMU AI experts. Organizers included Rayid Ghani, a Distinguished Career Professor in the Machine Learning Department (MLD) and Heinz College; Jodi Forlizzi, the Herbert A. Simon Professor of Computer Science and Human-Computer Interaction; and Hoda Heidari, the K&L Gates Career Development Assistant Professor in Ethics and Computational Technologies in MLD and the Software and Societal Systems Department. All three are among the co-leads of the Responsible AI Initiative.
"Building on the work that NIST has done and CMU's knowledge of the NIST AI Risk Management Framework, we will work to ensure that we deploy this powerful technology in a way that acknowledges and manages the risks that accompany innovation and exploration," said SCS Dean Martial Hebert. "I am looking forward to participating in these conversations and to furthering this relationship going forward."
Read the full story on the CMU News website.
Aaron Aupperlee | 412-268-9068 | aaupperlee@cmu.edu