Dear all,
The AI Safety Institute Consortium will help develop tools to measure and improve AI safety and trustworthiness.
To respond effectively, we are coordinating with the other units so that ONE submission is made on behalf of all of Mason. A working group consisting of myself, Missy Cummings, Sanmay Das, Jesse Kirkpatrick, and Alexander Monea is coordinating the response. More information on what we are putting together can be found here: https://www.nist.gov/artificial-intelligence/artificial-intelligence-safety-institute.
At this point, we are requesting information from you on specific expertise and areas of work that align with the consortium and through which we can contribute to it. We want to make sure that we obtain a comprehensive view of CEC and Mason. The same request for information is going to the faculty of other units, facilitated by the respective ADRs.
Given the scale of this exercise, please do not respond over email. Instead, respond in the entries outlined in this spreadsheet:
Please keep in mind that everyone can edit it. If I may suggest, draft your response in a local copy and, when ready, copy and paste it into a new row. Please be careful not to overwrite your colleagues' entries.
As NIST is moving very quickly on this, we are setting a firm deadline: on November 24th, we will lock the responses we have received and go forward with those.
Thank you for your help,
-Amarda
Amarda Shehu, PhD
Professor of Computer Science, College of Engineering and Computing
Associate Dean for AI Innovation, College of Engineering and Computing
Director, Center of Excellence in Government Cybersecurity Risk Management and Resilience
Associate Vice President of Research for the Institute for Digital InnovAtion (IDIA)
George Mason University, 4400 University Drive, MS 4A5, Fairfax, VA 22030