Safe Learning-Enabled Systems

National Science Foundation (NSF)

Deadline: Jan 16, 2024

Grant amount: Up to US $1,500,000

Fields of work: Artificial Intelligence (AI), Computer Science & Engineering

Applicant type: Nonprofit, College / University

Funding uses: Research

Location of project: United States

Location of residency: United States

Overview:

NOTE: All applications are due by 5:00 PM local time of the applicant organization.

As artificial intelligence (AI) systems rapidly increase in size, acquire new capabilities, and are deployed in high-stakes settings, their safety becomes extremely important. Ensuring system safety requires more than improving accuracy, efficiency, and scalability: it requires ensuring that systems are robust to extreme events and monitoring them for anomalous and unsafe behavior. The objective of the Safe Learning-Enabled Systems program, a partnership between the National Science Foundation, Open Philanthropy, and Good Ventures, is to foster foundational research that leads to the design and implementation of learning-enabled systems in which safety is ensured with high levels of confidence. While traditional machine learning systems are evaluated point-wise with respect to a fixed test set, such static coverage provides only limited assurance when a system is exposed to unprecedented conditions in high-stakes operating environments.
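
To make the contrast concrete, here is a minimal sketch (ours, not part of the solicitation) of point-wise test-set accuracy versus a worst-case check over systematically perturbed inputs. The model interface, the Gaussian noise model, and all parameter values are illustrative assumptions:

import numpy as np

def pointwise_accuracy(model, X_test, y_test):
    # Static coverage: accuracy on one fixed, held-out test set.
    return float(np.mean(model.predict(X_test) == y_test))

def worst_case_accuracy(model, X_test, y_test,
                        noise_scale=0.5, n_trials=20, seed=0):
    # Re-evaluate under systematically perturbed inputs as a crude
    # stand-in for conditions the fixed test set never covers, and
    # report the worst trial rather than the average.
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_trials):
        X_shift = X_test + rng.normal(0.0, noise_scale, size=X_test.shape)
        accs.append(float(np.mean(model.predict(X_shift) == y_test)))
    return min(accs)

A model that scores well point-wise but poorly under the worst-case check is exactly the kind of system whose static coverage gives limited assurance.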

Verifying that the learning components of such systems achieve safety guarantees for all possible inputs may be difficult, if not impossible. Instead, a system’s safety guarantees will often need to be established with respect to systematically generated data from realistic (yet appropriately pessimistic) operating environments. Safety also requires resilience to “unknown unknowns”, which necessitates improved methods for monitoring for unexpected environmental hazards or anomalous system behaviors, including during deployment. In some instances, safety may further require new methods for reverse-engineering, inspecting, and interpreting the internal logic of learned models to identify unexpected behavior that could not be found by black-box testing alone, as well as methods for improving performance by directly adapting the systems’ internal logic.
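
One simple shape such deployment-time monitoring can take is threshold-based anomaly detection, sketched below under assumed names (the scoring function, calibration data, and quantile are all hypothetical, and real monitors in this program's scope would be far richer):

import numpy as np

class DeploymentMonitor:
    # Flags inputs whose anomaly score exceeds a threshold calibrated
    # on known-good (in-distribution) data.
    def __init__(self, score_fn, calibration_inputs, quantile=0.999):
        scores = np.array([score_fn(x) for x in calibration_inputs])
        self.score_fn = score_fn
        self.threshold = float(np.quantile(scores, quantile))

    def is_anomalous(self, x):
        # True means the input looks unlike anything seen during
        # calibration and should trigger a fallback, e.g., handing
        # control to a conservative baseline policy.
        return self.score_fn(x) > self.threshold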

Whatever the setting, any learning-enabled system’s end-to-end safety guarantees must be specified clearly and precisely. Any system claiming to satisfy a safety specification must provide rigorous evidence, through analysis corroborated empirically and/or with mathematical proof.
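
As a toy illustration of what "specified clearly and precisely" can mean in practice, a safety property can be stated as a checkable predicate; the controller, the speed bound, and the state sampling below are all hypothetical:

V_MAX = 2.0  # hypothetical bound: commanded speed must never exceed 2 m/s

def satisfies_speed_spec(controller, sampled_states):
    # Empirical corroboration: check the property on states sampled from
    # the operating envelope. Passing here is evidence, not proof; a
    # guarantee over *all* states requires mathematical verification.
    return all(abs(controller(s)) <= V_MAX for s in sampled_states)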
