Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: <$750k grants for General Purpose AI Assurance/Safety Research, published by Phosphorous on June 13, 2023 on The Effective Altruism Forum.
Georgetown University's Center for Security and Emerging Technology (CSET) is accepting applications for AI Safety / AI Assurance research grants.
They are offering up to $750k per accepted project, to be spent over 6-24 months.
A 1-2 page expression of interest is due August 1.
Applicants should be based at an academic institution or nonprofit research organization.
More information here.
From CSET: "We're using 'assurance' here in a broad sense, meaning roughly 'the generation of evidence that an ML system is sufficiently safe for its intended use.'"
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org