Stability AI Joins U.S. Artificial Intelligence Safety Institute Consortium

Advancing our commitment to AI safety and transparency, Stability AI is pleased to share that we are joining leading companies, educational organizations, non-profits, and government agencies in supporting the United States Artificial Intelligence Safety Institute Consortium (AISIC). Established by the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), this landmark initiative will help support the safe development and deployment of AI systems.

Stability AI aims to amplify humanity's potential through generative AI. Today, we develop a range of image, video, audio, and language models to support creators and developers around the world. With appropriate safeguards, we release models openly to help promote transparency and competition in AI. We recognize the emerging challenges posed by AI, and we believe that realizing the potential of these exciting technologies will depend on joint efforts to improve risk evaluation, mitigation, and assurance. We are pleased to share our experience and resources with the AISIC to advance its research program. These contributions build on our existing efforts to promote safety in our own AI research, development, and deployment. For example, Stability AI joined the U.S. Government's Voluntary AI Commitments, participated in the first large-scale public evaluation of models at DEF CON, and joined the AI Alliance as a founding member, among other initiatives, to advance open, safe, and responsible AI.

Together, these efforts are examples of how we are working to maximize the positive impact of AI technology for creators, developers, researchers, communities, and the world at large.

You can learn more about this initiative by visiting the U.S. Department of Commerce website.

