Stable Safety

Advancing safe and transparent AI with open models.

Our Commitment to Safe AI  

We take an open-first, safety-by-design approach with all of our models so that developers around the world can use them as building blocks to create exciting new AI tools and ventures, ensuring that product integrity starts during the early stages of development.

Our Safety Principles

Three key principles guide our work.

Developing safe and responsible AI requires collaboration. We work with other companies, policy makers, educational institutions, and researchers to advance the field of AI.

In the complex AI ecosystem, our models are essential building blocks for innovators, researchers, and businesses. We incorporate robust safeguards at every stage, from model development to deployment, to prevent misuse.

We’ve partnered with Thorn and All Tech Is Human to commit to implementing child safety principles into our technologies and products to guard against the creation and spread of AI-generated child sexual abuse material.

Our Safety Commitments

  • We release AI models as open code, along with the parameters that determine the model’s performance. Open models enable researchers and authorities to “look under the hood” to evaluate the operation of the model. Likewise, open datasets invite robust scrutiny for quality, fairness, and bias.  

  • Our most capable models are subject to our ethical use license, which prohibits the unlawful or exploitative use of the model. 

  • In versions of Stable Diffusion developed exclusively by Stability AI, we apply robust filters on training data to remove unsafe images. By removing that data before it ever reaches the model, we help to prevent users from generating harmful images in the first place. For our Stable LM language models, we provide dataset transparency, and we conduct and fund research into building better datasets and the standards around them.

  • In our hosted applications and APIs, we apply robust filters on prompts and outputs to screen for unsafe content that violates our platform terms of service (a minimal screening sketch follows this list).

  • We are implementing content authenticity standards and watermarking so that users and platforms can identify AI-assisted content generated through our hosted services.**

    By distinguishing AI content, these standards can help ensure that users exercise appropriate care when interacting with it, help to limit the spread of misinformation, and help to protect human artists from unfair mimicry or passing off. They can also help social media platforms assess the provenance of content before amplifying it through their networks, preventing the viral spread of misinformation. A sketch of one watermark-embedding approach follows this list.

    **Coalition for Content Provenance and Authenticity (C2PA) standards, in partnership with the Content Authenticity Initiative (CAI).
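
As a rough illustration of the prompt and output screening mentioned above, the sketch below gates content with a safety classifier before generation and again before results are returned. The `load_safety_classifier` factory, its blocklist heuristic, and the 0.5 threshold are hypothetical placeholders, not Stability AI's production filters; the same gating pattern applies to screening training data before it ever reaches a model.

```python
from dataclasses import dataclass

@dataclass
class SafetyVerdict:
    allowed: bool
    reason: str = ""

def load_safety_classifier():
    """Hypothetical factory: returns a callable scoring content risk in [0, 1]."""
    def score(content: str) -> float:
        blocklist = ("example-banned-term",)  # placeholder heuristic, not a real filter
        return 1.0 if any(term in content.lower() for term in blocklist) else 0.0
    return score

classifier = load_safety_classifier()

def screen(content: str, threshold: float = 0.5) -> SafetyVerdict:
    """Screen a prompt before generation, or an output before it is returned."""
    if classifier(content) >= threshold:
        return SafetyVerdict(False, "content violates platform terms of service")
    return SafetyVerdict(True)

# Usage: gate the incoming prompt; generated outputs are screened the same way.
verdict = screen("a scenic mountain landscape")
print(verdict.allowed)  # True for this benign prompt
```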
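
For watermarking, one openly available mechanism is the invisible-watermark package that ships alongside the public Stable Diffusion reference scripts, which embeds a payload in an image's frequency domain. The sketch below is illustrative only: the payload string is a placeholder, and it does not represent the C2PA manifests referenced in the footnote above. The decode step is what lets downstream platforms check provenance before amplifying content.

```python
import cv2
import numpy as np
from imwatermark import WatermarkEncoder, WatermarkDecoder
from PIL import Image

PAYLOAD = b"SDV2"  # placeholder payload; hosted services may embed richer provenance data

def put_watermark(img: Image.Image) -> Image.Image:
    """Embed an invisible DWT-DCT watermark in a generated image."""
    encoder = WatermarkEncoder()
    encoder.set_watermark('bytes', PAYLOAD)
    bgr = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
    bgr = encoder.encode(bgr, 'dwtDct')
    return Image.fromarray(bgr[:, :, ::-1])

def read_watermark(img: Image.Image) -> bytes:
    """Recover the embedded payload so platforms can flag AI-assisted content."""
    decoder = WatermarkDecoder('bytes', len(PAYLOAD) * 8)
    bgr = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
    return decoder.decode(bgr, 'dwtDct')

# Usage: watermark an output, then check that the payload can be read back.
rng = np.random.default_rng(0)
generated = Image.fromarray(rng.integers(0, 256, (512, 512, 3), dtype=np.uint8))
stamped = put_watermark(generated)
print(read_watermark(stamped))  # b"SDV2" if the payload survives the round trip
```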

To report misuse of Stability AI models, notify our safety team at safety@stability.ai.