The AI Ecosystem
From dataset providers to the social media platforms that distribute and amplify AI-generated content like text, images, and audio, the AI ecosystem is diverse and complex. Our base models represent one component within this interconnected ecosystem.
We believe every participant in the AI ecosystem has a responsibility to implement reasonable safeguards to prevent misuse of AI and public harm.
To help prevent misuse, we implement safeguards across the model development and deployment lifecycle:
- Safety starts from the moment we begin training our models. We have implemented filters to remove unsafe content from our training data; removing that content before it ever reaches the model helps prevent the model from generating unsafe content. We also embed safeguards into our models that help identify AI-generated content downstream and prevent misuse by bad actors.
- We conduct internal testing and, where relevant, external third-party red-teaming for our most capable, state-of-the-art models in order to proactively identify opportunities for safeguards.
- We run filters to intercept unsafe prompts and other unsafe inputs when users interact with models on our platform (a simplified sketch of this kind of input filtering follows this list).
- We monitor attempts to generate unsafe images on our platform and block those images from being generated.
- We apply an imperceptible watermark to images generated on our platform and include watermarking by default in our open image models (an illustrative watermarking sketch also follows this list). We are also working to implement content credentials, which help content platforms identify AI-generated content, and are partnering with Adobe on the Content Authenticity Initiative (CAI); the underlying C2PA standard can further identify which model was used to generate an image.
- We comply with privacy laws globally and ensure appropriate handling and removal of confidential customer data.
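As a rough illustration of the input filtering mentioned above, the sketch below gates a user prompt behind a blocklist check and a safety-classifier score before the prompt reaches a model. The threshold, the blocklist, and the `safety_classifier` callable are illustrative assumptions, not a description of our production pipeline.

```python
# Illustrative only: a minimal input filter that gates a prompt behind a
# blocklist check and a safety-classifier score before it reaches a model.
from typing import Callable, Iterable

UNSAFE_THRESHOLD = 0.8  # illustrative cutoff, not a production value


def is_prompt_allowed(
    prompt: str,
    blocklist: Iterable[str],
    safety_classifier: Callable[[str], float],
) -> bool:
    """Return True if the prompt passes both a lexical and a model-based check.

    `safety_classifier` is a stand-in for any model that scores text for
    unsafe content and returns a probability in [0, 1].
    """
    lowered = prompt.lower()
    # Cheap lexical check first: reject prompts containing blocked terms.
    if any(term in lowered for term in blocklist):
        return False
    # Then a model-based check on the full prompt.
    return safety_classifier(prompt) < UNSAFE_THRESHOLD


if __name__ == "__main__":
    dummy_classifier = lambda text: 0.1  # stand-in classifier that scores everything as low risk
    print(is_prompt_allowed("a watercolor of a lighthouse", ["blocked-term"], dummy_classifier))
```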
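Similarly, the sketch below shows one common way to embed an imperceptible watermark in a generated image, using the open-source `invisible-watermark` package and its DWT-DCT encoder. The payload and file names are placeholders, and our actual watermarking may differ from this example.

```python
# Illustrative only: embedding an invisible watermark with the open-source
# `invisible-watermark` package (pip install invisible-watermark opencv-python pillow).
import cv2
import numpy as np
from imwatermark import WatermarkEncoder
from PIL import Image

PAYLOAD = b"example-model-id"  # placeholder payload; a real deployment would choose its own


def put_watermark(img: Image.Image, payload: bytes = PAYLOAD) -> Image.Image:
    """Return a copy of `img` with an imperceptible DWT-DCT watermark embedded."""
    encoder = WatermarkEncoder()
    encoder.set_watermark("bytes", payload)
    # The encoder operates on BGR uint8 arrays, so convert from PIL's RGB and back.
    bgr = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
    bgr = encoder.encode(bgr, "dwtDct")
    return Image.fromarray(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))


if __name__ == "__main__":
    generated = Image.new("RGB", (512, 512), color="white")  # stand-in for a model output
    put_watermark(generated).save("watermarked.png")
```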
Safeguarding Democracy
With significant elections in over 60 countries affecting 4 billion people this year, we believe safeguarding democracy is of paramount importance. We are investing in research, forming partnerships, and building AI solutions that will help thwart manipulation and ensure election integrity.
Learn about our work with the AI Election Accord.