Reducing bias and improving safety in DALL·E 2


In April, we began previewing the DALL·E 2 research to a limited number of people, which has allowed us to better understand the system's capabilities and limitations and improve our safety systems.

During this preview phase, early users have flagged sensitive and biased images, which have helped inform and evaluate this new mitigation.

We are continuing to research how AI systems, like DALL·E, might reflect biases in their training data, and different ways we can address them.

During the research preview, we have taken other steps to improve our safety systems, including:

  • Minimizing the risk of DALL·E being misused to create deceptive content by rejecting image uploads containing realistic faces and attempts to create the likeness of public figures, including celebrities and prominent political figures.
  • Making our content filters more accurate so that they are more effective at blocking prompts and image uploads that violate our content policy while still allowing creative expression (see the sketch after this list).
  • Refining automated and human monitoring systems to guard against misuse.
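To make these checks concrete, here is a minimal sketch, in Python, of how a layered prompt-and-upload moderation gate might be structured. Everything in it is hypothetical: the names, the keyword blocklist, and the `face_likelihood` stub stand in for the learned classifiers a production system like DALL·E's would actually use.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical policy blocklist; the real filters are learned classifiers,
# not keyword lists.
BLOCKED_TERMS = {"example-banned-term"}


@dataclass
class ModerationResult:
    allowed: bool
    reason: Optional[str] = None


def face_likelihood(image_bytes: bytes) -> float:
    """Placeholder for a realistic-face detector (a trained model in practice)."""
    return 0.0  # stub: this sketch never detects a face


def check_prompt(prompt: str) -> ModerationResult:
    # Block prompts that match the policy filter; allow everything else.
    if set(prompt.lower().split()) & BLOCKED_TERMS:
        return ModerationResult(False, "prompt violates content policy")
    return ModerationResult(True)


def check_upload(image_bytes: bytes, face_threshold: float = 0.5) -> ModerationResult:
    # Reject uploads likely to contain a realistic face, per the policy above.
    if face_likelihood(image_bytes) >= face_threshold:
        return ModerationResult(False, "upload appears to contain a realistic face")
    return ModerationResult(True)


print(check_prompt("a painting of a fox in a snowy forest"))
print(check_upload(b"fake-image-bytes"))
```

In a real deployment, each check would call a trained model rather than a stub, and borderline results could be routed to the human monitoring systems mentioned above rather than being decided automatically.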

These improvements have helped us gain confidence in our ability to invite more users to experience DALL·E.

Expanding access is an important part of deploying AI systems responsibly because it allows us to learn more about real-world use and continue to iterate on our safety systems.
