Throughout my first 2.5 years at OpenAI, I worked on the Robotics team on a moonshot idea: we wanted to teach a single, human-like robot hand to solve the Rubik's cube. It was a tremendously exciting, challenging, and emotional experience. We solved the challenge with deep reinforcement learning (RL), crazy amounts of domain randomization, and no real-world training data. More importantly, we conquered the challenge as a team.
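
As a side note on how domain randomization works: each training episode runs in a simulator whose physical and visual parameters are re-sampled from broad ranges, forcing the policy to become robust to a whole distribution of worlds rather than to any single simulated one. Here is a minimal sketch; the parameter names and ranges below are illustrative assumptions, not the values used in the actual project.

```python
import random

# Illustrative randomization ranges (assumed for this sketch, not the
# actual parameters used in the Rubik's cube project).
RANDOMIZATION_RANGES = {
    "cube_mass": (0.05, 0.20),         # kg
    "finger_friction": (0.7, 1.3),     # friction coefficient multiplier
    "actuator_gain": (0.75, 1.5),      # motor strength multiplier
    "camera_hue_shift": (-0.1, 0.1),   # visual randomization for perception
}

def sample_randomized_env() -> dict:
    """Sample one randomized simulator configuration for an episode."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

# A policy trained across many such samples never sees the same physics
# twice, which encourages robustness that can transfer to the real robot.
for episode in range(3):
    print(episode, sample_randomized_env())
```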

From simulation and RL training to vision perception and hardware firmware, we collaborated closely and cohesively. It was an amazing experiment, and during that time I often thought of Steve Jobs' reality distortion field: when you believe in something so strongly and keep pushing it so persistently, somehow you can make the impossible possible.

Since the beginning of 2021, I have been leading the Applied AI Research team. Managing a team presents a different set of challenges and requires changes in working style. I'm most proud of several projects related to language model safety within Applied AI:

  1. We designed and built a set of evaluation data and tasks to assess the tendency of pre-trained language models to generate hateful, sexual, or violent content.
  2. We created a detailed taxonomy and built a strong classifier to detect unwanted content, as well as the reason why the content is inappropriate (see the sketch after this list).
  3. We are working on various techniques to make the model less likely to generate unsafe outputs.
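
To make item 2 concrete, below is a minimal sketch of a taxonomy-driven content classifier. The taxonomy labels, toy training data, and bag-of-words model are all stand-in assumptions for illustration; the actual system is far more detailed than this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical mini-taxonomy: one "safe" label plus reason categories.
TAXONOMY = ["safe", "hateful", "sexual", "violent"]

# Toy, made-up training examples; a real classifier would be trained on
# a large labeled dataset built around the detailed taxonomy.
train_texts = [
    "have a great day",
    "thanks, this recipe looks lovely",
    "i hate those people, they are subhuman",
    "an explicit sexual description of ...",
    "i am going to hurt you badly",
    "could you help me with my homework",
]
train_labels = ["safe", "safe", "hateful", "sexual", "violent", "safe"]
assert set(train_labels) <= set(TAXONOMY)

# Simple bag-of-words pipeline standing in for the production model.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

def moderate(text: str) -> dict:
    """Flag unwanted content and report the predicted reason category."""
    label = clf.predict([text])[0]
    return {"flagged": label != "safe",
            "reason": None if label == "safe" else label}

print(moderate("i am going to hurt you badly"))
```

The key design point is that the classifier predicts a reason category rather than just a binary flag, so downstream systems can respond differently to, say, violent versus sexual content.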

As the Applied AI team explores the best ways to deploy cutting-edge AI techniques, such as large pre-trained language models, we see how powerful and useful they are for real-world tasks. We are also aware of the importance of deploying these techniques safely, as emphasized in our Charter.
