OpenAI Assembles Elite Team to Tame Superintelligent AI

OpenAI, a leading artificial intelligence research lab, has announced the formation of a new team dedicated to controlling and aligning “superintelligent” AI systems. The group, led by OpenAI’s chief scientist and co-founder, Ilya Sutskever, will focus on developing techniques to prevent future superintelligent AI from going rogue.

The announcement comes in the wake of predictions that AI with intelligence surpassing that of humans could emerge within the next decade. Such AI systems may not inherently be benevolent, necessitating research into methods of control and restriction.

The new team, dubbed the Superalignment team, will have access to 20% of the compute resources OpenAI has secured to date. The team, composed of scientists and engineers from OpenAI’s previous alignment division and researchers from across the company, aims to address the core technical challenges of controlling superintelligent AI over the next four years.

The team’s approach involves building a “human-level automated alignment researcher” that can train AI systems using human feedback, assist in evaluating other AI systems, and conduct alignment research. OpenAI believes that AI can make faster and better alignment research progress than humans.
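To make the “training AI systems using human feedback” idea concrete, the sketch below shows a toy reward model trained on human preference pairs, the general family of technique the article alludes to. Everything here is illustrative and hypothetical (the names, shapes, and training setup are not OpenAI’s actual system); it is a minimal sketch of learning from preference comparisons, assuming PyTorch is available.

```python
# Illustrative sketch only: a toy reward model trained on human preference
# pairs. All names and shapes are hypothetical, not OpenAI's actual system.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a response embedding; higher means 'preferred by humans'."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(model, chosen, rejected):
    # Bradley-Terry style objective: push the score of the human-preferred
    # response above the score of the rejected one.
    return -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()

# Toy training loop on random "embeddings" standing in for model outputs.
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
chosen, rejected = torch.randn(64, 16), torch.randn(64, 16)
for _ in range(100):
    opt.zero_grad()
    loss = preference_loss(model, chosen, rejected)
    loss.backward()
    opt.step()
```

In practice the learned reward signal would then guide further training of the AI system being aligned; the sketch stops at the preference-learning step for brevity.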

However, the team acknowledges the limitations and potential risks of this approach, including the possibility that inconsistencies, biases, or vulnerabilities in the AI used for evaluation could be scaled up. Despite these challenges, Sutskever and his team are optimistic about the potential of their work and plan to share their findings broadly, contributing to the alignment and safety of non-OpenAI models as well.

