Former OpenAI Chief Scientist Launches ‘Safe Superintelligence Inc.’ to Tackle AI Safety

Ilya Sutskever, the former chief scientist at OpenAI, has announced the launch of his new company, Safe Superintelligence Inc. (SSI), dedicated solely to the pursuit of building a safe superintelligent artificial intelligence system.

In a bold statement posted on X, Sutskever declared that “building safe superintelligence (SSI) is the most important technical problem of our time.” Recognizing the immense potential and risks of advanced AI systems, SSI aims to be the world’s first “straight-shot SSI lab,” with a singular goal and a single product: a safe superintelligence.

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” Sutskever emphasized. “Our team, investors, and business model are all aligned to achieve SSI.”

The company’s approach is to advance capabilities and safety in tandem, tackling them as technical problems to be solved through revolutionary engineering and scientific breakthroughs. Sutskever stated, “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”

SSI’s singular focus means no distractions from management overhead or product cycles, and its business model ensures that safety, security, and progress are insulated from short-term commercial pressures.

Headquartered in Palo Alto, with an additional office in Tel Aviv, SSI is an American company drawing on the deep pools of technical talent in both locations. The company is assembling a lean team of the world’s best engineers and researchers dedicated exclusively to the pursuit of safe superintelligence.

The launch of Safe Superintelligence Inc. marks a significant milestone in the push for the safe and responsible development of advanced AI systems. As the race toward artificial general intelligence (AGI) and superintelligence intensifies, Sutskever’s venture aims to ensure that safety remains a paramount consideration in the scaling of transformative AI capabilities.

Anika V
