Ilya Sutskever, co-founder and former chief scientist of OpenAI, has announced the launch of his new venture, Safe Superintelligence. The company aims to develop advanced artificial intelligence with safety and ethical considerations as its central priority, amid the rapid expansion of generative AI technologies.
A statement posted on the company’s website describes Safe Superintelligence as an American firm headquartered in Palo Alto, California, with additional operations in Tel Aviv. Sutskever emphasized the company’s commitment to building a secure AI ecosystem, distinguishing it from the tech giants currently vying for dominance in the AI sector.
“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the statement read.
Joining Sutskever as co-founders of Safe Superintelligence are Daniel Levy, a former researcher at OpenAI, and Daniel Gross, co-founder of Cue and a former AI lead at Apple. The three aim to combine their expertise to drive innovation in AI while prioritizing safety.
Sutskever left OpenAI in May, after a period of upheaval in which CEO Sam Altman was temporarily ousted and subsequently reinstated in a leadership shake-up late last year.
Safe Superintelligence enters the competitive AI landscape with a strong pedigree, promising a focused approach to the responsible development of artificial intelligence. As the company gears up for its inaugural projects, industry watchers are observing its potential to shape the future of AI research and deployment globally.