OpenAI is losing another lead safety researcher, Lilian Weng


Lilian Weng, a senior safety researcher at OpenAI, announced on Friday that she is leaving the startup. Weng had served as vice president of research and safety since August, and before that was head of OpenAI’s safety systems team.

“After 7 years at OpenAI, I feel ready to reset and explore something new,” Weng said in a post on X. Weng said her last day will be November 15, but did not specify where she is going next.

“I have made the very difficult decision to leave OpenAI,” Weng said in the post. “Given what we have achieved, I am very proud of everyone in the Safety Systems team and have the utmost confidence that the team will continue to thrive.”

Weng’s departure is the latest in a long string of AI safety researchers, policy researchers, and other executives who have left the company in the past year, many of whom have accused OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike — leaders of OpenAI’s now-disbanded Superalignment team, which tried to develop ways to steer superintelligent AI systems — who also left the startup this year to work on AI safety elsewhere.

Weng first joined OpenAI in 2018, according to her LinkedIn profile. She worked on the startup’s robotics team, which built a robotic hand that could solve a Rubik’s Cube — a task that took two years to complete, according to her post.

As OpenAI began to focus more on its GPT models, so did Weng. She moved on to help build the startup’s applied AI research team in 2021. After the launch of GPT-4, Weng was tasked in 2023 with creating a dedicated team to build safety systems for the startup. Today, OpenAI’s safety systems unit includes more than 80 scientists, researchers, and policy experts, according to Weng’s post.

That’s a lot of people working on AI safety, but several departing employees have raised concerns about OpenAI’s commitment to safety as it tries to build increasingly powerful AI systems. Miles Brundage, a longtime policy researcher, left the startup in October and announced that OpenAI was disbanding its artificial general intelligence (AGI) readiness team, which he had advised. On the same day, The New York Times profiled a former OpenAI researcher, Suchir Balaji, who said he left OpenAI because he thought the startup’s technology would bring more harm than good to society.

OpenAI told TechCrunch that safety executives and researchers are working on a transition plan to replace Weng.

“We greatly value Lilian’s contributions to advanced safety research and building rigorous technical safeguards,” an OpenAI spokesperson said in an emailed statement. “We are confident that the Safety Systems team will continue to play a key role in ensuring that our systems are safe and reliable, serving hundreds of millions of people around the world.”

Other executives who have left OpenAI in recent months include CTO Mira Murati, Chief Research Officer Bob McGrew, and Vice President of Research Barret Zoph. In August, prominent researcher Andrej Karpathy and co-founder John Schulman also announced they were leaving the startup. Some of these people, including Leike and Schulman, went to OpenAI competitor Anthropic, while others went on to start their own ventures.
