SEATTLE, United States — OpenAI CEO Sam Altman defended his company’s AI technology as safe for widespread use, as concerns mount over potential risks and lack of proper safeguards for ChatGPT-style AI systems.
Altman’s remarks came at a Microsoft event in Seattle, where he spoke to developers just as a new controversy erupted over an OpenAI AI voice that closely resembled that of the actress Scarlett Johansson.
The CEO, who rose to global prominence after OpenAI released ChatGPT in 2022, is also grappling with questions about the safety of the company’s AI following the departure of the team responsible for mitigating long-term AI risks.
“My biggest piece of advice is this is a special time and take advantage of it,” Altman told the audience of developers seeking to build new products using OpenAI’s technology.
“This is not the time to delay what you’re planning to do or wait for the next thing,” he added.
OpenAI is a close partner of Microsoft and provides the foundational technology, primarily the GPT-4 large language model, for building AI tools.
Microsoft has jumped on the AI bandwagon, pushing out new products and urging users to embrace generative AI’s capabilities.
“We kind of take for granted” that GPT-4, while “far from perfect…is generally considered robust enough and safe enough for a wide variety of uses,” Altman said.
Altman insisted that OpenAI had put in “a huge amount of work” to ensure the safety of its models.
“When you take a medicine, you want to know what’s going to be safe, and with our model, you want to know it’s going to be robust to behave the way you want it to,” he added.
However, questions about OpenAI’s commitment to safety resurfaced last week when the company dissolved its “superalignment” group, a team dedicated to mitigating the long-term dangers of AI.
In announcing his departure, team co-leader Jan Leike criticized OpenAI for prioritizing “shiny new products” over safety in a series of posts on X (formerly Twitter).
“Over the past few months, my team has been sailing against the wind,” Leike said.
“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”
This controversy was swiftly followed by a public statement from Johansson, who expressed outrage over a voice used by OpenAI’s ChatGPT that sounded similar to her voice in the 2013 film “Her.”
The voice in question, called “Sky,” was featured last week in the release of OpenAI’s more human-like GPT-4o model.
In a short statement on Tuesday, Altman apologized to Johansson but insisted the voice was not based on hers.