
OpenAI is continuing to expand its ambitions in artificial intelligence voice technology despite previously warning about the dangers associated with advanced voice-cloning systems.
The company, which two years ago publicly acknowledged it had developed highly realistic voice replication software but chose not to release it widely due to safety concerns, has now quietly acquired Weights.gg, a small startup whose tools could clone the voices of celebrities and other public figures.
The acquisition, which has not been formally announced, signals that OpenAI remains deeply interested in developing sophisticated AI voice capabilities even as concerns grow globally over copyright infringement, misinformation, identity theft and misuse of synthetic media.
According to people familiar with the matter, OpenAI purchased both the intellectual property and the small employee team behind Weights.gg earlier this year.
The startup shut down its services in March, fueling speculation within the artificial intelligence industry that a larger technology company had absorbed its operations.
Neither OpenAI nor Weights.gg publicly disclosed the financial details of the transaction.
People with knowledge of the acquisition said the startup’s staff has since been distributed across various teams inside OpenAI rather than continuing to operate independently.
The move highlights how major artificial intelligence firms are increasingly consolidating smaller startups specializing in niche AI technologies, particularly in rapidly evolving areas such as synthetic voice generation.
Weights.gg gained attention in AI communities through its consumer application called Replay, which allowed users to create and share AI-generated voice models.
The platform operated almost like a social network centered around voice synthesis, enabling users to upload, exchange and experiment with cloned voices generated through machine learning algorithms.
Many of the voice models on the platform recreated the voices of globally recognized celebrities, musicians, fictional characters and political leaders.
Examples reportedly included AI-generated versions of actor Samuel L. Jackson, singer Taylor Swift, rapper Kanye West and members of the K-pop group Blackpink.
The platform also featured cloned voices based on copyrighted animated characters including Bugs Bunny and Daffy Duck, as well as political figures such as President Donald Trump and former President Joe Biden.
The popularity of these AI-generated voice models reflected growing public fascination with generative AI technologies capable of reproducing human speech patterns with remarkable realism.
At the same time, the technology intensified legal and ethical concerns over digital identity rights and the unauthorized use of likenesses.
Several celebrities have already objected publicly to AI voice replication involving their identities.
Samuel L. Jackson has previously criticized unauthorized synthetic reproductions of his voice. Taylor Swift, meanwhile, recently moved to strengthen legal protections for her image and voice by filing trademark applications with the United States Patent and Trademark Office.
The legal landscape surrounding AI-generated voices remains uncertain and rapidly evolving.
Courts, regulators and technology companies are still attempting to determine how existing intellectual property laws apply to synthetic voices created through artificial intelligence systems.
Experts say the issue is becoming increasingly urgent as AI-generated audio becomes more convincing and more widely accessible to ordinary users.
OpenAI itself has repeatedly acknowledged the risks posed by advanced voice-cloning technologies.
In a widely discussed blog post published two years ago, OpenAI revealed it had developed software capable of generating highly realistic synthetic voices.
At the time, the company said the technology was so powerful that it decided against broad public release out of concern it could be misused for fraud, impersonation or misinformation campaigns.
The company warned that realistic voice cloning could potentially enable scams involving fake phone calls, fraudulent political messaging or unauthorized impersonation of individuals.
Despite those concerns, OpenAI has continued investing heavily in voice-based artificial intelligence systems.
Industry analysts say the acquisition of Weights.gg demonstrates how voice interaction is becoming a central component of the company’s broader strategy for artificial intelligence products.
Rather than releasing unrestricted public cloning tools similar to those offered by Weights.gg, OpenAI appears to be focusing on integrating voice capabilities into controlled commercial products and services.
This month, the company expanded access to its voice technology through its application programming interface, or API, allowing third-party developers to incorporate AI-powered voice functions into external applications.
Those tools could support a wide range of commercial uses including live translation services, AI customer support systems and voice-controlled digital assistants.
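As a rough illustration of how a developer might wire voice generation into an application, the sketch below assembles the kind of JSON request a text-to-speech endpoint typically expects. The field names, model identifier and endpoint conventions here are illustrative assumptions for the purpose of the example, not OpenAI's documented schema.

```python
import json


def build_tts_request(text: str, voice: str = "narrator", fmt: str = "mp3") -> dict:
    """Assemble a JSON payload for a hypothetical text-to-speech API.

    The field names (model, input, voice, response_format) mirror common
    TTS APIs but are placeholders, not any vendor's documented interface.
    """
    if not text:
        raise ValueError("text must be non-empty")
    return {
        "model": "example-tts-model",  # hypothetical model identifier
        "input": text,                 # the text to be spoken
        "voice": voice,                # a preset voice, not a cloned one
        "response_format": fmt,        # desired audio container format
    }


# Build a payload a client library would then POST to the provider.
payload = build_tts_request("Welcome aboard. Navigation is ready.")
print(json.dumps(payload, indent=2))
```

In a controlled commercial deployment of the kind described above, the provider rather than the end user decides which voices are available, which is one way such APIs differ from open cloning tools.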
OpenAI has also steadily improved the voice interaction capabilities inside ChatGPT itself.
The company recently integrated ChatGPT into Apple’s CarPlay platform, enabling drivers to communicate with the AI assistant through spoken commands while operating vehicles.
Voice functionality has become one of the fastest-growing areas of competition among artificial intelligence developers as companies race to make AI systems feel more natural and conversational.
Many experts believe future AI platforms will rely heavily on spoken interaction rather than traditional typed text interfaces.
The acquisition of Weights.gg therefore reflects broader industry trends toward multimodal AI systems capable of processing and generating text, audio, images and video simultaneously.
Still, the move also revives questions about OpenAI’s broader approach to copyright and intellectual property.
The company has already faced significant legal scrutiny over its use of copyrighted material in training artificial intelligence systems.
Last year, OpenAI encountered criticism after releasing Sora, an AI video-generation application that allowed users to create videos featuring recognizable copyrighted characters without explicit permission.
The controversy triggered backlash from parts of Hollywood and intensified negotiations between technology firms and entertainment companies over licensing agreements and content protections.
OpenAI has since attempted to improve relationships with media and entertainment industries.
The company recruited Charles Porch, a well-known Hollywood relationship manager often referred to as a “celebrity whisperer,” as part of efforts to repair ties with influential figures concerned about AI-generated content.
At the same time, OpenAI has scaled back some consumer-facing experimental projects as it prioritizes products capable of generating sustainable revenue.
The company reportedly shut down the standalone Sora application this year while shifting resources toward enterprise products and commercial partnerships ahead of a possible public offering later this year.
Industry observers say OpenAI’s evolving strategy suggests the company is becoming more cautious about releasing powerful AI tools directly to consumers without safeguards.
Instead, it increasingly favors tightly controlled deployments through paid APIs, partnerships and business-focused services.
That approach may help the company reduce reputational and legal risks while still monetizing its underlying technologies.
The Weights.gg deal thus represents both an expansion of OpenAI's technical capabilities and a test of how the company balances innovation with safety concerns.
Artificial intelligence voice cloning remains one of the most controversial areas of generative AI because of its ability to blur distinctions between authentic and synthetic communication.
Researchers have warned that increasingly realistic synthetic voices could undermine trust in audio evidence, facilitate fraud and accelerate the spread of misinformation during elections or international crises.
Governments worldwide are now beginning to explore possible regulations surrounding AI-generated media.
Several countries are considering laws requiring synthetic content to carry labels or digital watermarks identifying it as artificially generated.
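To make the labeling idea concrete, the sketch below shows one simplified way a provenance label could be attached to a generated audio clip: a manifest that declares the content synthetic and binds itself to the exact bytes via a hash. This is a minimal stand-in for real content-credential schemes such as C2PA, not an implementation of any proposed law or standard.

```python
import hashlib
import json


def label_synthetic_audio(audio_bytes: bytes, generator: str) -> dict:
    """Produce a provenance manifest for a synthetic audio clip.

    A simplified stand-in for content-credential schemes: the label
    records that the clip is AI-generated, names the generator, and
    includes a SHA-256 digest tying the label to these exact bytes.
    """
    return {
        "synthetic": True,
        "generator": generator,
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
    }


def verify_label(audio_bytes: bytes, manifest: dict) -> bool:
    """Check that a manifest still matches the audio it claims to label."""
    return manifest.get("sha256") == hashlib.sha256(audio_bytes).hexdigest()


clip = b"\x00\x01example-audio-bytes"
manifest = label_synthetic_audio(clip, "example-voice-model")
print(json.dumps(manifest))
```

A sidecar manifest like this is easy to strip, which is why regulators and standards bodies are also weighing watermarks embedded in the audio signal itself.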
Technology companies including OpenAI have also proposed voluntary standards designed to reduce harmful misuse of synthetic media tools.
OpenAI has repeatedly stated that it supports stronger safety practices around voice-cloning technology and intends to limit access to the most advanced capabilities.
According to earlier company statements, OpenAI has no immediate plans to release unrestricted voice-cloning systems to the broader public.
Instead, access appears likely to remain limited to carefully selected partners and developers operating within controlled commercial environments.
Even so, the acquisition of Weights.gg demonstrates that OpenAI continues to view voice AI as a strategically important technology for the future of human-computer interaction.
As competition intensifies among major artificial intelligence firms, realistic voice generation may soon become as central to AI platforms as text generation already is today.
That evolution could reshape industries ranging from entertainment and customer service to education, transportation and digital communications.
But it will also force governments, courts and technology companies to confront increasingly difficult questions about identity, consent and ownership in an era where artificial intelligence can reproduce not just words, but human voices themselves with astonishing realism.