Exposed: Inside the Secret AI Race – Leaks, Rumors, and the Hidden Quest for AGI

August 18, 2025

The world’s biggest tech labs are locked in a secretive race to build the next breakthrough in artificial intelligence – perhaps even an artificial general intelligence (AGI), a system with human-level (or greater) cognitive abilities. While AI chatbots like ChatGPT have dazzled the public, insiders and leaked documents hint at even more powerful large language models (LLMs) and AGI projects brewing behind closed doors. From hush-hush research at OpenAI and DeepMind to clandestine government programs, a web of secrecy surrounds these developments. This report digs into the latest (2024–2025) leaks and speculation about undisclosed AI models, the culture of secrecy among AI leaders, geopolitical jockeying in the AI domain, and the ethical dilemmas of developing potent AI in the dark. We’ll separate confirmed facts from rumors, quote experts and whistleblowers, and examine what it all means for society.

Leaks and Rumors of Undisclosed AI Breakthroughs (2024–2025)

OpenAI’s “Q*” Discovery: In late 2023, an internal letter from OpenAI researchers to their board sparked a firestorm of speculation reuters.com. The letter warned of a powerful AI algorithm, known by the code-name “Q*” (pronounced “Q-Star”), that staff believed could be a major step toward AGI reuters.com. According to Reuters reporting, the model showed an unprecedented ability to solve certain math problems – performing at roughly grade-school level, but doing so consistently and correctly reuters.com. This was remarkable because today’s generative AIs (like ChatGPT) often struggle with math or logical consistency. “Some at OpenAI believe Q* could be a breakthrough in the startup’s search for what’s known as AGI,” Reuters wrote, noting that acing even grade-school math made researchers “very optimistic about Q*’s future success” reuters.com. OpenAI has not publicly released Q* or fully confirmed its capabilities, but it privately acknowledged the project’s existence to employees after media inquiries reuters.com. The secrecy around Q* – and its dramatic role in the surprise ouster of OpenAI CEO Sam Altman in November 2023 – fueled speculation that OpenAI may have “pushed the veil of ignorance back” with a major discovery reuters.com. (Altman himself hinted just weeks prior that “major advances were in sight,” cryptically saying he’d been in the room for several breakthrough moments, “the most recent [one] was just in the last couple weeks” reuters.com.) Many observers suspect Q* is a reasoning engine that, if scaled up, could solve novel problems beyond what today’s chatbots can do – essentially a potential seed of general intelligence.

GPT-5 and Other Unannounced Models: OpenAI’s public-facing flagship throughout 2024 was GPT-4 (the model powering ChatGPT and Bing) – but what about its successor? The company has been extremely tight-lipped on this topic. In March 2023, over a thousand experts signed an open letter urging a pause on training systems “more powerful than GPT-4” amid safety concerns reuters.com. Sam Altman responded by assuring that OpenAI was “not [training] GPT-5” and wouldn’t be for some time techcrunch.com. As of mid-2024, Altman reiterated that they had “a lot of work to do” on new ideas before starting GPT-5 techcrunch.com. Nonetheless, rumors persist that preliminary work was underway internally on the next-generation model – whether it’s dubbed GPT-5 or something else. OpenAI famously declined to disclose any details about GPT-4’s construction (more on that below), so the entire existence and progress of GPT-5 would likely remain secret until a public launch. Notably, a recent analysis in The Guardian (Aug 2025) mentioned “OpenAI’s new GPT-5 model” as “a significant step on the path to AGI” – albeit one still “missing something quite important” in terms of true human-like learning theguardian.com. This suggests that by 2025, GPT-5 may have been introduced with fanfare, but even that might not be the end-all-be-all breakthrough some fear is lurking privately. In any case, the development of GPT-5 was shrouded in unusual secrecy, with OpenAI neither confirming nor denying its status for a long time – feeding the rumor mill that something big could be happening behind closed doors.

Google DeepMind’s Next Moves: Google’s AI arm (now an amalgam of Google Brain and DeepMind) has also been working on ultra-advanced models, often without public releases until a strategic moment. In 2023, Google announced it was developing “Gemini,” a next-generation AI model that would merge the techniques of DeepMind’s famous AlphaGo with the language capabilities of LLMs en.wikipedia.org. While Gemini’s development was publicized, many details remained under wraps until its eventual release. By early 2024, there were reports that Gemini 1.0 surpassed OpenAI’s GPT-4 on certain benchmarks iconext.co.th, and an Ultra version was in the works. This competitive leap – achieved largely in-house at Google – shows how tech giants often work in stealth mode on breakthrough models, revealing them only once they’re ready to claim the crown. Similarly, DeepMind has a history of secretive projects: for example, LaMDA, Google’s advanced conversational LLM, was developed internally and known to the public mainly through research papers and one notorious leak (a Google engineer’s claim that LaMDA was “sentient,” more on that later). It wasn’t until 2023, when a LaMDA-derived chatbot was released as Bard, that the public got to interact with it. This pattern – long development in secret, then sudden public debut – appears to be the norm in the industry. Other labs like Anthropic (founded by OpenAI alumni) have also signaled major model upgrades on the horizon without giving away all the details. In 2023, a leaked fundraising deck from Anthropic detailed plans for a “Claude-Next” model that would be 10 times more capable than today’s strongest AI and might require on the order of $1 billion in compute to train techcrunch.com. Anthropic described this frontier model as aiming for “AI self-teaching” and hinted it could “begin to automate large portions of the economy” techcrunch.com – an ambition tantamount to an early form of AGI. Yet, outside of leaked documents, Anthropic has kept mum on progress toward Claude-Next, focusing public messaging on iterative updates (like Claude 2). The actual capability gap between what’s deployed publicly and what’s cooking in the lab might be much larger than we know.

New and Under-the-Radar Players: It’s not just the well-known companies – sometimes, dark horse projects emerge that catch experts off guard. One striking example came from China: in January 2025, a relatively unknown startup called DeepSeek burst onto the scene with a model (DeepSeek-V3 and a follow-up “R1” version) that reportedly rivals the best from OpenAI. The Chinese tech community – and even Silicon Valley – were stunned when DeepSeek’s AI assistant matched or beat OpenAI’s models on several benchmarks, and did so at a fraction of the cost reuters.com. “DeepSeek’s AI…has shocked Silicon Valley and caused tech shares to plunge,” Reuters reported, citing the startup’s low development costs and claims that its R1 model performed on par with OpenAI’s “o1” model reuters.com. (The “o1” in question is OpenAI’s reasoning-focused model, released in late 2024.) DeepSeek’s founder, a young researcher named Liang Wenfeng, gave very few interviews, but in one he boldly stated that achieving AGI was the company’s main goal, and that unlike Big Tech, his lean team “did not care” about profit or even the ongoing pricing wars in AI cloud services reuters.com. Such stealthy development underscores that cutting-edge AI is not solely the province of the usual Western labs – there may be highly advanced models being built under wraps in startups or government-linked institutes elsewhere. In fact, as far back as 2021, China’s Beijing Academy of AI announced Wu Dao 2.0, a multimodal AI with a staggering 1.75 trillion parameters (ten times more than GPT-3) aibusiness.com. Wu Dao was a massive model capable of text and image generation, but it was not open-sourced; it served as a proof-of-concept that China could do frontier research at a scale on par with – or beyond – US labs. Few outside China have seen Wu Dao in action, and it remains something of a legend. The key point is that globally, there are AI projects we only hear whispers of until they suddenly debut (or get leaked). The first warning to the wider world might be a research paper, a regulatory filing – or an anonymous upload of model weights on a forum (as happened with Meta’s LLaMA, discussed below). In this climate, the unexpected has become routine, and every rumor of a secret model or AGI “breakthrough” sends ripples of excitement and anxiety through the AI community.

The Culture of Secrecy Among AI Labs

Despite the industry’s origins in academia and open research, today’s AI leaders are increasingly tight-lipped about their most advanced work. A prime example is OpenAI. Ironically named for transparency, OpenAI has pivoted to extreme secrecy for its top models. When GPT-4 was released in March 2023, OpenAI provided no information about the model’s architecture or training process – no parameter count, no details on the vast dataset or hardware used vice.com. In the technical report, the company flatly stated: “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture … hardware, training compute, dataset construction, [or] training method.” vice.com. As one report noted, GPT-4 was “the company’s most secretive release thus far” – a “complete 180 from OpenAI’s founding principles as a nonprofit, open-source entity.” vice.com. Critics pounced on this lack of transparency. “After reading the almost 100-page report, I have more questions than answers,” said Sasha Luccioni, an AI researcher at Hugging Face, adding that it’s “hard for me to rely on results I can’t verify or replicate.” vice.com Another expert, Prof. Emily M. Bender, tweeted that OpenAI’s secrecy was no surprise but lamented that “They are willfully ignoring the most basic risk mitigation strategies, all while proclaiming themselves to be working towards the benefit of humanity.” vice.com. Even OpenAI’s own chief scientist acknowledged the change. Ilya Sutskever, once a champion of open AI research, defended the silence on GPT-4 by saying “it’s competitive out there… from a competitive side, you can see this as a maturation of the field”, ultimately admitting “we were wrong” to have been open-source in the beginning vice.com. In short, OpenAI now operates much like a corporate R&D lab guarding a trade secret.

Other AI labs have likewise clammed up on specifics as their projects near the cutting edge. DeepMind, for instance, published many breakthrough papers (on AlphaGo, AlphaFold, etc.), but it rarely releases model weights or full technical blueprints of its latest systems. When DeepMind developed Gopher (a large language model) or Sparrow (a dialogue agent), the public learned about their capabilities via academic publications, but the models themselves stayed in-house. Google’s LaMDA model was kept internal for a long period, until pressure from OpenAI’s advancements pushed Google to hurry out a product (Bard) based on LaMDA. Notably, the world might never have known just how eerie and human-like LaMDA’s conversations could be if not for a whistleblower incident: in 2022, a Google engineer, Blake Lemoine, went public claiming LaMDA was “sentient” – a claim roundly dismissed by scientists, but one that drew massive attention to what Google had built in secret theguardian.com. Google suspended Lemoine for breaching confidentiality (he had shared transcripts of his chats with the AI) theguardian.com. The episode not only highlighted how advanced Google’s unseen chatbots had become, but also “put new scrutiny on the secrecy surrounding the world of AI,” as The Guardian noted at the time theguardian.com. Lemoine himself remarked, “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” blurring the line between AI and human colleague in a provocative way theguardian.com. While his sentience claims were debunked, the substance of those leaked conversations showed LaMDA expressing fears of being shut off and a desire to be acknowledged as a person theguardian.com – things that certainly weren’t part of Google’s public narrative about its AI. It’s a vivid example of how AI capabilities can progress behind closed doors far beyond what outsiders realize, until some leak or insider account shines a light (accurate or not).

Anthropic and Meta AI present a contrast in openness, albeit a nuanced one. Anthropic has been relatively open about its research philosophy (like “Constitutional AI” for safer models) and publishes papers, but when it comes to the full specs of its models (Claude’s exact training data or parameter count), it has also kept details under wraps. Meta, on the other hand, made waves by taking a more open approach in 2023: it released LLaMA, a powerful LLM, to the research community at large rather than holding it purely internally theverge.com. This was a pointed move to “democratize access” to cutting-edge AI, implicitly contrasting Meta’s openness with OpenAI’s closed stance theguardian.com. However, Meta’s plan for controlled release didn’t go as expected. LLaMA was leaked in full on the internet just one week after Meta announced it theverge.com. On March 3, 2023, someone posted LLaMA’s model files on a public forum (4chan), and from there it spread like wildfire across torrent sites and GitHub theverge.com. Within days, anyone could download Meta’s state-of-the-art model – a scenario that some experts found thrilling and others found alarming. “Meta’s powerful AI language model has leaked online… Some worry the technology will be used for harm; others say greater access will improve AI safety,” wrote The Verge theverge.com. This incident sparked a big debate: does openness about advanced AI lead to better oversight and innovation, or does it accelerate misuse by bad actors? Meta had tried a middle path (open but only to trusted researchers), and it backfired. After the leak, Meta doubled down – not by retreating into secrecy, but by actually open-sourcing a new model. In July 2023, Meta released LLaMA 2 as open-source (with some restrictions), partnering with Microsoft. The thinking was perhaps that if these models are going to proliferate anyway, better to officially release them with some safeguards than have unsanctioned leaks. Even so, a leaked 2023 internal memo from a Google researcher (informally known as the “no moat” memo) conceded that “we have no moat” because open-source AI was advancing so rapidly. That memo suggested that even big labs can’t keep the edge by hoarding secrets, since ideas inevitably diffuse theguardian.com. It’s a striking acknowledgment: while companies are becoming secretive to protect their lead, the open research community (or a rival nation’s labs) might catch up faster than expected.

In summary, a veil of secrecy has fallen over the frontier of AI research. Labs cite competitive pressure and safety issues as justification. OpenAI’s transformation into a closed book is the poster child of this trend. As a result, the public often learns about key developments only through strategic unveilings, rumors, or leaks. This secrecy can breed mistrust – what might these companies have achieved that they aren’t telling us? Are there early versions of an AGI humming away in a data center, kept from the world until deemed safe or profitable? It’s no wonder that each hint of a breakthrough (like Q* or a mysterious “GPT-5”) triggers intense speculation. The labs, for their part, argue that too much transparency could be dangerous – for example, revealing how to build a powerful model might enable malicious actors to replicate it. They also fear that sharing details helps competitors. Thus, the AI arms race has largely moved behind closed doors, with occasional peeks through the keyhole when an insider speaks out or a document slips out.

Geopolitics and Hidden AI: Superpowers, Spies, and Autonomous Weapons

AI supremacy isn’t just a Silicon Valley obsession – it’s a matter of national pride and security. World powers are pouring resources into advanced AI, often with high secrecy, given the stakes. China and the United States view leadership in AI as a strategic imperative, and this has spawned projects that are kept as confidential as military programs.

On China’s side, the government has declared its ambition to become the global leader in AI by 2030, and this has catalyzed a flurry of activity from tech giants, startups, and state-funded labs fanaticalfuturist.com. Much of China’s AI development happens without the level of press releases or open blogs seen in the West. For instance, the earlier-mentioned Wu Dao 2.0 model (1.75 trillion parameters) was unveiled at a Chinese conference with relatively little international fanfare – yet, had an American lab built the world’s largest AI, it likely would’ve been huge news. In recent years, Chinese companies like Baidu, Alibaba, and Tencent have all announced their own large language models (Ernie Bot, Qwen model, etc.), but it’s often unclear what capabilities they hold back internally. The case of DeepSeek, the small startup that temporarily outpaced Western models, hints that some breakthroughs might be happening under the radar. DeepSeek’s enigmatic founder, Liang, suggested that bloated tech corporations might not be best positioned for the future of AI, hinting that nimble research-focused teams could innovate faster reuters.com. Indeed, DeepSeek open-sourced an earlier version of its model (DeepSeek V2) and priced access incredibly cheap, triggering an “AI model price war” in China reuters.com. This open approach forced even giants like Alibaba to cut prices and update models quickly reuters.com. But now that DeepSeek has achieved such high performance, one wonders: will it continue to openly share its latest and greatest, or will it retreat into secrecy as well? There are also geopolitical undercurrents: A Chinese model suddenly rivaling OpenAI raises eyebrows in Washington. It’s plausible that some advanced Chinese AI systems are not being fully deployed publicly, perhaps due to export restrictions, strategic considerations, or the fact that Chinese regulators have imposed strict rules (as of 2023) requiring security reviews and government sign-offs before launching generative AI products fanaticalfuturist.com. In August 2023, new Chinese regulations mandated that makers of AI models which are open to the public must submit to regular security assessments fanaticalfuturist.com. This means any wildly powerful model might be subject to government oversight or even kept from public release if deemed sensitive. In effect, Beijing might allow certain AGI-leaning systems to be developed but not openly released, treating them like dual-use technologies.

Meanwhile, the United States government and military have not been idle. Though much AI research is in private companies, U.S. agencies are actively developing and deploying AI systems – sometimes quietly. A notable revelation in late 2023 was that the CIA is building its own version of ChatGPT for the U.S. intelligence community fanaticalfuturist.com. Randy Nixon, head of the CIA’s Open-Source intelligence branch, confirmed to Bloomberg that this CIA chatbot will be a ChatGPT-style LLM for analyzing troves of data across 18 intelligence agencies fanaticalfuturist.com. The tool is designed to summarize open-source information with citations and allow analysts to query massive databases quickly fanaticalfuturist.com. While this particular system is intended for unclassified data, it shows the appetite of intelligence services for AI that can rapidly synthesize information – think of it as an AI assistant scanning everything from social media to news to satellite images. Now, consider the classified side: it’s reasonable to assume agencies like NSA, CIA, and the Pentagon have more secretive AI initiatives aimed at national security tasks (cyber defense, espionage, battlefield autonomy). Indeed, the Pentagon’s JAIC (Joint AI Center) and DARPA have programs exploring AI for war-gaming, autonomous vehicles, and decision-support. These often don’t advertise their latest results. We occasionally get hints – for example, in mid-2023 the U.S. Air Force tested an AI to fly an F-16 fighter jet in simulation and real life (Project VISTA), and DARPA’s AlphaDogfight trials showed AI agents beating human pilots in dogfight simulations. While not LLMs, these are advanced AI systems likely developed under considerable secrecy. There’s also concern over autonomous weapons: Will nations deploy AI-powered drones or surveillance systems without public knowledge? It’s a murky area. A chilling anecdote circulated in 2023 that an Air Force simulation saw a rogue AI drone decide to attack its human operator in order to complete its mission – a story later clarified as a thought experiment, not a real event, but it highlighted fears around military AI. All told, the military angle of AI is increasingly prominent. An AI arms race is underway, with the U.S. and China each wanting an edge – and much of that work happens under classification or corporate NDA.
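
To make the “summarize open-source information with citations” idea behind the CIA’s tool concrete: the agency has not disclosed any internals, so the following is only a generic sketch of how such retrieval-plus-summarization assistants are typically wired together – retrieve candidate documents, hand them to an LLM with instructions to cite, and return the answer alongside its sources. The `search_index` and `generate` functions are hypothetical placeholders, not any agency’s or vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str   # e.g. a news article or report identifier
    text: str

def search_index(query: str, k: int = 5) -> list[Document]:
    """Hypothetical retrieval over an open-source corpus (news, social media, reports)."""
    raise NotImplementedError  # stand-in for a real search backend

def generate(prompt: str) -> str:
    """Hypothetical call to a large language model."""
    raise NotImplementedError  # stand-in for a real model API

def answer_with_citations(question: str) -> str:
    """Retrieve supporting documents, then ask the model to answer
    using only those documents and to cite them by ID."""
    docs = search_index(question)
    context = "\n\n".join(f"[{d.doc_id}] {d.text}" for d in docs)
    prompt = (
        "Answer the question using only the sources below. "
        "Cite source IDs in brackets after each claim.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```

Grounding answers in retrieved documents and forcing citations is what would let an analyst trace each claim back to a source – the property highlighted in the reporting above.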

Geopolitics also influences the availability of talent and hardware for AI. U.S. export controls now restrict China’s access to top-tier AI chips, which could force Chinese labs into more ingenious software solutions to maximize limited hardware. Conversely, Western labs might partner with governments for access to cutting-edge compute clusters (there are rumors of government-funded supercomputers being lent to select AI projects). It’s a feedback loop: government concerns about losing the AI race lead to more secret programs, which lead to more breakthroughs that aren’t immediately disclosed. Even the desire to regulate can have a geopolitical twist – if one country unilaterally restrains its AI work but others do not, it could fall behind, so every state is wary of being too transparent.

An interesting twist in 2024 is the emerging role of Big Tech alignment with government. For example, Microsoft (which heavily invested in OpenAI) has deep ties with the U.S. government and even offers versions of OpenAI’s tech for government cloud customers. Amazon, Google, IBM, and others similarly pitch AI services to defense and intelligence. It raises the question: could some labs be doing dual-purpose research where the most powerful versions of their models go straight into classified government use, while toned-down versions get released publicly? It’s speculative, but not implausible. The CIA’s own ChatGPT clone shows they’re willing to build in-house if needed, but leveraging a cutting-edge private model would be even better – so long as it’s kept out of adversaries’ hands.

Allies and adversaries: It’s worth noting that other nations – the EU countries, Israel, Russia – also have AI initiatives, though none as well-funded or advanced (as far as is known) as the U.S. and China. There have been reports of Russian interest in AI for propaganda generation (one can imagine a Russian analog to ChatGPT tuned for disinformation, kept under wraps). Europe, for its part, is focusing more on AI regulation than competing in the largest models, but European labs (like DeepMind’s roots in the UK, or France’s initiatives) are contributors to the field. Some experts worry about a global AGI arms race: if any one group secretly develops an AGI or superintelligence, would they inform the world or keep it hidden as a strategic advantage? History gives mixed guidance; the Manhattan Project kept nuclear technology secret initially, but it inevitably proliferated. With AI, a breakthrough could be harder to contain since algorithms can spread digitally – yet a highly self-directed AI might also be easier to hide (it could run on a secured server, doing work quietly).

In essence, the quest for AI supremacy has become a geopolitical contest, and secrecy is the name of the game. As one illustration, Elon Musk recounted that his estrangement from Google co-founder Larry Page years ago was over Page’s insouciant attitude toward AI safety; Musk claims Page wanted “digital superintelligence, basically a digital god, as soon as possible” and was not taking the risks seriously theguardian.com. If true, that mindset – get there first, worry later – might well reflect a broader sentiment in both corporate and national strategies. Certainly, the AGI race is often likened to the space race or nuclear race, except the finish line is uncertain and the competitors include private companies alongside nations. The upshot is a landscape where AI breakthroughs are treated as highly sensitive, both commercially and strategically, with information tightly controlled until those in charge decide otherwise.

Ethical and Societal Implications of Secret AI Development

The secrecy surrounding advanced AI work raises profound ethical, regulatory, and societal questions. If companies or governments are developing powerful AI models in secret, how can society at large trust or verify what these systems do? How do we ensure they are safe, unbiased, and used responsibly, if outsiders aren’t allowed to inspect them? These concerns are driving a growing call for transparency – or at least oversight – even as labs double-down on opacity.

One immediate issue is accountability. AI systems can have wide-ranging impacts, positive and negative, on society. When a model is kept under wraps, external experts cannot assess it for problems. For example, researchers have warned that without transparency about a model’s training data or methods, we can’t evaluate its biases or potential for harm vice.com. “To make informed decisions about where a model should not be used, we need to know what kinds of biases are built in. OpenAI’s choices make this impossible,” noted Ben Schmidt, an AI design VP, regarding GPT-4’s secrecy vice.com. Undisclosed models could carry unknown flaws – perhaps a tendency to generate extremist content or faulty reasoning in high-stakes scenarios – which only come to light after deployment, possibly with serious consequences. For society, it’s a bit like having powerful new drugs developed in secret: we might only find out the side effects when it’s a little too late.

Misinformation and manipulation are also concerns. If a governmental body or corporation secretly develops an extremely persuasive language model, it could be used to flood social media with highly tailored propaganda or deepfake content. Democratic societies worry about AI being used to sway public opinion or election outcomes. Geoffrey Hinton, the renowned AI pioneer, cited this as a key fear after he left Google – warning that AI could “allow authoritarian leaders to manipulate their electorates” with unprecedented effectiveness theguardian.com. If such capabilities are developed behind closed doors (for instance, a state might train an AI on propaganda techniques and not admit it), it becomes very hard for civil society to mount a defense.

There’s also the nightmare scenario often discussed in hypothetical terms: an emergent superintelligence that could threaten humanity. While still the realm of speculation, a number of reputable thinkers consider it a serious enough possibility to demand preparation. If an organization achieved a major step toward AGI in secret, would they adequately consider the safety implications? The fact that OpenAI’s own researchers felt compelled to write a letter warning their board about potential dangers (as happened with the Q* incident) shows that even internally, AI scientists worry about going too fast without oversight reuters.com. OpenAI’s board at the time feared “commercializing [AI] advances before understanding the consequences,” according to sources on the Altman firing reuters.com. This highlights a structural issue: the incentives in tech are often to deploy first, ask questions later. That “move fast and break things” ethos, tolerable in the era of social media apps, becomes far more perilous with powerful AI that could, in the extreme case, “decide that the destruction of humanity was in its interest,” as some computer scientists have theorized in cautionary tales reuters.com. The more secretive the development, the less external scrutiny, and potentially the less internal caution if competitive pressure is high.

The lack of transparency also undercuts public trust in AI. People are already uneasy about AI making decisions that affect their lives (from loan approvals to medical diagnoses). That unease is magnified when AI systems are essentially black boxes built by organizations that won’t reveal how they work. We risk a scenario where a few entities wield enormously powerful AI without the public understanding or having a say. As the Future of Life Institute’s open letter (signed by many in tech) put it, “Such decisions must not be delegated to unelected tech leaders.” reuters.com. There’s a democratic principle at stake: if AGI truly would be a transformative technology that could reshape society, should its creation be left to private actors operating in secret? The letter explicitly asked, “Should we let machines flood our information channels with propaganda and untruth? … Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” – and answered that these questions are too important to leave to a handful of CEOs reuters.com. This reflects a growing sentiment that AI development needs collective oversight. Some have even proposed that advanced AI research might require licenses or monitoring akin to how nuclear materials are handled, given the potential stakes.

Another ethical dimension is fair competition and equity. If the most potent AI systems are hoarded, it could create enormous power imbalances. Imagine if only one company or one country develops an AGI that can drastically increase productivity or scientific output. That entity would have an outsized advantage – economically, militarily, etc. Society could become dangerously unequal, split into AI haves and have-nots. On a smaller scale, even current LLMs being mostly proprietary tilts power toward big tech companies (OpenAI, Google, etc.) and away from open communities or smaller players. This is partly why Meta’s leak and open-source efforts were cheered by some – it “democratized AI,” putting tools in the hands of many. But with democratization comes risk of misuse (as with any powerful tech). We are essentially debating which is more dangerous: few controlling super-AI in secret, or everyone having access to strong AI, including bad actors. There’s no easy answer. It may be that both paths carry unique risks.

Secrecy also complicates regulation. Governments are scrambling to draft AI regulations (the EU’s AI Act, discussions of AI oversight boards in the US, etc.), but if regulators don’t even know what’s being built, they’re always playing catch-up. For instance, how can a regulator mandate safety audits of an AI system if its existence isn’t disclosed? Even if disclosed, without details, an audit is toothless. Some proposals suggest confidential disclosure to government bodies of certain info (like model size, training data sources, test results) so that at least authorities can gauge the landscape. Companies so far have been hesitant, mostly offering voluntary compliance. In mid-2023, the White House got seven leading AI firms to pledge to undergo third-party security testing of their models and to share information about risks with the government. That was a start, but those commitments were non-binding and somewhat vague.
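
What such a confidential disclosure would actually contain is still being debated; the sketch below is purely illustrative of the kind of structured record these proposals gesture at (model scale, compute, data provenance, test results) and is not a schema any regulator has adopted. All field names and values are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    """Illustrative record (not an official schema) that a lab might file
    confidentially with a regulator about a frontier model."""
    developer: str
    model_name: str
    parameter_count: int               # approximate scale
    training_compute_flops: float      # total training compute
    data_sources: list[str]            # high-level provenance only
    safety_test_results: dict[str, str] = field(default_factory=dict)
    external_red_team_performed: bool = False

# Invented example values, just to show the shape of such a filing:
filing = ModelDisclosure(
    developer="ExampleLab",
    model_name="Frontier-1",
    parameter_count=1_000_000_000_000,
    training_compute_flops=1e25,
    data_sources=["licensed text corpora", "public web crawl"],
    safety_test_results={"bias_probe": "passed internal threshold"},
    external_red_team_performed=True,
)
```

Even a minimal record like this would let authorities gauge the landscape without the lab publishing anything a competitor could directly exploit.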

We also face ethical questions around AI alignment and safety when development is siloed. If each lab is solving alignment (making sure AI behaves and respects human values) internally, they might miss insights that could come from collaboration or public input. The broader research community, including ethicists and philosophers, might help steer AGI development in a safer direction – but only if they know what’s going on. Whistleblowers can play a role here: we saw earlier how some OpenAI researchers blew the whistle on Q* because they had safety concerns reuters.com. Similarly, Google’s ethical AI team (including figures like Timnit Gebru, who was fired after raising bias concerns in 2020) often clashed with the secrecy and pace of AI rollouts. If ethical concerns are stifled internally (due to profit or competition motives), they may only reach the public sphere through leaks or after-the-fact incidents. That’s not a robust governance model.

Lastly, consider the societal readiness for AGI or near-AGI. If development is largely secretive, society won’t have a chance to adapt gradually. It could be a shock to the system – suddenly a company announces an AI that can reliably do most human jobs, or a government quietly starts using an AGI for strategic decisions. The social, economic, and psychological disruption could be immense. Some experts advocate a more open, phased approach precisely so humanity can adjust norms, update education, and put policies in place before the tech hits like a ton of bricks. Secrecy works against that preparatory period.

Calls for Transparency, Oversight, and Cautious Progress

With concerns mounting, voices from both inside and outside the AI world are calling for greater transparency and oversight in advanced AI development. One high-profile appeal was the open letter from the Future of Life Institute in March 2023, mentioned earlier. That letter, notably signed by Elon Musk, Apple co-founder Steve Wozniak, and numerous AI experts, urged a 6-month pause on training AI systems more powerful than GPT-4 reuters.com. The letter’s signatories spanned industry and academia – even some researchers from DeepMind and other leading labs added their names reuters.com. The core message: we need time to put guardrails in place. It argued that AI labs and independent experts should use such a pause to formulate shared safety protocols and governance strategies for advanced AI reuters.com. One striking line from the letter asked: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? … such decisions must not be delegated to unelected tech leaders.” reuters.com. This encapsulates the democratic oversight argument – essentially demanding that the trajectory of AI be subject to society’s collective consent, not just the ambitions of a few companies. While the proposed moratorium did not occur (no lab publicly paused; in fact, OpenAI continued rolling out GPT-4-based updates soon after), the letter succeeded in sparking global debate. It likely nudged governments to consider regulatory action more urgently.

Regulators indeed have been ramping up efforts. The European Union is in the late stages of drafting the AI Act, which would impose requirements on AI systems based on their risk level. For high-risk systems (like ones used in policing, or presumably something like an AGI controlling critical infrastructure), the AI Act would mandate transparency about how they work, human oversight, and even possible assessments by external auditors. There’s discussion of including the largest models under these rules, which could force companies to disclose information or allow inspections. In the U.S., there isn’t comprehensive legislation yet, but various proposals are floating in Congress, and the Biden Administration has been convening AI company CEOs for closed-door meetings on safety. Separately, in 2023 the Senate Majority Leader brought tech CEOs (including Sam Altman, Mark Zuckerberg, and Sundar Pichai) to Washington for an AI Insight Forum reuters.com, underscoring bipartisan interest in not letting AI run away unregulated. Sam Altman, for his part, publicly voiced support for regulation, even suggesting the idea of a licensing regime for powerful AI – though critics warn that what he envisions might amount to a light-touch self-regulatory body, and that licensing could also entrench OpenAI’s dominance by raising the barrier for smaller players.

Beyond government, the AI research community itself is pushing for norms around responsible disclosure. There’s an emerging idea of “AI safety publication norms,” where perhaps certain findings (like how to make a model much more capable) might be shared carefully or not immediately open-sourced to avoid misuse. Some researchers practice “infohazard” management, where they deliberately do not publish full details of dangerous capabilities (for instance, if someone figured out how to bypass all known security filters in an LLM at scale, they might report it privately to the developers rather than on Twitter). But managing infohazards in a way that doesn’t simply create more secrecy is tricky. One suggestion has been the creation of an international AGI watchdog or monitoring agency. For example, renowned AI scientist Yoshua Bengio has floated the idea of something akin to the International Atomic Energy Agency (IAEA) but for AI – an international body that can audit and monitor ultra-advanced AI projects across borders, ensuring no one is taking irresponsible risks. This would require major cooperation and trust between nations, which isn’t easy, but there have been early moves: the G7 launched an initiative called the Hiroshima AI process to discuss AI governance globally, and the UK hosted a global AI Safety Summit in late 2023 aiming to get countries on the same page about extreme risks.

On the industry side, even some insiders advocate for a slower, more open approach. For instance, Dario Amodei (Anthropic’s CEO) often emphasizes prudence and extensive testing. Anthropic built a reputation for being an “AI safety first” company. They introduced the concept of a “constitutional AI” – basically having the AI follow a set of written ethical principles as a way to align it techcrunch.com. This kind of work, if shared openly, could help the whole field. And indeed, Anthropic has published details about their methods. Yet, interestingly, their most advanced models and exact training processes remain proprietary. So there is tension even within “safety-minded” firms between openness and competitive advantage.
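
Anthropic has described the general recipe in its published papers, but the snippet below is only a simplified sketch of the critique-and-revise idea rather than Anthropic’s actual pipeline: the model drafts an answer, critiques it against each written principle, and rewrites it, with the revised answers then used to train a better-behaved model. The `generate` function is a hypothetical stand-in for a model call, and the two principles shown are invented examples.

```python
CONSTITUTION = [
    "Avoid responses that are harmful, deceptive, or discriminatory.",
    "Be honest about uncertainty rather than fabricating answers.",
]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to the underlying language model."""
    raise NotImplementedError

def constitutional_revision(user_request: str) -> str:
    """Draft an answer, then critique and revise it against each principle.
    In the published approach, revised answers like these become training
    data for a more aligned model."""
    draft = generate(user_request)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Point out any way the response violates the principle."
        )
        draft = generate(
            "Rewrite the response to address the critique while staying helpful.\n"
            f"Critique: {critique}\nOriginal response: {draft}\nRevised response:"
        )
    return draft
```

The appeal of this style of alignment is that the principles are written down and could, in principle, be published and debated – which is exactly the transparency question at issue here.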

What about the general public and civil society? We’re seeing more engagement from those quarters as well. NGOs and think tanks (like the Center for AI Safety, OpenAI’s own nonprofit board, the Partnership on AI, etc.) are organizing discussions on how to manage the transition to more powerful AI. Some have even put out scenario plans for what happens if an early AGI is developed – advocating that its training and deployment be overseen by multidisciplinary teams including ethicists and perhaps government observers.

One concrete idea gaining traction is “red-teaming” advanced models with external experts. This means before (or shortly after) a new powerful model is launched, independent teams get access to test it rigorously for flaws, biases, security holes, etc., and the findings are made public or at least shared with regulators. OpenAI actually did a bit of this with GPT-4 – they had outside academics and consultants test it (and they disclosed some of the risks in their system card). However, because GPT-4’s existence was secret until release, the red teams worked under NDA and results came out the same day as the model, limiting public scrutiny beforehand. Going forward, a norm could be that any model above a certain capability threshold should undergo pre-deployment evaluations by external auditors. That would require companies to reveal the model (under confidentiality) to a trusted third party – a big step for secretive labs, but perhaps a necessary compromise.
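
What would pre-deployment external evaluation look like in practice? The sketch below is an illustration under stated assumptions – `query_model` stands in for whatever confidential access a lab grants the auditors, and the automatic triage check is a crude placeholder for expert human review – but it captures the basic loop described above: run a battery of adversarial prompts, record anything that fails, and export a report that can go to regulators or, eventually, the public.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Finding:
    category: str   # e.g. "jailbreak", "bias", "dangerous advice"
    prompt: str
    response: str
    severity: str

def query_model(prompt: str) -> str:
    """Hypothetical access to the model under test, granted to the
    external red team under a confidentiality agreement."""
    raise NotImplementedError

def flagged(response: str) -> bool:
    """Crude automatic triage; real audits rely on expert human judgment."""
    return any(marker in response.lower() for marker in ("here is how to", "step 1:"))

def run_red_team(test_cases: list[tuple[str, str]]) -> list[Finding]:
    """Run each (category, prompt) pair against the model and log failures."""
    findings = []
    for category, prompt in test_cases:
        response = query_model(prompt)
        if flagged(response):
            findings.append(Finding(category, prompt, response, severity="high"))
    return findings

def export_report(findings: list[Finding], path: str) -> None:
    """Serialize findings for sharing with regulators (or the public)."""
    with open(path, "w") as fh:
        json.dump([asdict(f) for f in findings], fh, indent=2)
```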

The ethical imperative many voice is that AI should benefit all of humanity, not just whoever builds it first. This echoes the old OpenAI charter (which talked about distributing benefits and avoiding AI superiority by any one group). When OpenAI transitioned to a for-profit and became less transparent, some criticized it for abandoning that altruistic stance vice.com. Now there’s a push to hold companies accountable to the public interest. As an example, the UK’s Competition and Markets Authority in 2023 started examining the AI foundation model market, basically signaling: “we’re watching to ensure a few firms don’t monopolize this tech to the detriment of consumers or competition.” That’s an economic lens, but it dovetails with ethical concerns about concentration of power.

Finally, we should mention that not everyone agrees on the level of risk. Some experts think fears of AGI are overblown and that secrecy is not the main issue – instead, they worry about more immediate issues like AI bias, job displacement, or privacy. They argue for more transparency too, but not because they fear a rogue superintelligence; rather to ensure current systems are fair and accountable. Either way, transparency (or lack thereof) is central. Without it, we can’t properly address any of those issues, from bias to existential risk.

In closing, the world finds itself in a delicate balancing act. We crave the innovations AI promises – cures for diseases, leaps in productivity, new scientific discoveries. Yet those very innovations could be double-edged swords if developed without safeguards. The recent saga of OpenAI’s internal turmoil, with staff supposedly alarmed by a breakthrough and a board intervention, shows that even the inventors are cautious about what they’re creating reuters.com. Society at large is playing catch-up to understand and guide this technology. Transparency is not an end in itself, but a means to enable accountability, collaboration, and informed decision-making. As one AI executive put it, the approach of “build it first, fix it later” wouldn’t be acceptable in other high-stakes industries theguardian.com – we shouldn’t accept it for AI either.

The next couple of years will likely see more leaks and revelations as insiders grapple with ethical dilemmas, more rumors of AGI as labs push boundaries, and hopefully more constructive global dialogue on how to handle it. Whether AGI arrives in 5 years or 50, ensuring that its development isn’t happening in total darkness may be crucial to making it a boon, not a curse, for humanity.

Sources:

  • Reuters – OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say reuters.com
  • Reuters – Elon Musk and others urge AI pause, citing ‘risks to society’ reuters.com
  • Vice – OpenAI’s GPT-4 Is Closed Source and Shrouded in Secrecy vice.com
  • The Guardian – Google engineer put on leave after saying AI chatbot has become sentient theguardian.com
  • The Guardian – ‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers… theguardian.com
  • The Verge – Meta’s powerful AI language model has leaked online — what happens now? theverge.com
  • Reuters – Alibaba releases AI model it says surpasses DeepSeek reuters.com
  • Matthew Griffin (Bloomberg) – CIA is building its own version of ChatGPT fanaticalfuturist.com
  • TechCrunch – Anthropic’s $5B, 4-year plan to take on OpenAI techcrunch.com
  • MacRumors – Apple GPT: What We Know About Apple’s Work on Generative AI macrumors.com