Dr Waku and Connor Leahy Discuss the Dangers of Unregulated AI Development

I prompted DALL·E 3 to create a picture of a YouTube discussion between Dr Waku and Connor Leahy, focused on the dangers of open-sourcing AI development.

AGI and the Question of Open vs. Closed Source

Recently, I came across a discussion on Dr Waku’s YouTube channel featuring Connor Leahy, the CEO of Conjecture. The conversation examined the risks of artificial general intelligence (AGI), the ideological forces driving its development, and the role of transparency in mitigating dangers. One point struck me in particular: Leahy’s firm critique of open-sourcing powerful AI systems.

Open-source AI has long been celebrated as a cornerstone of transparency and collaboration. By sharing code and ideas, we’ve seen innovation flourish, from small startups building on GPT models to community-driven breakthroughs like Stable Diffusion. But as we edge closer to developing AGI, a new question emerges: Is open-sourcing such powerful tools a good idea?

On one side, advocates argue that open access prevents monopolistic control and promotes ethical oversight. On the other, critics like Connor Leahy warn of catastrophic risks: weaponization, loss of control, and the potential for irreversible harm.

Open Source, Transparency and Collaboration

For decades, the open-source ethos has driven some of the most remarkable advancements in technology. Think of Linux, TensorFlow, or PyTorch—platforms that transformed industries by making cutting-edge tools available to anyone. Supporters of open-source AI believe this tradition must continue, even as we approach AGI.

Open-sourcing AI models enables independent researchers and watchdogs to scrutinize the code, uncovering biases, flaws, or safety issues that might go unnoticed in proprietary systems. When the stakes are as high as AGI, transparency isn’t just idealistic—it’s essential.

Without open access, the power to develop and deploy advanced AI remains concentrated in the hands of a few corporations and governments. Open-source models allow smaller players—startups, researchers, or even hobbyists—to contribute meaningfully to AI’s growth.

Connor Leahy’s Perspective: The Dangers of Open-Sourcing Powerful AI

While the benefits of open-source AI are clear, Connor Leahy argues that at the AGI level the risks far outweigh them.

Leahy points to the rapid proliferation of deepfake and voice-cloning tools as cautionary tales. When anyone can access these technologies, they can be used for harm—misinformation campaigns, identity theft, and worse. With AGI, the stakes grow exponentially.

AGI systems, by their nature, are incredibly complex. Neural networks operate in ways that even their creators don’t fully understand. If open-sourced, these systems could evolve or be repurposed in unpredictable ways. Once released, they cannot be recalled.

Leahy compares AGI to nuclear weapons: while no one advocates for making nuclear blueprints public, the same caution should apply to powerful AI systems. Transparency cannot come at the cost of existential risk.

Meta’s Open-Source Advocacy: Altruism or Self-Interest?

Meta (formerly Facebook) has been one of the loudest proponents of open-sourcing AI. At face value, this aligns with the values of transparency and democratization. But critics argue that Meta’s motivations may not be entirely altruistic.

By releasing open-source models, companies like Meta attract contributions from the global developer community. These improvements often feed directly back into proprietary products, giving such corporations the best of both worlds: community-driven innovation and proprietary control.

Open-source development can also lower R&D costs, since independent researchers and developers do much of the heavy lifting.

Conclusion

The open-source vs. safety debate isn’t just an academic exercise—it’s about shaping the future of humanity. Decisions made today about how we develop and share AI will determine whether these systems serve the common good or become tools of destruction.

Leahy’s perspective is a sobering reminder that technology isn’t inherently good or bad; it’s how we use and control it that matters. His challenge to the open-source ethos might feel counterintuitive, but it’s rooted in a fundamental concern: can we afford to take the risk?

Further reading

The Compendium by Connor Leahy at thecompendium.ai

(This post was written with the help of ChatGPT-4o)