[Image: An AI-generated illustration in the Ligne Claire style of a bald man sitting in front of a wall filled with encyclopedias. He resembles Carl Brown from the YouTube channel Internet of Bugs.]

AGI Reality Check: DeepSeek and the Future of AI

Introduction

Are we on the verge of an AI revolution, with robots ready to take over our jobs? Or is the current AI boom, fueled by promises of Artificial General Intelligence (AGI), heading for a spectacular bust? The hype is undeniable, but amidst the breathless pronouncements, it’s crucial to maintain a healthy dose of skepticism. We need an AGI reality check.

Enter Carl Brown, the veteran software developer behind the YouTube channel “Internet of Bugs.” With over 35 years of experience in the tech industry, Brown has seen technological waves come and go. He’s not easily swayed by the latest buzzwords. In a recent video, Brown tackles the big questions surrounding AGI, drawing on insights from several key sources: the Computerphile video “DeepSeek is a Game Changer for AI,” which explains DeepSeek’s innovative architecture; a “Nobody Special Finance” video exploring potential controversies surrounding GPU access in China; and the MIT Technology Review article “How big science failed to unlock the mysteries of the human brain,” which analyzes the challenges of large-scale brain research.

This post will explore Brown’s key arguments, using these sources to provide a much-needed reality check on the current state of AI and what it means for the future – especially for those of us working in the tech world.

Writing this blog post was quite tricky, since the cooperation with the AI (Gemini 2.0 Advanced) was not optimal. Read more in the companion post.

The Trillion-Dollar Question: Is AGI Imminent?

The dream of Artificial General Intelligence (AGI) – AI that can perform any intellectual task a human can – has captivated scientists and science fiction fans for decades. Companies like OpenAI, Anthropic, and Google are pouring billions into research, promising a future where AI solves our biggest problems and transforms every aspect of our lives.

But Carl Brown, on his channel “Internet of Bugs,” throws a bucket of cold water on this fiery optimism. With the seasoned perspective of someone who’s witnessed numerous tech hype cycles, Brown argues that we’re far from truly understanding how the human brain works. And if we don’t understand the brain, how can we possibly replicate its capabilities in software? His 35+ years of experience give him a grounded base from which to critique the current exuberance.

Many in the tech world are pushing for rapid AGI development, driven by vast sums of investment money. But Brown offers a much-needed AGI reality check, urging caution and a more realistic assessment of the challenges ahead.

Carl Brown’s Three Scenarios for the Future of AI

To understand the potential futures of AI, Brown outlines three distinct scenarios, ranging from the wildly optimistic to the deeply concerning. These scenarios help frame the risks and uncertainties surrounding the current push for AGI.

Scenario 1: AGI Achieved (and the Uncertain Aftermath)

This is the scenario often depicted in science fiction: AI reaches human-level intelligence (and possibly surpasses it), leading to a radical transformation of society. Jobs are automated, new industries emerge, and humanity either enters a golden age of leisure or faces an existential threat from its own creation. While this scenario captures the imagination, Brown views it as the least likely, at least in the near term. The sheer complexity of the human brain, he argues, makes achieving true AGI a monumental – and perhaps insurmountable – challenge for the foreseeable future.

Scenario 2: The AI Bubble Bursts

This scenario is far more grounded in historical precedent. Brown draws parallels to the dot-com bubble of the late 1990s, where massive investment in internet companies eventually led to a painful market crash. He suggests that the current AI boom, fueled by unrealistic expectations about AGI, could be heading for a similar fate. If investors lose confidence and pull their funding, the AI industry could face a significant downturn, potentially triggering a wider economic recession. This scenario suggests a potential AI bubble burst is a real concern, and one that warrants serious consideration.

Scenario 3: The “Fake AGI” Nightmare

This is the scenario Brown considers the most troubling, and perhaps the most likely. In this future, companies, driven by the pressure to deliver on their promises, claim to have achieved AGI, even if their AI systems are far from truly intelligent. These “fake AGIs” are then deployed in various sectors, replacing human workers and automating critical tasks. The result? Widespread errors, unforeseen consequences, and potentially catastrophic failures. Imagine AI systems making crucial decisions in healthcare, finance, or infrastructure, but without the judgment, adaptability, and common sense of human beings. This scenario paints a picture of economic disruption, job losses, and a general erosion of trust in technology.

DeepSeek: A Cost-Cutting Innovation, Not an AGI Breakthrough

Amidst the race for AGI, a Chinese company called DeepSeek made headlines with a seemingly impressive feat: creating a powerful AI model at a fraction of the cost of its competitors. This sparked excitement, but also confusion. Was DeepSeek a major step towards AGI? Carl Brown argues emphatically: no. As we explored in a previous post on Foodcourtification about DeepSeek’s culture of innovation, the company, led by Liang Wenfeng, is firmly focused on practical applications and efficiency rather than the pursuit of general intelligence.

What is a “Mixture of Experts”?

DeepSeek’s efficiency stems from a clever architectural approach called “Mixture of Experts” (MoE). Instead of relying on one gigantic, all-encompassing neural network (like many other AI models), DeepSeek uses a collection of smaller, specialized networks, called “experts.” Think of it like a team of human specialists: you wouldn’t ask a cardiologist to fix your plumbing; you’d go to a plumber. Similarly, DeepSeek directs different parts of a task to the most relevant expert within its network. For a visual explanation of MoE, see this clip from Computerphile (starting at 0:45): [link to Computerphile video with timestamp]
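
To make the idea concrete, here is a minimal, hypothetical sketch in Python/NumPy. It is not DeepSeek’s actual code: the sizes, the `run_expert` helper, and the linear router are illustrative assumptions. But the shape of the idea is the same: several small expert networks, plus a router that picks among them.

```python
import numpy as np

# Toy Mixture-of-Experts sketch (illustrative, not DeepSeek's architecture).
rng = np.random.default_rng(0)
D, H, N_EXPERTS = 16, 32, 4  # input size, hidden size, number of experts

# Each "expert" is a tiny two-layer network with its own weights.
experts = [
    (rng.standard_normal((D, H)), rng.standard_normal((H, D)))
    for _ in range(N_EXPERTS)
]
router_w = rng.standard_normal((D, N_EXPERTS))  # router scores each expert

def run_expert(x, expert):
    w1, w2 = expert
    return np.maximum(x @ w1, 0.0) @ w2  # a small ReLU MLP

def moe_forward(x):
    scores = x @ router_w          # how relevant is each expert to this input?
    best = int(np.argmax(scores))  # send the input to the top expert only
    return run_expert(x, experts[best]), best

x = rng.standard_normal(D)
y, chosen = moe_forward(x)
print(f"input routed to expert {chosen}")
```

The router here picks a single expert per input; real MoE models typically activate a few experts and blend their outputs, which is where sparsity comes in.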

The Power of Sparsity

The key to MoE’s efficiency is sparsity. Only a small number of experts are activated for any given input. This is far more efficient than using the entire network every time, which is what “dense” models like earlier versions of GPT do. This drastically reduces the computational power required, leading to significant cost savings. Dr. Pound further explains the concept of sparsity in this section of the video (starting at 1:51): [link to Computerphile video with timestamp]
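
Continuing the toy sketch above, here is a hedged illustration of sparse top-k gating: only the `k` best-scoring experts run at all, and their outputs are blended with softmax weights. The helper names carry over from the previous snippet and remain illustrative assumptions.

```python
def sparse_moe_forward(x, k=2):
    scores = x @ router_w
    topk = np.argsort(scores)[-k:]   # indices of the k best-scoring experts
    weights = np.exp(scores[topk])
    weights /= weights.sum()         # softmax over just the chosen experts
    # Only k of N_EXPERTS experts run at all: with k=2 of 4, expert compute
    # is roughly halved here; at real model scale the savings are far larger.
    return sum(w * run_expert(x, experts[i]) for w, i in zip(weights, topk))
```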

Token-Level Routing: A Deeper Dive

DeepSeek takes this efficiency a step further with “token-level routing.” This means that even within a single sentence or paragraph, different parts (tokens) can be sent to different experts. This allows for incredibly fine-grained specialization and even greater efficiency. You can see how token-level routing works in more detail here (starting at 3:28): [link to Computerphile video with timestamp]
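
Building once more on the same toy sketch (again an assumption-laden illustration, not DeepSeek’s implementation), token-level routing simply means the routing decision happens once per token rather than once per input:

```python
def route_tokens(tokens):
    # Each token gets its own routing decision, so a single sentence
    # can fan out across several specialists.
    outputs, assignments = [], []
    for t in tokens:                   # tokens: array of shape (seq_len, D)
        y, expert_id = moe_forward(t)  # independent per-token choice
        outputs.append(y)
        assignments.append(expert_id)
    return np.stack(outputs), assignments

sentence = rng.standard_normal((5, D))  # a toy 5-token "sentence"
_, per_token = route_tokens(sentence)
print("expert chosen per token:", per_token)
```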

DeepSeek vs. The “Bigger is Better” Approach

This approach, which prioritizes practical solutions and rapid iteration, contrasts sharply with the “bigger is better” philosophy that has dominated much of the AI industry, particularly in the pursuit of AGI. Companies like OpenAI have focused on building ever-larger, more complex models, consuming vast amounts of data and energy. DeepSeek, on the other hand, demonstrates that efficiency and smart architecture can be just as important as sheer size, excelling in areas like coding and mathematics. This focus on real-world applicability, combined with an open-source philosophy, represents a different path compared to the often-closed development practices of some major AI players.

The GPU Question: A Controversy Surrounding DeepSeek

DeepSeek’s achievements have also sparked controversy. While the company claims impressive cost efficiency, some, like Jack from the “Nobody Special Finance” YouTube channel, have raised questions about whether Liang Wenfeng had access to more advanced NVIDIA GPUs than officially acknowledged, potentially circumventing US export controls. Jack presents his evidence in this video (starting at 8:47): [link to Nobody Special Finance video with timestamp]. This casts doubt on the true cost of development and the fairness of the AI race, and adds an interesting geopolitical dimension.

This uncertainty aligns with Carl Brown’s cautious approach, acknowledging that the exact cost savings might be debated, even if the underlying technological innovations are valid. It’s important to note that this doesn’t invalidate DeepSeek’s core innovations – MoE and sparsity are genuine advancements. However, it does highlight the complexities and potential for unfair advantages in the global AI landscape.

The Brain Barrier: Why AGI Remains Distant

Carl Brown’s skepticism about near-term AGI isn’t just based on gut feeling; it’s rooted in a fundamental challenge: our limited understanding of the human brain. He argues that if we can’t fully grasp the intricacies of our own cognitive processes, how can we expect to replicate them in a machine?

To illustrate this point, Brown points to massive, decade-long initiatives like the European Union’s Human Brain Project (HBP) and the US-based BRAIN Initiative. These projects, with significant funding and expertise, were launched with ambitious goals to map the brain’s complex structure and activity.

The MIT Technology Review Article: Evidence of Difficulty

An insightful article in MIT Technology Review, “How big science failed to unlock the mysteries of the human brain” by Emily Mullin, provides a critical look at these projects. Both were launched with grand ambitions – mapping the entire brain’s activity (BRAIN Initiative) and creating a full computer simulation of a human brain (HBP). However, as Mullin details, both faced significant scientific and logistical hurdles, and had to significantly adjust their goals and approaches. The HBP shifted from full-scale simulation to building computational tools for neuroscientists. The BRAIN Initiative focused more on developing technologies for studying the brain. These shifts came amidst criticisms from the scientific community about feasibility, cost, and the potential to overshadow other important research.

It’s important to note that Mullin’s article was published in 2021, before the recent explosion of interest and investment in AI driven by large language models. The article’s focus was on the challenges of neuroscience research itself, not on predicting the future of AI. However, the core insights about the complexity of the brain and the limitations of our current understanding remain highly relevant to the debate surrounding AGI.

While these projects did lead to some advancements – new tools for brain research, a 3D digital brain map from the HBP, and impressive large-scale neuron recordings – they ultimately fell short of their initial, transformative goals. This underscores the central point: understanding the human brain is an incredibly difficult undertaking.

The struggles of these well-funded, expertly staffed projects, even to make incremental progress, highlight the vast gulf between our current understanding of the brain and the capabilities needed for true artificial general intelligence. The MIT Technology Review article, though written before the current AI hype, serves as a powerful, real-world example supporting Brown’s cautious outlook on the near-term prospects of AGI. The difficulty in mapping the human brain is a significant factor that contributes to a needed AGI reality check.

What This Means for Software Developers (and You)

So, if AGI isn’t just around the corner, and the AI landscape is more complex than the headlines suggest, what does this mean for software developers? Carl Brown’s message is surprisingly reassuring: don’t panic.

He views the rise of LLMs and other AI tools not as a threat, but as another step in the ongoing evolution of software development. He draws a parallel to previous technological shifts, like the transition from assembly language to high-level languages like C, and then to languages with built-in memory management and extensive frameworks. Each of these advancements, he argues, initially seemed disruptive, but ultimately allowed developers to focus on higher-level problems and become more productive.

The key, Brown emphasizes, is to adapt and embrace change. Instead of fearing that AI will replace them, developers should focus on learning how to use these new tools to augment their own skills. This means:

  • Focusing on High-Level Skills: The ability to define problems, design solutions, architect complex systems, and verify that requirements are met will become even more valuable. These are the skills that require human judgment and creativity – the things LLMs currently lack.
  • Learning to Use LLMs as Tools: Developers should become proficient in using LLMs for tasks like code generation, documentation, and testing (see the sketch after this list). This will free up time and mental energy for more strategic work.
  • Understanding the Limitations of AI: It’s crucial to recognize what current AI can’t do – exercise critical judgment, understand context, handle unexpected situations, or think long-term. This understanding will help developers use AI responsibly and effectively.
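
As an illustration of the second point, here is a small, hypothetical sketch of using an LLM as a tool, in this case asking a model to draft unit tests via the OpenAI Python SDK. The model name, prompt, and `draft_unit_test` helper are assumptions for the example, not a recommendation of any particular vendor or workflow.

```python
# Hypothetical example: using an LLM to draft tests that a human then reviews.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_unit_test(function_source: str) -> str:
    """Ask the model for first-draft pytest tests; a human still reviews them."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute whatever model you use
        messages=[
            {"role": "system",
             "content": "You write concise pytest unit tests. Return only code."},
            {"role": "user",
             "content": f"Write pytest tests for this function:\n\n{function_source}"},
        ],
    )
    return response.choices[0].message.content

print(draft_unit_test("def add(a, b):\n    return a + b"))
```

The point is the division of labor: the model produces a first draft quickly, while deciding what to test and verifying the result stay with the developer.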

Brown acknowledges that some developers might choose to move into adjacent careers, and that’s perfectly valid. The rapid pace of change in the tech industry can be exhausting. But for those who love the core challenges of software development – solving problems with code – he believes that AI tools will ultimately be empowering, not replacing. The shift to an LLM-enhanced workflow might feel bumpy, but it’s a transition we can navigate successfully, just as we’ve navigated previous technological shifts.

Conclusion

Carl Brown, on his YouTube channel “Internet of Bugs”, offers a valuable counterpoint to the prevailing AI hype. He reminds us that while current AI technology, particularly in the form of Large Language Models, is impressive and rapidly evolving, it is not Artificial General Intelligence. The race for AGI is a high-stakes gamble, with the potential for significant economic disruption if the promised breakthroughs don’t materialize, or, even worse, if “fake AGI” is deployed anyway. DeepSeek represents a significant advance in efficiency, not a leap towards true AGI. The ongoing struggles to understand the human brain, as highlighted by the Human Brain Project and the BRAIN Initiative, further underscore the immense challenges that remain.

The current situation calls for a healthy dose of skepticism and a focus on realistic, achievable goals. Instead of chasing the elusive dream of AGI, we should concentrate on developing and deploying AI tools that augment human capabilities, not replace them. For software developers, this means embracing change, adapting to new workflows, and focusing on the higher-level skills that will continue to be in demand.

The future of AI is undoubtedly exciting, but it’s also uncertain. By approaching it with a critical eye, a grounded perspective, and a willingness to adapt, we can navigate this technological shift successfully and build a future where AI serves humanity, rather than the other way around. We need an AGI reality check.

