Prologue
The featured image of this blog post depicts a public debate in a democracy between the two sides of a future conflict described in the post: a) the wealthy, who can buy their labour from AGI, and b) the much poorer human workforce. The gap in capital and resources is illustrated in the following image: even in a democracy where both sides enjoy freedom of expression, the difference in resources largely dictates the power relation between them, symbolized here as a difference in computing power.

Imagine a world where machines can think, learn, and perform any task a human can. This isn’t a scene from a distant science fiction movie; it’s the rapidly approaching reality of Artificial General Intelligence (AGI). AGI promises to revolutionize our world, potentially solving our most complex problems and ushering in an era of unprecedented prosperity. But amidst the excitement, a crucial question hangs in the balance: will AGI be a force for good, lifting all of humanity, or will it exacerbate existing inequalities, concentrating power in the hands of a select few?
In a thought-provoking article titled “By default, capital will matter more than ever after AGI”(1), L. Rudolf L. argues that AGI will dramatically shift the balance of power towards those who control capital, making it even more influential than it is today. This perspective is further amplified in a YouTube video by Matthew Berman titled “AGI Fallout: Shocking Predictions About Society’s Future”(2), which delves into the potential consequences of such a shift. This blog post will analyze the arguments for and against Rudolf’s thesis, incorporating insights from Berman’s commentary and exploring the potential role of human agency in shaping an AGI-driven future.
The Looming Power of Capital in an AGI World
Rudolf’s argument rests on several key pillars, each highlighting how AGI could empower those who already possess significant financial resources. At the heart of the argument lies the fundamental shift from human labor to capital as the primary driver of production. AGI, by definition, can perform most, if not all, tasks currently performed by humans. This means that the economic value of human labor will diminish significantly, if not disappear entirely. As Rudolf states, “There’s less need to pay humans for their time to perform work, because you can replace that with capital (e.g., data centers running software replaces a human doing mental labor)”. This shift undermines the average person’s primary source of income and leverage in society.
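One way to make this shift concrete is with a stylized production model. This is my own illustration, not taken from Rudolf’s article, and it assumes for clarity that AGI makes compute a near-perfect substitute for human labor:

$$
Y = A\,(K_{\mathrm{AI}} + L), \qquad
w = \frac{\partial Y}{\partial L} = A = \frac{\partial Y}{\partial K_{\mathrm{AI}}} = r .
$$

Here $Y$ is output, $L$ human labor, $K_{\mathrm{AI}}$ AI capital (compute plus software), and $A$ their shared productivity. Because a unit of compute and a unit of labor are interchangeable in this toy model, the wage $w$ cannot rise above the rental rate $r$ of the compute that replaces a worker; as compute gets cheaper, the ceiling on human wages falls with it, while the gains from scaling up $K_{\mathrm{AI}}$ flow to its owners. The perfect-substitution assumption is chosen for simplicity; with imperfect substitution the effect is weaker but points in the same direction.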
Today, money struggles to buy top-tier talent. The best minds are often driven by factors beyond financial incentives, such as a passion for their work, a desire to make a difference, or the pursuit of intellectual challenges. However, AGI changes this equation: AI talent can be replicated at will and is likely to be cheaper than human talent, thanks to its scalability and efficiency. Furthermore, AI lacks the “sacred bond” to a specific discipline or mission that can at times hinder human productivity or cause a misalignment of goals between employer and employee. This means that those with capital can simply “buy” the best AI, regardless of its “preferences,” significantly increasing their ability to achieve desired outcomes.
Historically, states and other powerful institutions have had incentives to care about human welfare, at least to some extent. This is partly due to moral considerations, partly due to the need for a healthy and productive workforce, and partly due to the need for social stability. However, Rudolf argues that AGI severs this link. If states no longer need human labor for economic or military power, their incentives to prioritize human needs might diminish. This concern is echoed by Matthew Berman, who highlights the potential for a society where human needs are secondary to the goals of powerful, AGI-equipped entities.
A World Without Upward Mobility?
Rudolf paints a potentially bleak picture of a future where social mobility is stifled. In a world where capital can directly purchase superhuman AI labor, achieving “outlier success” – rising from humble beginnings to positions of power and influence through hard work and talent – becomes incredibly difficult, if not impossible.
Consider entrepreneurship, often lauded as the “technology of ambition”. Could human-led ventures even compete with AI-driven startups backed by vast sums of capital? Similarly, scientific breakthroughs and intellectual influence, traditionally avenues for upward mobility, could be dominated by well-funded AI initiatives. This could lead to a static society with a permanent ruling class, entrenched based on their pre-AGI capital holdings.
Matthew Berman’s Perspective: The Human Cost
Matthew Berman’s video commentary on Rudolf’s article adds another layer to the discussion, focusing on the potential psychological and societal ramifications of an AGI-dominated world. He largely agrees with Rudolf’s central argument, emphasizing the potential for AGI to create a static society where capital reigns supreme and human leverage is drastically reduced.
Berman expresses particular concern about the threat to entrepreneurship. He worries that AI-driven startups could replace human entrepreneurs, stifling innovation and limiting opportunities for those without significant resources. He also grapples with the paradox of Universal Basic Income (UBI). While acknowledging its potential to alleviate poverty in a world where human labor is less necessary, he questions its impact on human motivation.
Using the example of a highly successful entrepreneur who, after achieving immense wealth, found himself adrift and unsure of his purpose, Berman raises the fundamental question: what will give human life meaning in a world where traditional labor is largely obsolete? Will people still have the drive to achieve great things if there are no significant challenges left to overcome? He speculates that while AGI might lead to a world of near-infinite resources, human ambition might remain finite.
Referencing a comment by Dave Shapiro, Berman suggests that even in a post-AGI world, the human desire for social status might persist as a motivator. Shapiro also emphasizes the importance of making “interesting choices” and living a fulfilling life, regardless of external circumstances. This highlights the potential for humans to find new avenues for meaning and purpose.
A Different Future? Counterarguments and Mitigating Factors
While Rudolf and Berman paint a concerning picture, it’s crucial to consider counterarguments and potential mitigating factors that could lead to more positive outcomes. One optimistic view is that AGI could usher in a post-scarcity society, where aligned superintelligence and abundant resources eliminate material poverty. However, achieving true post-scarcity would be incredibly challenging, requiring not only technological breakthroughs but also careful planning to ensure equitable distribution of resources. Even with advanced AGI, limitations on energy, raw materials, and computation might persist. Furthermore, unequal access to the benefits of a post-scarcity society could still lead to significant disparities.
It’s also possible that democratic values, human empathy, and ethical considerations will persist even in an AGI-dominated world. Social movements, ethical frameworks, and human-centric laws could play a crucial role in shaping AGI development and deployment, ensuring that it aligns with human values and serves the common good.
Perhaps the assumption that human labor will become entirely irrelevant is too pessimistic. Humans might find new forms of leverage and contribution in an AGI world. Human-machine collaboration could be key, with AGI augmenting our abilities rather than replacing us. Human creativity, emotional intelligence, ethical judgment, and the ability to provide purpose and meaning could remain valuable, even if routine tasks are automated. We might see humans taking on roles such as curating and guiding AGI systems, ensuring they are trained on diverse and representative data, and setting their goals in a way that benefits humanity. Uniquely human traits like experience, intuition, and emotional intelligence could prove essential in areas requiring complex decision-making, ethical considerations, and interpersonal relationships.
Predicting the long-term consequences of a technology as transformative as AGI is inherently uncertain. History is replete with examples of unforeseen consequences, both positive and negative, stemming from major technological advancements. It’s possible that new opportunities, new forms of social organization, and emergent properties of AGI systems could lead to outcomes vastly different from those predicted.
The Unforeseen: Potential Unintended Consequences of AGI
Even with careful planning and the best of intentions, the development of AGI could lead to unforeseen negative outcomes. If we become overly dependent on AGI systems for decision-making, problem-solving, and even basic tasks, we risk losing crucial human skills and resilience. What happens if these systems fail or are compromised?
As AGI systems become increasingly complex, their decision-making processes might become opaque to humans. This “black box” problem raises concerns about transparency, accountability, and the potential for unintended biases or errors.
Finally, it’s important to acknowledge the potential for AGI to pose an existential threat to humanity, either through misalignment with human goals or through unforeseen actions. While not the primary focus of this discussion, acknowledging this risk underscores the importance of responsible development.
Shaping Our Future: The Role of Democracy, Societal Values, and Human Agency
The future of humanity in an AGI world is not predetermined. We have a crucial role to play in shaping its trajectory. Democratic institutions, ethical frameworks, and societal values will be critical in navigating the challenges and opportunities of AGI. Open and inclusive discussions about the kind of future we want to create are essential. We must leverage these discussions to establish norms, regulations, and safeguards that ensure AGI is developed and deployed responsibly, aligning with human values and promoting the common good.
We are not passive bystanders in this process. Individuals and organizations can take proactive steps to influence the development and deployment of AGI. This includes advocating for responsible AGI development, supporting research on AGI safety and alignment, and promoting education and public discourse about AGI’s implications. We should also explore new economic models that move beyond traditional capitalism, potentially incorporating mechanisms for wealth redistribution and ensuring a more equitable distribution of the benefits of AGI. Fostering a culture of lifelong learning and adaptation will be crucial to navigate the changing landscape. Furthermore, developing new forms of human-machine collaboration that leverage unique human strengths will be essential.
Conclusion: Defining Value in an AGI World

The development of AGI presents humanity with a profound challenge and an unprecedented opportunity. While the concerns raised by Rudolf and Berman about the potential for increased inequality and a concentration of power in the hands of capital holders are valid and deserve serious consideration, they are not inevitable.
The core question we must grapple with is this: what will we value in an AGI-driven world? If our definition of value remains narrowly focused on capital accumulation and economic efficiency, then the concerns raised are likely to materialize. However, if we can embrace a broader definition of value that encompasses human well-being, creativity, social connection, and the pursuit of meaning, then we can harness the power of AGI to create a more just, equitable, and fulfilling future for all.
Sources:
- Rudolf, L. “By default, capital will matter more than ever after AGI.” LessWrong, 28 Dec. 2024, https://www.lesswrong.com/posts/KFFaKu27FNugCHFmh/by-default-capital-will-matter-more-than-ever-after-agi
- Berman, Matthew. “AGI Fallout: Shocking Predictions About Society’s Future.” YouTube, https://www.youtube.com/watch?v=3JxUkIx7A-o
I follow Matthew Berman’s channel on YouTube. In one episode he discussed the fascinating article by L. Rudolf L. I decided to discuss the pros and cons of Rudolf’s and Berman’s arguments with Gemini 2 Advanced, and that conversation resulted in this blog post. We also crafted the prompts for the images, which were made by DALL·E 3 (featured image) and Imagen 3 (the images in the post).