Technology
Anthropic CEO Goes Full Techno-Optimist in 15,000-Word Paean to Artificial Intelligence
Anthropic CEO Dario Amodei wants you to know that he is not an AI “doomer.”
At least, that is my reading of the mic drop of a roughly 15,000-word essay Amodei posted to his blog late Friday night. (I tried asking Anthropic’s chatbot Claude whether he was OK, but unfortunately the post exceeded the free plan’s length limit.)
Amodei paints a picture of a world in which all the risks of artificial intelligence are mitigated and the technology delivers previously unrealized prosperity, social progress, and abundance. He says this is not meant to minimize AI’s shortcomings – Amodei initially takes aim, without naming names, at AI companies that oversell and generally hype their technology’s capabilities. But one could argue that the essay leans too far toward techno-utopia, making claims that are simply not supported by the facts.
Amodei believes that “powerful AI” will emerge as early as 2026. By “powerful AI” he means AI that is “smarter than a Nobel Prize winner” in fields such as biology and engineering, and that can perform tasks such as proving unsolved math theorems and writing “extremely good novels.” Amodei claims this AI will be able to control any software and hardware imaginable, including industrial machinery, and will essentially do most of the jobs humans do today – but better.
“(This artificial intelligence) can engage in any actions, communications, or remote operations… including taking actions on the internet, taking or giving directions to people, ordering materials, directing experiments, watching videos, making videos, and so on,” Amodei writes. “It has no physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory, it could even design robots or equipment for itself to use.”
A lot would have to happen to get to that point.
Even today’s best artificial intelligence cannot “think” in the way we understand it. Models don’t reason so much as replicate patterns they observe in their training data.
Assuming for the sake of Amodei’s argument that the AI industry does soon “solve” human-like thinking, will robotics catch up so that future AIs can conduct laboratory experiments, manufacture their own tools, and so on? The brittleness of today’s robots suggests it is unlikely.
But Amodei is an optimist – a very big one.
He believes that in the next 7-12 years, artificial intelligence could help treat virtually all infectious diseases, eliminate most cancers, cure genetic diseases, and stop Alzheimer’s disease in its early stages. Amodei also believes that in the next 5-10 years, conditions such as post-traumatic stress disorder (PTSD), depression, schizophrenia, and addiction will be curable with AI-developed drugs or genetically preventable through embryo screening (a controversial opinion) — and that there will also be AI-developed drugs that “adjust cognitive function and emotional state” to “trick (our brains) into behaving a little better and providing more satisfying daily experiences.”
If this happens, Amodei expects the average human lifespan to double to 150 years.
“My basic prediction is that AI-based biology and medicine will allow us to compress the progress that biologists would make over the next 50 to 100 years into 5 to 10 years,” he writes. “I’ll call it the ‘compressed 21st century’: the idea that once we develop powerful artificial intelligence, we will make as much progress in biology and medicine in a few years as we would have in the entire 21st century.”
This also seems far-fetched, given that artificial intelligence has not yet radically transformed medicine – and may not for a long time, if ever. Even if AI does reduce the labor and cost of getting a drug into preclinical testing, that drug can still fail at a later stage, just like drugs designed by humans. Consider, too, that the AI currently used in healthcare has proven to be biased and risky in many respects, or otherwise extremely difficult to implement in existing clinical and laboratory settings. To suggest that all of these problems and more will be solved within the next decade or so seems, well, aspirational.
But Amodei doesn’t stop there.
He claims that artificial intelligence could solve world hunger. It could reverse the tide of climate change. And it could transform the economies of most developing countries; Amodei believes AI could raise sub-Saharan Africa’s GDP per capita ($1,701 in 2022) to China’s GDP per capita ($12,720 in 2022) within 5-10 years.
These are bold statements, though probably familiar to anyone who has listened to adherents of the “Singularity” movement, which expects similar outcomes. Amodei acknowledges that such a development would require “a massive effort in terms of global health, philanthropy, (and) political support,” which he believes will happen because it is in the world’s best economic interest.
That would be a dramatic shift in human behavior, given that humans have shown time and again that their primary interest is in whatever benefits them in the short term. (Deforestation is just one example among thousands.) It is also worth noting that many of the workers responsible for labeling the datasets used to train AI are paid well below minimum wage, while their employers reap tens of millions – or hundreds of millions – in equity from the results.
Amodei briefly touches on the danger AI poses to civil society, proposing that a coalition of democracies secure the AI supply chain and block adversaries who intend to use AI for malicious purposes from accessing the powerful means of AI production (semiconductors, etc.). At the same time, he suggests that AI, in the right hands, could be used to “challenge repressive governments” and even reduce bias in the legal system. (Historically, artificial intelligence has amplified biases in the legal system.)
“A truly mature and successful implementation of AI can reduce bias and be fairer for all,” writes Amodei.
So if artificial intelligence takes over every conceivable task and does it better and faster, wouldn’t that put humans in a difficult position economically? Amodei admits that it would, and that at that point society would need to have conversations about “how the economy should be organized.”
But he offers no solution.
“People really want a sense of accomplishment and even a sense of competition, and in a post-AI world it will be entirely possible to spend years attempting a very difficult task with a complex strategy, similar to what people do today when starting research projects, trying to become Hollywood actors, or founding companies,” he writes. “The fact that (a) AI could in principle do this task better and (b) this task is no longer an economically rewarding element of the global economy does not seem to me to make much difference.”
In conclusion, Amodei puts forward the idea that artificial intelligence is simply a technological accelerator – that humans naturally move toward “the rule of law, democracy and Enlightenment values.” But in saying so, he ignores many of AI’s costs. Artificial intelligence is expected to have – and already has – a huge effect on the environment. And it is driving inequality. Nobel Prize-winning economist Joseph Stiglitz and others have warned that AI-driven workplace disruptions could further concentrate wealth in the hands of corporations and leave workers more powerless than ever.
Those corporations include Anthropic, though Amodei is reluctant to admit it. Anthropic is a business, after all – one reportedly worth nearly $40 billion. And those who profit from its AI technology are, by and large, corporations whose only responsibility is to increase profits for shareholders, not to better humanity.
A cynic might question the timing of the essay, given that Anthropic is reported to be in the process of raising billions of dollars in venture capital funding. OpenAI CEO Sam Altman published a similar techno-optimist manifesto shortly before OpenAI closed its $6.5 billion funding round. Perhaps it is a coincidence.
Then again, Amodei is not a philanthropist. Like any CEO, he has a product to pitch. It just so happens that his product will “save the world” – and those who think otherwise risk being left behind. At least, that is what he would have you believe.