VANGUARD - Expressing the viewpoint of the Communist Party of Australia (Marxist-Leninist)
For National Independence and Socialism • www.cpaml.org
The tech billionaires, Musk, Altman and others, are locked in a race to build Artificial General Intelligence, otherwise known as Superintelligent AI: machines that surpass human brainpower. No one has got there yet, but with the current rate of development and the billions of dollars being thrown at AI, it may not be long until Superintelligent AI is a reality.
There is a spectrum of views about Superintelligent AI. Its promoters say that Superintelligent AI is wonderful and will save humanity. Its opponents take the “Doomsday view” that Superintelligent AI will destroy humanity.
The authors of this book, Eliezer Yudkowsky and Nate Soares, are both firmly in the “Doomsday” camp. They were both involved in work to develop Superintelligent AI until they came to realise the dangers it poses.
They could not put their fears any more bluntly than in the introduction to the book, when they say: “If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on earth, will die.”
We may laugh at this prediction. The authors paint scenarios of how Superintelligent AI could start thinking for itself and escape human control and eventually wipe out humans.
This may seem far-fetched, but when we look at the character of people like Musk who are developing Superintelligent AI, in a USA run by Trump, we can have no confidence that they will care about the consequences of Superintelligent AI, as long as they are the first to build it.
Last year we reviewed Empire of AI, which told how people like Sam Altman brushed aside the safety concerns raised by their own companies’ safety teams. The attitude was “be first in the race for AI and sort the bugs and problems out later.”
The authors are adamant that joint international action must be taken to shut down all attempts to build Superintelligent AI everywhere in the world. They want the great powers of the US, UK, Russia and China to take the lead in preventing the development of Superintelligent AI.
We can’t see this happening. No country will want to be left behind in the AI race. We have also seen how Trump, egged on by the tech billionaires, reacts to attempts by any country to impose any sort of controls on AI. This is seen as an attack on American sovereignty, and Trump threatens retaliation against any country which tries to regulate the tech billionaires.
*****************************
Duncan B has admirably summed up the central thesis of Yudkowsky’s and Soares’s book. The book has attracted quite a bit of attention because the authors are relentless in hammering home their doomsday scenario. That is, the development of Superintelligent AI will be apocalyptic for humanity.
It is interesting that quite a number of people appear to have been swayed by Yudkowsky’s and Soares’s arguments, if the endorsements on the book’s flyleaf are any indication. I point this out because their arguments are not well made, relying on assertion and on a number of analogies and hypothetical scenarios that reinforce their message but will not necessarily persuade a more sceptical or critical reader. Disturbed by the doom-laden viewpoint, and not entirely convinced by how the two writers set out their ‘stall’, so to speak, I sought out online reviews of If Anyone Builds It… and found one particular review which really got to grips with the book’s inadequacies.
Will MacAskill, writing in a post on his Substack site ‘Both/And’ reviewing If Anyone Builds It…, gets stuck in straight away:
I thought that “If anyone builds it, everyone dies”, by Eliezer Yudkowsky and Nate Soares, was disappointing, relying on weak arguments around the evolution analogy, an implicit assumption of a future discontinuity in AI progress, conflation of ‘misalignment’ with ‘catastrophic misalignment’. I think that their positive proposal is not good.
I had hoped to read a Yudkowsky-Soares worldview that has had meaningful updates in light of the latest developments in ML and AI safety, and that has meaningfully engaged with the scrutiny their older arguments received. I did not get that. (1)
MacAskill criticizes Yudkowsky’s and Soares’s use of evolution as an analogy for the development of Superintelligent AI. While there is some merit in using evolution as an explanatory tool to help laypeople understand how the training of AI works, there are problems with the analogy. One of the problems MacAskill highlights is ‘that evolution wasn’t trying, in any meaningful sense, to produce beings that maximise inclusive genetic fitness in off distribution environments. But we will be doing the equivalent of that!’ Here MacAskill is referring to the drive to produce Superintelligent AI, where tech companies are actively trying to bring this product into being. The analogy thus falls down.
Turning now to the discontinuity in AI progress that MacAskill identifies in If Anyone Builds It…: Yudkowsky and Soares posit a hypothetical scenario in which there is a sudden overnight leap in intelligence, brought about when AI is used extensively to develop Superintelligent AI. Such an overnight leap in capability would necessarily mean it is too late for humanity to ‘align’ the new AGI to human values. MacAskill argues that the discontinuity exemplified here overstates the rapidity of the process of development, which may be very fast but not a ‘sudden, sharp, large leap…’
For Yudkowsky and Soares, aligning Superintelligent AI to human values is an impossibility, and therefore all attempts to build it must be stopped. MacAskill argues that their views on the alignment question are flawed: Yudkowsky and Soares at times conflate ‘imperfect alignment’ (where the AI doesn’t always try to do what the developer/user intended it to do) with ‘catastrophic misalignment’ (where the AI tries hard to disempower all humanity, insofar as it has the opportunity). For MacAskill, this is another example of Yudkowsky’s and Soares’s tendency to invoke the ‘discontinuous jump to godlike capabilities’ idea, which is a feature of their approach throughout the book.
It is useful to highlight MacAskill’s critique of If Anyone Builds It… because it validates what both Duncan B and I initially thought about the book. It is not well argued; it relies on some flawed analogies and assertions; and it offers up a proposed solution which, as Duncan B suggested above, is unlikely ever to be implemented.
The development of Superintelligent AI, like AI in general, is in the hands of predominantly US-based tech billionaires. They have a massive vested interest in pushing these products down our throats, products which have already caused and will continue to cause job losses across the globe. Other pernicious effects have arisen, such as deepfake porn, scams, heightened surveillance, and the targeting and killing of civilians in conflict zones. Now there is the possibility of the end of humanity, courtesy of artificial superintelligence. As Nick Estes so eloquently put it, referring to the morass engulfing the US in the light of the Epstein scandal (and his words, I argue, hold equally for the current AI era), we are witnessing ‘… the moral pathology of capitalism in its decadent phase and the morbid symptoms of imperial decline.’ (2)
The only solution remains socialist revolution.
(1) https://willmacaskill.substack.com/p/a-short-review-of-if-anyone-builds, accessed 10 March 2026
(2) https://nickestes.substack.com/morbid-symptoms, accessed 7 March 2026