
Reflections on Yuval Noah Harari’s Nexus

Andres De Miguel

Thrust into the international spotlight by his immensely successful book Sapiens: A Brief History of Humankind, Israeli historian and philosopher Yuval Noah Harari has since been steadily carving out a niche as a public intellectual. His later works, Homo Deus and 21 Lessons for the 21st Century, however, deviate slightly from his earlier pure historical study. Through historical case studies, he seeks to untangle the complexities of various pressing issues, including bioengineering, international terrorism, and the future of the human species.


Harari’s latest book, Nexus: A Brief History of Information Networks from the Stone Age to AI, straddles the line between historical research and educated hypothesising. As the title suggests, the first half of the book is a historical study of how humans have developed the ability to store, transfer, and communicate information since the Stone Age. The second half extends this analytical framework of information networks to the modern-day onset of mass AI adoption. Through it, Harari warns of AI’s potential dangers and outlines a strategy to ensure that AI does not collapse human civilisation.


If that sounds like a lot to do in a 400-page book, that’s because it is. The word ‘Brief’ in the title is instructive as to how this book was written, and how seriously it should be taken. This article will not delve too deeply into the text’s habit of reductionism and sweeping generalisation in service of a surface-level analysis of the AI alignment problem; Harari’s work was already aptly described by Elle Magazine on the inside cover as ‘poolside reading’. Rather, I want to focus on Harari’s apparent cognitive dissonance when he is forced to contend with the vested interests preventing progress in the field of AI safety, and what this reveals about not only his politics but those of the liberal mainstream more broadly.


It is always tricky to label someone’s political position concisely without oversimplifying their views, but over the course of Nexus certain trends materialise in Harari’s thinking. We can see Harari fundamentally as a pragmatist, and a moderate one. Harari himself states that the purpose of Nexus is to carve out ‘a more nuanced and hopeful view of human information networks and of our ability to handle power wisely’, while exploring the ‘middle ground’ between the extreme positions he identifies on either side of today’s acrimony (Prologue, 27). The latter parts of the book are dedicated to analysing how evolving AI information networks affect governmental systems. Harari contends that this novel technology could both threaten and bolster liberal democracies and totalitarian states (Prologue, 32).


Despite his air of centrist objectivity, it is clear that Harari sees liberal democracy, for all its flaws, as the vessel best equipped to navigate the storm of the AI age. When faced with the prospect of a more divided world, Harari contends that ‘as long as we are able to converse, we might find some shared story that can bring us closer’ (Ch 11, 384). Similarly, he praises liberal democracy for how it was able to bring about the decline of war by ‘humans changing their own laws, myths, and institutions and making better decisions’ (Ch 11, 391-392). More directly, Harari warns totalitarian leaders who may wish to use AI as a weapon in a dog-eat-dog world that ‘in the era of AI, the alpha predator is likely to be the AI’ (Ch 11, 393).


It is with this understanding of Harari’s position on the virtues of liberal democracy, and his hope that its self-correcting systems of government will hold against the myriad challenges of the AI revolution, that I want to examine the contradictions of Nexus. More specifically, the mental gymnastics and questionable argumentation Harari must employ to keep his hope in established institutions alive.


The fundamental position set against Harari’s ‘liberal optimism’ is what we might term the Marxist or ‘realist’ view of international relations. Though the two are understood as incongruous schools in the academy, Harari ties them together, observing that both predicate their theories of change on the strong exploiting asymmetric power relations. In the author’s own words, such a view ‘[sees] all humans as fundamentally interested in power… [who] have tried to camouflage this unchanging reality under a thin and mutable veneer of myths and rituals, but have never really broken free from the law of the jungle’ (Ch 11, 388). This materialist view of global power relations is particularly troubling for the liberal optimist, who trusts that benevolent powerful states and individuals may set aside their own interests to work willingly for the greater good.


Disproving this position is therefore fundamental for the main argument of the book. To preserve ‘the light of consciousness itself’ (Epilogue, 403) in the face of AI, ‘we must commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms’ (Epilogue, 404).


Given the importance of dismantling the Marxist position for the coherence of the book, Harari’s argumentation is disappointing and at times puzzling. For Harari, it is the intersubjective, made-up stories humans tell themselves that determine all relations between large-scale human groups (Ch 2, 30). To quote him in full:


If history had been shaped solely by material interests and power struggles, there would be no point talking to people who disagree with us. Any conflict would ultimately be the result of objective power relations, which cannot be changed merely by talking. In particular, if privileged people can see and believe only those things that enshrine their privileges, how can anything except violence persuade them to renounce those privileges and alter their beliefs?


To substantiate this claim, Harari employs some perplexing examples. Citing the fact that the United States and Britain invaded the Iraqi oil fields and not the Norwegian oil fields in 2003, he contends that the invasion could not have been motivated solely by materialist considerations (Ch 2, 30). This is a bizarre example from head to toe: it simply does not disprove the supposed Marxist claim that the US and the UK invaded Iraq for the material gain of controlling oil fields. A Marxist need only reply that Iraq was a country with less political power and influence than Norway, making its invasion far less costly for both the US and the UK, and thus an easier and more fruitful target. Even using only Harari’s simplified definition of the Marxist position, his counterexample is quickly brushed aside.


Similarly, Harari later attempts to disprove the Marxist position through an analysis of the falling number of violent conflicts in the world, stating that ‘the clearest pattern we observe in the long-term history of humanity isn’t the constancy of conflict, but rather the increasing scale of cooperation’ (Ch 11, 389). He goes on to cite the falling military budgets of many countries around the world as evidence that cooperation is possible, and that humanity is not driven by an innate urge to dominate for its own gain (Ch 11, 391).


The problem here is that, even under Harari’s own definition of the Marxist position as cited above, power and submission need not be achieved through violent means alone. Harari mistakenly equates the exercise of power with violent conflict, ignoring the myriad forms of economic, political, and technological power that dominant nations may exert over weaker ones.


Here, rather than taking the easy option of listing examples of countries using their economic or political influence to exert power over another without firing a single bullet, Harari’s own examples and arguments will suffice. Funnily enough, Harari specifically outlines the possibility of AI and data harvesting being used in a new form of colonialism: ‘A few corporations or governments harvesting the world’s data could transform the rest of the globe into data colonies - territories they control not with overt military force but with information’ (Ch 11, 370). In that case, is the exercise of power and domination possible through means other than warfare or not? Harari seems ambivalent on this question, despite his thesis’s coherence depending on the answer.


Having seen the confusing, contradictory, and at times fallacious arguments Harari employs against the Marxist position on power, we can now turn to the possible explanation behind his apparent cognitive dissonance.


Harari seems very aware of the vested interests threatening the regulation and measured adoption of AI. He correctly cites the lobbying power tech companies hold over legislative bodies in both the EU and the US, highlighting how tech firms spent $183 million on such lobbying in 2022 (Ch 6, 220), more than the oil and gas and pharmaceutical industries. Similarly, Harari spends a sizeable part of the book condemning the role of Facebook’s algorithm in inciting violent hate crimes against the Rohingya Muslim minority in Myanmar in 2016-17. The clear conclusion Harari draws from his analysis is that the algorithm promoted hate because it was programmed ‘in line with the business model…with a single overriding goal: increase user engagement’ (Ch 6, 199). In short, Harari concedes that between 25,000 and 43,000 Rohingya died in Myanmar in 2016-17 because Facebook needed to keep its shareholders happy.


Thus, it is clear that Harari acknowledges the lobbying power of tech corporations to prevent AI regulation, and how the power of algorithms, when combined with the intrinsic logic of shareholder capitalism, can lead to dystopian outcomes. He is unable, however, to thread these ideas into an impactful critique of oligopoly capital as the fundamental barrier to humanity’s positive and productive adoption of AI.


Perhaps Harari is aware of this himself. Perhaps he is also aware that his largely mainstream audience might not buy his books if he explicitly branded himself a socialist revolutionary and followed through on the criticisms implicit in his writing. His publisher must have the same considerations in mind. For the sake of argument, however, I shall assume here that everything Harari writes reflects his own views.


I believe the reason behind Harari’s cognitive dissonance on the existential stakes of AI policy is his fundamental belief in the mainstream liberal order. A liberal moderate like Harari, when faced with the apocalyptic challenge of AI, has to believe that the potential harms arising from contemporary liberal institutions are perversions of their function, not a fundamental fault in their design. This is, of course, the overarching distinction between a liberal and a leftist. The former seeks to improve the system of liberal democracy within the existing institutions of capitalism. The latter seeks to preserve the liberal democratic principles of free discourse and elections outside the status quo, which, in the 21st century, is unequivocally a capitalist one.


This liberal status quo view applied to technology and its ability to change societies is exemplified in Harari’s own words:


By swiftly disseminating the words of presidents and citizens, newspapers and telegraphs opened the door to both large-scale democracy and large-scale totalitarianism. Technology is rarely deterministic, and the same technology can be used in very different ways (Epilogue, 398-399). 


In this sense, Harari is not making any grand point about the material circumstances that determine how technological power is used for different ends. Instead, the analysis is reduced to the agency of individuals to do with technology whatever they please, regardless of context. Without this material analysis, any discussion of how AI may be mishandled, and of how we can prevent its catastrophic consequences, is meaningless. Unfortunately, Harari’s position as a moderate liberal, despite the caveats he attaches to the efficacy of liberal democracy, prevents him from constructing a coherent analysis of AI in the modern world, for his worldview would not survive it.


I don’t necessarily believe Harari to be a bad writer. In fact, I quite enjoyed reading Nexus for all the quirky examples and case studies Harari uses to support his surface-level assessment of the greatest existential crisis humanity has ever faced. After all, that is the intended effect. To not only analyse the issues Harari presents in this book, but also to solve them, would take lifetimes, not 400 pages in size 14 font.





Image: Wikimedia Commons/Martin Kraft
