
Killer Robots and the Technological Condition

Reflections on The Terminator (1984)

Published on Nov 14, 2022

On October 19, 2022, The Terminator (1984) was screened by CAPAS at the old Heidelberg Karlstorkino. It was, somewhat ironically, one of the last films to be shown at the venue before the cinema moved to its new home at the Kulturhaus Karlstorbahnhof in the Südstadt. But, as apocalyptic unveiling has long taught us, with all endings come new beginnings. The Terminator opened the regular apocalyptic cinema programme at CAPAS for the winter semester 2022/23. Elke Schwarz, one of our current CAPAS fellows, addressed the audience with a scholarly commentary on the classic film and its relation to our troubling times. The following is a revised version of that commentary.

Killer Robots and the Technological Condition

On October 6, 2022, six robotics companies – among them Boston Dynamics – published an open letter pledging not to arm their robotic creations, for fear that ‘bad actors’ might end up misusing the technology. Only a week later, the annual trade event of the Association of the United States Army featured a large number of armed robots with new levels of autonomous capability. The event added to already heightened fears that autonomous hunter-killer machines lie on the near horizon.

What better time, then, to re-watch the original 1984 Terminator film, the story that not only kicked off an immensely lucrative Hollywood franchise but also provided the ultimate visual for dystopian fears – or indeed utopian fantasies – about the future of war? While Terminator 2: Judgment Day (1991) is the best-known and most successful film of the franchise (it was the highest-grossing film of 1991 and, at the time, one of the highest-grossing films ever made), the 1984 original offers a fascinating glimpse back to the future – through the lens of the past – as it engages the perennial dichotomy between the human and the machine in an era when computational technology was still in its relative infancy.

Rumour has it that the idea for the Terminator appeared to James Cameron in a nightmare. Some suggest that this nightmare borrowed quite extensively from ideas depicted in the 1960s sci-fi series The Outer Limits, but who came first, or whether this is true at all, is, in this instance, not so interesting. Rather, the Terminator – as an individual film and as a franchise – continues a set of distinctly ‘modern’ socio-political and technological challenges that originate in our early embeddedness within industrial machinery and still resonate with us today.

So then, the special effects and focus on the cyborg-as-Killer-Robot notwithstanding, this is not a story about technology at all. Like most good science fiction stories, it is primarily a tale about human anxieties and human thriving, against the backdrop of socio-political strife and within a technologised world. Foregrounded here is a grappling with our own alienation from the world (Weltfremdheit); the fact that we are, in some ways, always artificial, but never artefact. Günther Anders (2010: 25) describes this condition as a type of shame; specifically, a Promethean shame: 

The desire of modern man to become self-made, to become a product, has to be considered against this changed foil: humans want to make themselves, not because they do not tolerate anything that is not human-made, but rather because they do not want to remain un-made. Not because they are ashamed to be made by others (God, gods, Nature), but because they are not made at all and, as un-made humans, remain inferior to their own fabrications. What is evident here is a variant of a classic confusion: the inversion between creator and creatum.

The film therefore engages the philosophical possibility of creating one’s own destiny, meditating on the fact that we are always enmeshed with our technological artefacts when creating the future. After all, as we learn in subsequent Terminator films, it is the remnant of the T-800 – the machine Sarah Connor crushes when she activates the automated factory press in the final battle – that gives rise to Skynet in the first place. The apocalypse is thus human-induced, both times: in making the sentient Killer Robot universe that spells the end of human life as we know it, and in defeating it. Ultimately, humans end up on the victorious side, bringing an end to the world of the machines through the birth of John Connor, a perhaps very literal exemplification of what Hannah Arendt (1998) calls ‘natality,’ the condition of new beginnings immanent in birth. Natality is the human condition associated with contingency and unpredictability and, ultimately, with the possibility of political action toward something new – the possibility to overcome, to give birth to a better future. “Do I look like the Mother of the Future?” Sarah Connor quips as she bandages up Kyle Reese’s arm. Apparently so.

Like all good sci-fi films and stories, the focus is on the condition of humanity, encapsulating the worries, fears, and aspirations of a specific human collective. To this end, visionary technologies are often mobilised with two aims: to lend a cutting-edge dimension and a little bit of sex appeal to the storyline, but also, importantly, to help draw the contours of socio-political problems and anxieties much more starkly, in ways only fictional technology can. In the 1980s, computer technologies as we know them today were nascent, the world was still coming to terms with the aftermath of World War Two, and the latent nuclear threat permeated political consciousness. This was the era of early personal computers, personalised audio equipment, and personalised entertainment devices: technologies that seem quaint by contemporary standards, but that spelled the beginning of the digital universe that envelops us today and of the subjectivities that result from this embeddedness. Research in artificial intelligence (AI) – a programme that had started in the 1950s – had just come out of a stark ‘winter,’ a period in which interest and investment in this new technology had waned. Now, in the 1980s, rapid advances in computer processing power reignited the hopes and ambitions for AI. And with this came lofty ideas of an AI superintelligence that would assist us in freeing ourselves from mortal constraints – but also might just wipe out all of humanity. Trapped, once more, between utopia and dystopia.

Investment in military weapons research, including computer technology for warfare, was, at that time, running at an all-time high. Growing sums were allocated to computerised weapons technology, and this was, of course, also the era in which Intercontinental Ballistic Missiles (ICBMs) were raising great concerns as nuclear weapons delivery platforms – all this on the back of a period of nuclear stockpiling, a still very ‘hot’ arms race, and the threat of all-out nuclear war. Autonomous weapons systems were still in their infancy, although weapons systems with some autonomous capabilities have, technically, existed at least since 1979, when deep-water mines were introduced that employed sensors to rudimentarily identify, track, and target mostly Soviet submarines. In some ways, the context is not so different from ours today – perhaps a recurrence or a remix – with ubiquitous new personal technologies that demand ever more of our attention, rapid advances in hypersonic missile technology by powerful states, and a latent threat of nuclear annihilation. And perhaps the socio-political commentary of the early Terminator film should raise some concerns about the contemporary conjuncture.

But as in so many cases where sci-fi visions of the future find their way into the present, it is rarely the political, economic, and social lessons of the fiction that translate into human progress; almost always, it is the dystopian technologies that are realised instead. Consider, for example, Neal Stephenson’s 1992 cyberpunk novel Snow Crash, which places significant emphasis on questions of class and privilege and in which the term ‘Metaverse’ was first coined. Today, the latter has materialised in a form all too close to Stephenson’s depiction in the book, while issues of class and privilege have remained largely unaddressed and have, instead, become amplified. The case here is similar. Rather than consider the themes of nuclear devastation and anxiety about an increasingly technologised world, one cannot shake the thought that perhaps the technology elites read or watch these tales of socio-political woe and think: “Ah yes, I get it, let’s look into whether we can make some quasi-sentient surveillance network and some Killer Robots as lethal delivery systems. That seems like a great idea.”

But as with all human affairs, and specifically with technology, things rarely turn out exactly as expected. A quick look at how reality compares with what the Terminator suggests we might be dealing with in 2021–29 is instructive. Neither AI sentience nor the humanoid autonomous cyborg of the first film – human tissue encasing a machine body – is anywhere near even rudimentary realisation. As it turns out, it is really difficult to make a well-rounded, functioning humanoid robot. The DARPA Robotics Challenge is a case in point and makes for amusing viewing for anyone wishing to dispel the notion that a robot take-over is imminent: robots missing steps, falling down, and toppling over, reminiscent of someone who has had just a few too many drinks at the bar. Similarly, the Boston Dynamics robot dog ‘Spot’ went viral with a video of the machine being comically thwarted by a banana peel (perhaps directing our attention away from more eerie uses of the technology, for policing in New York or to encourage social distancing in Singapore). Although the Boston Dynamics robots ‘Atlas’ and ‘Handle’ have made waves with impressive displays of backflips and other parkour skills, the technology is still a long way from the all-purpose nimbleness of the human body. Making a humanoid robot look life-like is also still some way off, and the most prominent attempts – such as the Hanson Robotics robot bust Sophia, who was granted citizenship in Saudi Arabia in 2017 – remain eerie but unconvincing as a human replica.

Perhaps the most accurate depiction of contemporary technology in the film is the use of deep fake audio, which appears twice: first when Arnold Schwarzenegger flawlessly channels the voice of a cop responding to dispatch, and later when he channels Sarah Connor’s mother to find out where she is. It is, admittedly, not entirely clear how the T-800 would have acquired the data needed to train a deep fake rendering of either voice with such nuance and sophistication, especially if, as Reese explains, Skynet had lost the majority of its data. Deep fake technologies present a growing challenge to the democratic pillars of society today, as they contribute to the spread of disinformation and hate speech and undermine trust as a core social currency. As a tool in information warfare, deep fake technologies are advanced and potent. As has been the case for a while now, ethical and legal frameworks to curb their impact lag behind the technological advancements made. The same is true for the use of AI in military contexts, not least because the market is lucrative – worth just under 7 billion US dollars in 2021 and set to double in the next five years. This, in turn, supports a growing market for autonomous weapons systems, an industry that generated 11 billion US dollars in 2020 and is forecast to triple by 2030.

As we move from sci-fi to reality in discussions of Lethal Autonomous Weapons Systems (LAWS) – or ‘Killer Robots’ – the Terminator is frequently invoked, perhaps for good reason; not because the T-800 is a realisable technology, but because the dystopic idea of a purely systematic killing machine raises anxieties that need to be expressed in some recognisable shape. Until very recently, media depictions of Lethal Autonomous Weapons Systems and the threat that they may ‘go rogue’ were often closely associated with images from the Terminator franchise. This portrayal was rather unhelpful for frank discussions on the regulation of autonomous weapons systems, as it played too much on sensationalist dystopian hype. Since 2020, that visual has subsided and a more mundane picture of LAWS has emerged in media and discourse.

The fact is that today’s autonomous weapons are a lot less conspicuously frightening but all the more threatening. Consider the US Department of Defense Algorithmic Warfare Cross-Functional Team (AWCFT), better known as Project Maven. Project Maven is the US military’s AI pathfinder programme, initiated in 2017. The programme uses machine learning algorithms to evaluate footage captured by drones, assess this footage against pre-configured parameters, and provide information about what could or should be identified as a threat and, potentially, acted on with lethal force in real time. Project Maven provides not a weapon as such but, rather, a weaponisable AI tool that could be employed to make a weapon system more autonomous.

At this stage, systems like those developed by Project Maven still keep the human in the decision loop, meaning the human remains the final arbiter of force. But this is quickly changing, and more systems that could be described as ‘HKs’ – hunter-killer systems, in the film’s parlance – are being trialled or indeed used. In 2020, for example, the US Defense Advanced Research Projects Agency (DARPA) flew a trial mission of drone swarms and swarming tanks, which were tasked with tracking down terrorist suspects in an urban environment. This was just one of many such exercises signalling very clearly that the Pentagon is determined to shift an increasing amount of decision-making – including targeting decisions – to machines. In 2021, a UN report on the conflict in Libya indicated that a Turkish-made drone was used to find, track, and attack one of the parties to the hostilities without requiring any data connection between the operator and the munition. The details of this event are scant and a bit fuzzy, but if the system was indeed used in full autonomy mode to identify, track, and attack human targets, this would – at this time – be the first publicly documented anti-personnel use of a fully autonomous lethal system.

And this is where the greatest dangers reside. AI-equipped weapons systems employed for lethal targeting are likely marred by incomplete, low-quality, incorrect, or discrepant data. This, in turn, leads to highly brittle systems and biased, harmful outcomes. Autonomous systems tend to be built and tested on rather limited samples of data, sometimes synthetic data, and sometimes inappropriate data. To date, it is simply not possible to model the complexities of the battlefield accurately. But the general thinking is that more data will somehow solve this problem – which, of course, also means more surveillance, usually of populations that have no say in it.

In 2015, it became known that Project Maven had a predecessor programme that used metadata to identify potential terrorist suspects for targeting. This system was called – seemingly without any irony – Skynet. The real-world Skynet is an extensive data-surveillance programme employing machine learning and algorithmic data analysis to establish patterns of behaviour that might identify suspects with potential terrorist intent. We now know that Skynet was employed to identify possible targets for drone strikes in Pakistan. When Michael Hayden, former director of the NSA and the CIA, let slip in 2014 that “we kill people based on metadata,” it was programmes such as Skynet that made this possible (Cole, 2014).

This reflects a broader trend in weapons systems employed to hunt and kill. The fundamental idea underpinning this trend is that more data will somehow solve the problems of war, society, and conflict. This, however, brackets a conscientious engagement with the social and political nature of warfare and conflict. War is, in essence, a social issue, not a technological or engineering problem. If treated as the latter, more surveillance to gather more data becomes inevitable, usually at the expense of populations and communities that have little say in the matter. It also represents, more generally, a shift toward the greater datafication of human life in an attempt to overcome our finitude, the condition of Promethean shame, and, ultimately, our doom.

On the surface, The Terminator is the age-old story of humans versus machines, when really its primary focus is our own dehumanisation in the race to produce ever more powerful, intrusive, and autonomous technological systems, including weapons systems, and our ongoing enmeshment in a larger ecology of digital machinery, with all the political complications this entails. This theme, I suspect, will be the stuff of films for many years to come – unless we annihilate ourselves before then.

Bibliography: 

Anders, Günther (2010) Die Antiquiertheit des Menschen, Volume 1. München: C. H. Beck Verlag.

Arendt, Hannah (1998) The Human Condition. Chicago: University of Chicago Press.

Cole, David (2014) ‘We kill people based on metadata’, New York Review of Books, 10 May 2014. Available at: <http://www.nybooks.com/daily/2014/05/10/we-kill-people-based-metadata/>


Elke Schwarz is Reader in Political Theory at Queen Mary University of London (QMUL) and Director of the TheoryLab at QMUL’s School of Politics and International Relations. Her research focuses on the intersection of ethics, war, and technology, especially in connection with autonomous or intelligent military technologies and their impacts on contemporary warfare. She is the author of Death Machines: The Ethics of Violent Technologies (Manchester University Press, 2018), and her work has been published in a broad range of journals across the fields of security studies, philosophy, military ethics, and international relations. Over the last ten years, she has been involved in a number of policy initiatives through various international NGOs and think tanks on issues related to the use of drones, autonomous weapons systems, and military artificial intelligence. She is an RSA Fellow and a member of the International Committee for Robot Arms Control (ICRAC). She is also co-series editor of the Springer series “Frontiers in International Relations” and Associate Editor of the journal New Perspectives.

