What does Captain America: The Winter Soldier get right about AI? We discuss the MCU film and a key modern-day question.
Captain America: The Winter Soldier is regarded as one of the best films in the Marvel Cinematic Universe, and with good reason. The film features stellar direction from Joe and Anthony Russo, a brilliant plot, and an amazing performance from Chris Evans as Steve Rogers, aka Captain America.
The film is known for its handling of important political issues like constant government surveillance.
But in the age of AI, the film has taken on a new meaning. This is because a central part of the plot, Project Insight, revolves around the use of autonomous weapons systems (AWS).
While the Russos likely never set out to comment on AI, the film shows just how dangerous it can be and depicts very real concerns about AWS.
What is Project Insight?
Project Insight started as a S.H.I.E.L.D. project. The original plan was to launch three helicarriers, each outfitted with an algorithm that determines whether a person is, or is about to become, a threat, and then eliminates them. The helicarriers pick their targets autonomously, without human input.
Later in The Winter Soldier, it’s revealed that HYDRA, the Nazi-born organization, has infiltrated S.H.I.E.L.D. and seized control of Project Insight. Captain America learns the algorithm’s true purpose: to predict whether people are, or will become, a threat to HYDRA’s new world order.
Project Insight can do this because the algorithm reads people’s bank records, medical history, voting patterns, emails, phone calls, and even their high school SAT scores.
It evaluates all this and more to predict a person’s future based on their past behavior. If that predicted future doesn’t fit HYDRA’s vision, the Insight helicarriers will kill them.
Seemingly sci-fi stuff.
How is AI used in war?
While HYDRA’s helicarriers are firmly in the realm of sci-fi, unfortunately, the technology they’re outfitted with isn’t. AI is taking on a much bigger role on the battlefield and has already killed people.
Autonomous weapons systems have already moved from fiction to reality. Turkey, for example, has developed a suicide drone, a type of loitering munition that is given a target and launched to hunt it down autonomously. It finds the target using facial recognition, then flies into their face and detonates.
Based on the data it’s given and nothing else, this weapon chooses who to kill. Without a human operator.
What does Captain America: The Winter Soldier get right about AI?
While The Winter Soldier is a fictional tale, it captures the very real danger of how AWS could weaponize people’s data. In the real world, that data could make it frighteningly easy to target people over their ideology, their beliefs, or even characteristics like their ethnicity. It’s scary stuff.
Every social media post, every expressed opinion, every purchase, and every website visited could be used against the person behind them. If AWS development goes unchecked, it could cause untold human destruction, along with the death of privacy, freedom, and the very concept of a free society.
If terrorists got hold of AWS, they could cause immeasurable damage with only a few lines of code. All the data they could want is available online, allowing them to turn it into ammunition to target people. Dictatorships could use AWS to make dissent impossible.
A video called Slaughterbots depicts, in horrifying detail, just how effective and unstoppable AWS could be. Made in collaboration with Stuart Russell, a professor of computer science and AI expert at the University of California, Berkeley, it shows a company unveiling a suicide drone as a precision tool for the battlefield, only for the technology to end up targeting civilians for their political views.
The Winter Soldier shows the damage that can be done with AWS. It doesn’t matter whose hands the weapons are in; everyone is put at risk. And unfortunately, there are no real-life superheroes to stop the robots.