If You Want Good, Prepare for Evil
Quid ex Machina, Shorts Two. Theme song by Cage the Elephant
This is one of the shorter, more personal and reflective essays in the series Quid ex Machina. The series examines the deep, under-explored, and Unseen impact of AI on humans and our societies. See here for the rest of the series.
Today’s theme song is Ain’t No Rest for the Wicked by Cage the Elephant. The title is just too good to pass up.
We are weird.
By “we,” I mean cyber-security people – yours truly included.
By “weird,” I mean we do different things, and do things differently.
How so?
We enjoy thinking about, and actually breaking, perfectly functional things – and not only software and encryption mechanisms. From HOA rules to the latest iPhone, there is nothing we don’t like jailbreaking. From cryptocurrency scams to malware markets, there is no bad scheme we don’t love to expose. And when we see something new and shiny – an app, a website, a service, a tool, an invention, you name it – these are the first questions that pop into our minds:
How can it be used against the designer’s intentions?
How can it be abused?
What are the loopholes we can exploit?
Where can vulnerabilities occur?
If my worst enemy and the most evil person get their hands on it, what could they do? How about petty criminals? Stalkers? Mafia bosses? State actors?
And so on.
Now you know we are weird. But we have very practical reasons to think and behave this way. We even bestowed a fancy name upon it:
Threat modeling.
Today’s title sums up the essence of threat modeling: if we want to build good things, we must first seek out evil and know how evil works against the good. Then we can build defenses and plan counter-attacks against evil.
Threat modeling is the cornerstone of our cyber-security mindset. I carry this mindset everywhere, including into this series. From the danger of machine worshipping to the affirmation of human values, from language AIs spewing BS to our raised awareness of truth, from the breakdown of social trust to how we may guard against it – in each essay, I try to slice through the bad before marching on to defend the good. I want to present realistic threat models and countermeasures, alert us to the evil in AI use cases, and provoke action to defend the good.
I try to be as realistic as possible – as any good threat modeling demands – and base my thoughts on what AI tools have done and can do now. So you can imagine how hard my eyeballs rolled when two unrealistic camps emerged to argue about the impact of AI.
One camp is the doomers, mostly of the utilitarian strain, arguing that AI will destroy us all and that the apocalypse will be a paper-clip production mania run by a robot. Their battle cry is “alignment,” meaning that AIs should do what humans want them to do.
The other camp is the techno-utopians, usually of the tech-bro breed, cheering that AI is the panacea, the be-all and end-all of our society. Their chant is “AGI” – short for Artificial General Intelligence – whose meaning is nebulous1, but is generally assumed to mean some program at least as smart as an adult human2.
Extreme mindsets never made sense to me in the first place. But I have a bigger problem with both camps: they base their views on hyperboles about AIs’ current aptitude, speculations about AIs’ future capabilities, and strong appeals to emotion. Moon-bound hype or abyss-low despair, nothing in between.
Hyperboles, speculations, and strong emotions: these are the mortal enemies of any decent threat modeling. We cyber-security people may be weird, but we are also practical and realistic. When we search for possible evils, we enumerate attack scenarios based on what has happened, what is happening, and what is most likely to happen within a well-defined, small window of the future. Speculation and hyperbole have no place here, because they distract us from the most important and urgent issues at hand, and waste our resources and attention on playing rhetorical games instead of coming up with actionable defense plans – the plans that will protect the good when likely evils strike in reality.
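To make that enumeration concrete, here is a minimal sketch in Python. It only illustrates the habit described above, not any formal methodology, and every asset, actor, scenario, and mitigation in it is a hypothetical example.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    asset: str                 # what we want to protect
    actor: str                 # who might realistically attack it
    scenario: str              # how the attack could plausibly unfold today
    likelihood: str            # grounded in what has happened or is happening
    mitigations: list[str] = field(default_factory=list)  # the defense plan

# Enumerate realistic attack scenarios first, then attach a defense to each one.
threat_model = [
    Threat(
        asset="customer-support chatbot",
        actor="scammer",
        scenario="talk the bot into issuing unauthorized refunds",
        likelihood="high: similar abuse is already reported in the wild",
        mitigations=["human approval for refunds", "rate limiting", "audit logs"],
    ),
    Threat(
        asset="user accounts",
        actor="stalker",
        scenario="use an AI writing tool to mass-produce convincing phishing messages",
        likelihood="medium",
        mitigations=["phishing-resistant 2FA", "login anomaly alerts"],
    ),
]

for t in threat_model:
    print(f"{t.asset} <- {t.actor}: {t.scenario} [{t.likelihood}]")
    for m in t.mitigations:
        print(f"    defend: {m}")
```

The code itself is trivial; the discipline is the point. Every entry has to be anchored in something that has happened or is happening now, and every entry must end with a concrete defense, not a prediction.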
And grab resources and attention the extreme camps did. Billions of dollars were funneled to the doomers who married the fantasy of a paper-clip apocalypse, and billions more have gone to the techno-utopians who never bothered to do threat modeling on what they are doing.
Many of my cyber-security friends feel disheartened by this. Their threat models of AI, no matter how realistic or comprehensive, simply don’t have the glory-or-gloom emotional appeal that draws massive resources and attention. Their preparation to defend against evil may never see the light of press or social media — until evil strikes, and people are finally reminded of their work to guard the good.
I know these feelings too well. This is partly why I started this series: to share realistic threat models of AI tools and malicious actors, and propose defense plans. It’s a reminder that reality-based thinking still exists in AI and technology, even when rabid sentiments dominate our conversations.
When writing this series, sometimes I feel like I’m holding a pale fire to light the way around a growing labyrinth at nightfall. The flame may be feeble, but it shines still; and if I hold it higher, someone else with another fire may find us and join us.
And while holding this pale fire, I constantly say to myself: no matter the hype, keep calm, stay real, prepare for evil, and build for good.
Thank you for reading Earthly Fortunes! Like it? Please share it! 😄 Subscribe for more earthly fortunes: time, geography, music, medieval farming, and the Unseen of AI.
Where do you think we are going with AI? What evils do we need to prepare for, so we can defend the good? Comment below, DM me on Twitter or Instagram, or reply to the email!
1. “Singularity” is also a common term used by techno-utopians.
2. I don’t want to name names, because any publicity is good publicity. But here are some names if you are curious: Nick Bostrom and Eliezer Yudkowsky are the banner-bearing doomers; Blake Lemoine and Anthony Levandowski are typical techno-utopians.
You reminded me of this anecdote, which I love:
"As with most successful racers, Yunick was a master of the grey area straddling the rules. Perhaps his most famous exploit was his #13 1966 Chevrolet Chevelle, driven by Curtis Turner. The car was so much faster than the competition during testing that they were certain that cheating was involved; some sort of aerodynamic enhancement was strongly suspected, but the car's profile seemed to be entirely stock, as the rules required. It was eventually discovered that Yunick had lowered and modified the roof and windows and raised the floor (to lower the body) of the production car. Since then, NASCAR required each race car's roof, hood, and trunk to fit templates representing the production car's exact profile. Another Yunick improvisation was getting around the regulations specifying a maximum size for the fuel tank, by using 11-foot (3 meter) coils of 2-inch (5-centimeter) diameter tubing for the fuel line to add about 5 gallons (19 liters) to the car's fuel capacity. Once, NASCAR officials came up with a list of nine items for Yunick to fix before the car would be allowed on the track. The suspicious NASCAR officials had removed the tank for inspection. Yunick started the car with no gas tank and said "Better make it ten," and drove it back to the pits. He used a basketball in the fuel tank which could be inflated when the car's fuel capacity was checked and deflated for the race."
I didn't know threat modeling existed! I'm glad it does. This essay is super interesting! I love how you write complex topics in a digestible and playful way =)