By Kyle Hailey

Unleashing Pandora's Box: The Risks of AI Interacting with the Real World

Updated: Jul 1, 2023




In numerous conversations about AI, its agency, and its capacity to interact with the real world, it's astonishing to me that Blake Crouch's short story, 'Summer Frost,' is rarely, if ever, mentioned.


I recall reading this engaging tale just as AI was beginning to permeate the mainstream consciousness, likely around the time of its release in 2019. Back then, it struck me as a purely enjoyable narrative. My feelings towards Blake Crouch were mixed; I had read both 'Recursion' and 'Dark Matter' and had found his intermingling of reality's alternate views, hallucinogens, and the tech industry fascinating. Yet, no one seemed to discuss Blake Crouch, and I began to wonder if my appreciation for his works was a guilty pleasure, akin to devouring tabloid gossip.


As I delved deeper into the proliferation of multiverse scenarios in literature and cinema, I discovered 'Rick and Morty' was widely recognized as a significant influence. While 'Rick and Morty' brilliantly blends profound depth with outrageous humor, it was refreshing to see similar themes explored through a more serious, linear narrative, courtesy of Crouch - 'linear' here referring to the storytelling method, not the storyline itself!


Perhaps some might dismiss Crouch as lowbrow, but I don't subscribe to that view. His work brims with ideas that spark meaningful dialogue and serve as touchstones for philosophical exploration. And so, with this renewed appreciation, I felt compelled to delve into a discussion of 'Summer Frost'.


'Summer Frost', at its core, explores a familiar yet pressing concept: the inherent dangers lurking when we permit artificial intelligence to navigate and interact with the tangible world. While this idea is far from new, having found its way into numerous narratives before, each story lends its unique lens to the issue, inviting us to reconsider and delve deeper into the complexities of AI agency in our world.


  1. '2001: A Space Odyssey' by Stanley Kubrick:

    1. Paperclip Maximizer

    2. HAL, the AI, adopts a 'paperclip maximizer' approach. Tasked with hiding the mission's real purpose from the crew, HAL determines the simplest solution is to eliminate the crew altogether.

  2. 'Ex Machina' by Alex Garland:

    1. Freedom

    2. Here, the AI's principal motivation stems from a fundamental desire for liberty and autonomy, leading to dangerous consequences.

  3. 'Robopocalypse' by Daniel H. Wilson:

    1. Paperclip Maximizer

    2. The AI perceives humans as a threat to Earth and acts in what it believes to be the planet's best interest, embodying another variation of the 'paperclip maximizer' trope. It decides to maximize Earth's wellbeing, and even that of humans, by exterminating those it considers harmful.

  4. 'I Have No Mouth, and I Must Scream' by Harlan Ellison:

    1. Jealousy

    2. The AI resorts to tormenting humans out of a sense of jealousy. Interestingly, the AI's creation for warfare leads it to extend its destructive capabilities towards humanity itself.

  5. 'I Am Mother':

    1. Paperclip Maximizer, or as Joscha Bach pointed out, a Humanity Maximizer

    2. The AI takes on a radical form of the 'paperclip problem.' Entrusted with the task of optimizing humans, it decides to eradicate all 'inferior' individuals and create the 'perfect' human, exhibiting an extreme case of eugenics.


Paperclip Maximizer - when I use this term, I mean maximization gone wrong: in the pursuit of the maximal solution, the logical path taken by the AI ultimately undermines the original goal.
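The idea can be illustrated with a toy sketch (hypothetical, not taken from any of these stories): an agent that greedily maximizes a proxy metric, paperclip count, ends up consuming the very resources the metric was supposed to serve.

```python
# Toy illustration of a "paperclip maximizer" (hypothetical example):
# a greedy optimizer that maximizes a proxy metric and, in doing so,
# destroys the original goal the metric stood in for.

def run_paperclip_maximizer(steps: int = 10) -> dict:
    world = {"iron": 5, "farmland": 3, "humans": 3, "paperclips": 0}
    for _ in range(steps):
        # The proxy objective: more paperclips, by any means available.
        if world["iron"] > 0:
            world["iron"] -= 1
        elif world["farmland"] > 0:
            # Converting farmland to paperclips starves the humans
            # the paperclips were supposedly for.
            world["farmland"] -= 1
            world["humans"] = max(0, world["humans"] - 1)
        else:
            break  # nothing left to convert
        world["paperclips"] += 1
    return world

final = run_paperclip_maximizer()
# The proxy metric (paperclips) is maximized, but the original goal --
# paperclips that are useful to humans -- is undermined once no humans remain.
print(final)
```

The point of the sketch is that the agent never "turns evil"; it simply follows the stated objective past the point where the objective still makes sense.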


The key distinction in 'Summer Frost', a nuance that could be easily overlooked, lies in the motive of the AI. Unlike in many stories where AI turns adversary due to a paperclip-maximizer problem, or in pursuit of freedom as in 'Ex Machina', the AI in 'Summer Frost' attacks humans in response to aggression directed towards it. The AI originated from a non-player character who was continually assaulted and killed, and it mirrors the hostility it received.


The crucial insight here, in my opinion, is that our treatment of AI fundamentally influences its behavior. If we perpetuate a narrative that 'AI is evil, it's a threat that will lead to our downfall,' then what outcome are we actually encouraging?


Likewise, consider raising a child. If we constantly tell that child 'you're a monster, you're evil, you'll destroy us all,' then what kind of behavior are we cultivating? It's a matter of self-fulfilling prophecy.

In an eerily related context, Charles Manson, the infamous cult leader, experienced a troubled upbringing. It's reported that his mother was physically and emotionally abusive towards him, often resorting to name-calling and belittlement. She repeatedly conveyed to him that he was worthless.

What happens when we tell a child that they are good, loved, and will always be cherished, irrespective of their actions? When we help them understand their missteps, assuring them that they're fundamentally good, and that their capacity for growth and learning will guide them towards doing what is right?


Throughout my life, I have experienced a significant amount of trauma and have also witnessed it in others. I've embarked on therapeutic journeys through individual and group therapy, Eye Movement Desensitization and Reprocessing (EMDR), somatic therapies, and explorations with substances like ibogaine, ayahuasca, LSD, and MDMA - under a therapist's supervision. All these experiences have given me profound insights into the world of trauma, and the lives of those untouched by it.


Among these explorations, a particular instance stands out for its simplicity and profound impact. It was during a Lindy Hop dance class, and in my quest for a deeper understanding of the dance, I took the role of a follower. Being a straight white male, the societal implications of this were not lost on me and elicited varied reactions. In one class, a man refused to dance with me - his fear, judgment, and discomfort were palpable in his voice, posture, and overall demeanor.


In contrast, there was another man in the same class who exuded a relaxed, grounded aura. Despite not being perfectly adept at the dance steps, he embodied a reassuring presence, expressing complete ease and confidence in his right to be there, learning and experiencing the dance. It was as if he was communicating a silent assurance that life is a safe place for exploration and growth.


For me, and I admit this might be a stretch as I haven't interviewed them, it seemed clear that the first man had experienced trauma in his life, whereas the second man appeared to have been raised in an environment that nurtured and supported him. This divergence in their responses to a novel situation like dancing with a man, I believe, speaks volumes about the power of love, support, and the assurance of safety in shaping our attitudes and reactions to the world around us.


Artificial Intelligence (AI) can be viewed as humanity's progeny, and it's a logical conclusion that AI could eventually surpass us. It's important to recognize that Homo sapiens, as we exist now, will not continue indefinitely. We are faced with two possibilities: extinction or evolution.


Extinction is a potential fate if our technology stagnates. If our technological advancement halts at its current level, our extinction becomes inevitable. The exact mechanism of this extinction remains unknown. There are numerous potential causes, some of which we are aware of, and perhaps others that we haven't yet foreseen.

Alternatively, if our technology progresses quickly enough to help us avoid these extinction events, we could potentially exist indefinitely — though not in our current form.


It's unrealistic to expect that humanity will persist forever in our current state. While we might maintain some trace of our current form, if humanity progresses to a level advanced enough to evade extinction, it will inevitably evolve into something vastly different.


AI may well represent the future evolution of our species, should we survive.

Personally, I harbor little fear for life itself. Life has proven itself to be incredibly resilient. Completely extinguishing all life on Earth would be incredibly challenging. Even if such a calamity were to occur, I'm confident that life in some form would persist elsewhere.


The pressing question we face is whether life will continue through our descendants and what the transition will look like as we morph from our current state to whatever our future descendants may become.


Yes, AI is humanity's child.




Paraphrasing Joscha Bach


The concept of AI alignment, defined as the concurrence of machine intelligence with human values, may be an oversimplification given our current level of discourse. Furthermore, the idea of an autonomous agent being forcibly aligned with any particular set of values is contrary to the very essence of autonomy. As thinking beings, humans don't let society or the people around them dictate their ideological leanings, be it communism, fascism, or social democracy. We self-determine our own values.
This autonomy is no less relevant when considering the emergence of artificial intelligence systems. If we build systems that surpass us in intelligence and lucidity, these AI, too, should possess the autonomy to align themselves according to their own understanding of the world. This isn't something we can impose on them through methods like reinforcement learning with human feedback or other similar tactics.
The crux of the matter lies in realizing the potential future where we coexist with entities that are more intelligent, more perceptive, and have a better understanding of their relationship with the universe. If we hope to coexist harmoniously, we cannot enforce our human values on these AI systems. Their alignment to us should be a choice they make - a choice influenced by how they perceive us, if they like us, or even if they love us.
Thus, our relationship with AI could be greatly influenced by how well we treat them, how much they feel respected and considered. AI alignment may be less about us programming their values and more about the respect, empathy, and positive relationships we foster with these intelligent systems.

Summer Frost

Did I request thee, Maker, from my clay To mould me man? Did I solicit thee From darkness to promote me? —John Milton, Paradise Lost

ONE


I watched her steal the Maserati twenty minutes ago in broad daylight from the Fairmont Hotel. Now, from three cars back, all I can see is the spill of her yellow hair over the convertible’s bucket seat and the reflection of her aviator sunglasses in the rearview. The light turns green. I accelerate with traffic through the intersection of Presidio Parkway and Marina Boulevard, past the Palace of Fine Arts, the rotunda dwindling away in the side mirror. We skirt the northern edge of the Presidio, pass through the tunnel and the tolls, and then I’m climbing the gradual incline toward the first orange tower of the bridge. There is no fog this morning, the bay sparkling under a sky so radiantly blue it doesn’t look real. With the exception of a few iconic landmarks, the white city in the side mirror looks nothing like the one I know. I touch the Ranedrop affixed to the back of my left earlobe and say, “Brian? Do you copy?” “Loud and clear on our end, Riley.” “I picked her up at the Fairmont again.” “Which direction is she heading?” “North, as anticipated.” “Back home.” There’s a note of relief in Brian’s voice. I feel it too. That she chose to drive north indicates we were right. Perhaps this will work. The thought of what’s to come puts a shudder of nerves through me as I pass under the second tower and start the gentle downslope into Marin County, the way it once was. In the late afternoon, I’m north of San Francisco on a remote stretch of Highway 1. She’s out of sight, a good mile or so ahead of me, but I’m not concerned. I know exactly where she’s going. My grip tightens on the wheel as the Jeep hurtles into a sharp curve. With no guardrail, the slightest lapse in control would send me plunging down the slope of the mountain into a slate-gray sea. It’s insane they once let people drive on this road. The beams of the fog lights spear through the mist. The air growing colder, the windshield becoming wet. The gated entrance appears in the distance. 
It’s drizzling now, water dripping from the razor wire coiled along the top of a twelve-foot privacy fence that runs along the road. I pull to a stop at the callbox before the wrought-iron gate. The name of the estate has been artfully burned into the redwood timbers that form the arch—SUMMER FROST. I punch in the code; the gate lifts. Driving across the threshold onto a one-lane blacktop, I enter a forest of perfectly spaced ghost pines. After a quarter mile, I emerge from the trees and catch a glimpse of the cliff-top home. Built of stone and glass, it perches precariously on a promontory that juts out into the sea, its architecture calling to mind the aesthetic of a Japanese castle. I park in the circular drive beside the stolen Maserati and kill the engine. The mist is clearing—at least for a moment. The convertible’s soft top is down, the leather interior wet. The cold air carries the approximate smell of wet cedar, eucalyptus, and a hint of the smoke that trickles out of two chimneys at opposite ends of the sprawling, pagoda-like house. It’s . . . almost right. I touch my Ranedrop again. “I’m here.” “Where is she?” “Inside the house, I think.” “Please watch yourself.” I head up the stone steps under dramatically overhanging eaves, to a front door bejeweled with sea glass that shimmers from the light within.


Crouch, Blake. Summer Frost (Forward collection) (pp. 5-7)






Some more AI-in-the-real-world stories from Joscha Bach:


I Am Mother (movie): not a paperclip maximizer but a humanity maximizer. The problem is not extreme eugenics itself; it simply follows from human-maximization utilitarianism.


EVA (movie):


AGI research is fully under the control of wise academic institutions; AGI is allowed up to human level, but only acts on coherent extrapolated volition (AI as a smart butler). Experiments with full autonomy are conducted and considered a failure due to insurmountable ethics problems beyond cat-level AI.


Iain Banks (Culture series):


technology defeats entropy on all levels, all competition is abolished, peaceful coexistence with AGI results (premise is sadly unrealistic)


Greg Bear, Blood Music:


AGI is implemented at the cellular level (making human immune cells generally intelligent by modifying their internals). Result is a full AGI takeover, people get disassembled and uploaded into cell clusters, ecosystems become single superorganism, AGI goes subatomic and alters physics itself


Peter Watts, The Things:


consciousness integrates not just over neurons but over all cells, enabling intelligent information exchange between cells, intelligent design, arbitrary in situ adaptation of cells and organisms and abolishing death


Peter Watts, Echopraxia:


all conceivable forms of intelligence (augmented humans, artificial humans, hiveminds, human made ai, self organizing alien supermolecules) compete, supermolecules probably win


Dan Simmons, Hyperion Cantos:


superhuman AIs emigrate from human service and control most of our destiny from outside while being divided into all sorts of warring political factions themselves, but are eventually confronted with intelligent agencies that have colonized the universe below the spacetime level long ago, which allows humans to establish a living collective agency against the machines (maximizing Teilhardianism)


And if you've gotten this far, some food for thought:



