Information and its Discontents
It begins with a twitch, or perhaps an itch. A feeling that grows from the short circuit between finger and screen. Barely a desire, a subterranean impulse: to plunge. Before I can register what’s happened, my face is scanned, recognized, thrown into the riptides of sound, image, and text that roar in and out of the machine. Rage, ecstasy, desire spiral out all fractal-like. How many times have I found myself here? Drawn along deforming vectors, little more than a lonely node traversing a twisting topology where you always end up right where you began. Swiping away at my phone, not sure how I even started, and not willing to leave.
Digital immersion is a peculiar sensation. Its phenomenology remains vague despite being a feeling we're all too familiar with by now. Like an inverted reflection of a flow state, or mindfulness meditation (just without any of the mind), designed to keep us on a hamster wheel incapable of reaching escape velocity. No doubt the blame lies largely with our modern techno-oligarchs. Multibillion-dollar companies have invested millions to sharpen the efficacy of our devices to a bleeding edge. Hacked sensory pathways generate false urgency; reprogrammed neural loops attach strange signals to ancient rewards. Yet it'd be wrong to assume that's all there is to this story, because underneath the manipulative bells and whistles lives something else that first beckons us into this digital world.
It’s this mysterious attractive force that Birrell Walsh also sensed forty-some years ago. In his essay titled “Monkey Trap or the Mystical Engine”—which fittingly opens the January 1985 issue of the Whole Earth Review thematically dedicated to the topic “Computers As Poison”—he trains his focus on the odd appeal of these then-emergent machines. “To those who don’t like computers,” Walsh begins, “these machines seem to be a kind of monkey trap… The monkey reaches in and grasps the fruit, but with the fruit in his fist he cannot get it out.” Anyone who has had the misfortune of seeing me lost in my device, dead-eyed and tight-gripped, will agree that this comparison is as apt now as it was then.
However, he continues, “to those of us who love these machines, they are a portal into another, fascinating universe… full of possibilities.” It’s this world that he wants to shed light on, to explain to the uninitiated who scorn the technophile without understanding them. You see, he argues, to the insider, the computer isn’t a trap, but a gateway. Past the event horizon is an alternate dimension, the “informational world.” For the first time, the computer allows “the manipulation of almost pure information.” No longer is it “buried in life, in books, in tables and patterns”; on the computer it is “purified.” Absorption into this information space is nothing short of “ecstasy.” “How many fond of wine could resist its distillate, brandy?” he asks.
Our world today would appear to run on this high-proof distillate. In much of the West, we have transitioned into what is often referred to as the “information economy.” Intelligence agencies want information so they can resist—or wage—“information warfare.” We’re warned that platforms and companies are constantly trying to harvest more information from us. According to the Freedom of Information Act, information is “vital to the functioning of a democratic society.” It is the lifeblood that pulses through our socioeconomic veins, the current that animates our waking world.
Given information's supposedly central role in contemporary life, it'd be fair to assume that we'd have rigorously developed the concept by now. Yet the closer one looks, the more its edges blur. Walsh himself assumes that its charisma is self-evident, but he only obliquely references anything like a definition. For him, as for most, information is axiomatic in the truest sense: an ill-defined pillar upon which an entire edifice is architected. And so the very source of the computer's alleged appeal remains deferred—an ever-receding point, a black hole observed only through its gravitational distortion—raising the question of what, exactly, is the ghost that lives in this machine.
In another time, computers might have been considered angels. After all, they are the inheritors of these figures in their originary sense, descended from the Greek angelos—which referred to an envoy or messenger. Similar to our computers today, the angels of the fabled past gave us mortals access to the divine messages that absorbed us in the ecstatic. In the Islamic tradition, they were thought to have been made from light. It’s easy to imagine them crackling with the electric energy that now flows into our devices, carrying sacred information from the other side; perhaps they even glowed with the uncanny blue hue of our current message-bearers.
Information has long been tied to the otherworldly. It's a genealogy baked into the term itself. The "form" in in-form-ation is a distant relative of the Platonic forms, which Plato thought of as unchanging, transcendent entities that existed in the luminous domain of truth—everything we saw in the physical world was merely a projection of the deeper things-in-themselves that constituted the real. In the famed allegory of the cave, the prisoner escapes the shackles that bind him and realizes that the things he had taken for reality were mere shadows; like the computer user, he passes through the portal of the cave into the light of day where he can finally exist amidst the forms, the progenitor to Walsh's "informational world."
Unlike the weightless information-space of our digitized lives, however, these realms of truth were filled with ethical gravity. It's revealing that in his telling of the allegory, Socrates asks what might happen if the prisoner were to go back to that cave and appear in court; he would then have to debate the "shadows of justice" with those who had "never beheld the Ideal Justice."1 The cultivation of knowledge was tied to lofty principles like Justice, Truth, Virtue. It was only by contemplating the forms and developing an understanding of the world through them that we could truly learn what it meant to live well and flourish as humans.
It’s a deep irony then that modern information is born at the precise moment that it is severed from any pretense of human meaning. Technology has played a critical role in this fissure. As the media theorist Friedrich Kittler writes, it was the electric telegraph that, “for absolutely the first time,” decoupled information from communication “in the form of a massless flow of electromagnetic waves.”2 Information was no longer a message waiting there to be drawn out by the human mind, but a current of electrical signals that moved from machine to machine—energy borne along by cables and wires and inhuman artery.
These technologies inspired the researcher Ralph Hartley to begin developing a quantitative, measurable notion of information in his 1928 paper "Transmission of Information."3 Only by eliminating "the psychological factors involved" could one establish "a measure of information in terms of purely physical quantities" to analyze the efficacy of emergent technical systems like telegraphy and telephony networks. Twenty years later, Claude Shannon inaugurated the field of information theory with his seminal paper "A Mathematical Theory of Communication."4 While "messages have meaning," he writes, "these semantic aspects of communication are irrelevant to the engineering problem." By disconnecting information from the human domain—and instead focusing on the systems-level processes by which messages are encoded, transmitted, and decoded—Shannon transformed it into a physical quantity akin to energy and mass, a property that existed regardless of whether it was decipherable by any human actor.5
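Shannon's move can be made concrete with a toy calculation (a sketch of the standard entropy formula, not anything from Walsh's essay): information is measured purely from the statistics of symbols, with no reference to what, if anything, they mean.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information per symbol in bits: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Meaning is irrelevant; only the symbol distribution matters.
# A message of one repeated symbol carries no information (H = 0 bits),
# while four equiprobable symbols yield 2 bits per symbol.
print(shannon_entropy("abcdabcd"))  # → 2.0
```

Note that a love letter and a string of random gibberish with the same symbol statistics carry, on this measure, exactly the same amount of information—precisely the "irrelevance" of semantics that Shannon stipulated.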
The maneuvers of the 20th century would thus reconstitute information in the image of the machine. While information had always been otherworldly, we’d historically believed that we had privileged access to it through philosophical reflection; it could shape (in-form) our minds, and with enough time gradually arc us towards true understanding, even wisdom. No longer. Information was now spoken in the machine’s cryptic language. It had no interest in all-too-human concepts like justice or virtue; it trafficked in entropy and noise, bits and binaries, flows and currents. The optimization of mechanical processing and transmission, not human flourishing, was to be its primary purview.
For this reason, it’s not quite right to say that the computer “distills” information, as if it were out there simply waiting to be processed. Instead, computers exist as elements of a broader technical network that (beginning with telegraphs and running through our digitized systems) gives birth to a new kind of information entirely: a cold-blooded, inhuman information that devours the fleshy forms of knowledge that came before—information reimagined as measurable quantity rather than felt quality, optimized for lightspeed global transmission and absent historicity. Recently, this form of information has reached its telos via “synthetic” data, produced using generative AI, meant to fill in the missing gaps of our ever-ravenous datasets. No longer does information need to be laboriously gathered from the actual world’s people and places; it can now be manufactured on demand.
Whereas epistemologists once thought a minimum definition of knowledge was a justified true belief, computerized information needs neither to be believed nor true to function. It just needs to exist at a scale where it can brute-force its way through any uncertainty. We might say that the "post-truth" world is, at its core, this informational world taken to its fetishistic extreme. After all, if computers really gave us information in the familiar sense, you'd think we'd feel more, well, informed when we re-emerge from our plunges into cyberspace. I can't say that I've experienced much of this lately. Immersion in this world is not the clarifying ascent to the rarefied domain of truth, but a descent into the sublime underworld of steel and silicon thought. Not that this underworld is without its own appeal; there is a charge that builds upon one's approach, the first glimmerings of a new kind of technologized feeling.
My first encounter with this sensation took place in the early 2000s. Growing up, one of my favorite things to do was lug my parents’ bulky laptop into the closet and hook it up to a webcam. By placing the webcam in front of the closet door, I could get a live feed of what was going on immediately outside. I’d sit there—in a closet in the Midwest in Bush’s post-9/11 America—for god knows how long like some operative on a stakeout, watching this feed of nothing. Sometimes, I’d open Google Earth and poke around my neighborhood, pretending I was a satellite. I eventually supplemented this system by convincing my parents to buy me a toy branded as a spy laser detection system so that I’d have advance warning of approaching agents (e.g., mom, dad, brother, family dog).
Despite nothing ever actually happening, I was enthralled. I wasn’t merely seeing the view outside, which I could have easily done at a much higher fidelity by just opening the closet door. Rather, I felt that I was somehow integrated into the view. Or maybe I had been disemboweled and dismembered and the strands of my nervous system now stretched over the topology of the house. Or better yet, I was a central processor, the nexus synthesizing the disparate channels of information that poured in from the house and beyond. In short: I was, for the first time, seeing and sensing like a machine.
If we follow bell hooks’ writing on the commodification of racial difference and Otherness as an act of “eating the other,” we might say that here the seduction of information lay with being eaten by the machine.6 Decades later, I still haven’t forgotten the enchanting thrill of this early digestion into machine-thought. I suspect I’m not alone. Most of us have glimpsed this technological sublime in some form or other. It comes to us in the most unexpected places. The trans theorist McKenzie Wark, for instance, finds it in the act of raving. She argues that unlike “most cishet men” who mistakenly “imagine they can outfuck the machine,” the raver by contrast has learned to enjoy the process of producing “styles for life inside the machine,” of melting with it and experiencing not the “time of duration,” but “the romantic other of machine time.”7
Call it a techno-erotics: one that finds release in the entanglement between our pleasure circuits and the network's informational circuits—the sensation of organic flesh pressing up against the outer solidity of hardware before dissolving into the interior softness of software. Though Wark is right to identify this transposition as sensuous or "romantic," I remain skeptical of its liberatory potential. In contrast to Wark's Raving, Natasha D. Schull's book Addiction by Design offers a more nuanced, equivocal description of the sensation by way of a longtime video poker player named Mollie. "The thing people never understand," Mollie says, "is that I'm not playing to win." What matters instead is "to keep playing—to stay in that machine zone where nothing else matters." There, you're in "the eye of a storm… vision is clear on the machine in front of you but the whole world is spinning."8
As the writer Rob Horning once noted to me, to be consumed in such a way is to experience a liberation from the winds of chance and unknowability that surround us, to outsource all of one’s cloudy doubts and stormy indecisions to the clean epistemology of computational processing. Little wonder that the machine zone is so alluring to us today. Amidst a physical, lived world spiraling into ever-greater stratospheres of abstraction and incomprehensibility, the informational world promises to nullify the uncertain—to create a small opening in which we might experience, in Walsh’s words, “easy control over complex operations” at last.
In cleaving us off from the indeterminate and complex, however, this consummation also leads to an all-too-exploitable alienation. Georges Bataille famously described erotics as "assenting to life up to the point of death," and our new techno-erotics is no exception.9 Echoing the language of both the raver and the gambler, a guilt-wracked drone pilot tellingly speaks of a "zombie mode" in which he would merge with the technology to kill.10 When all we are left with are vectors of action, data on what has been done and what one can do, we remain bereft of any knowledge about what one should do. Information-space is a Lethean current where one easily forgets the very knowledge of good and evil that gives us our agency. Death shades our desire for contact with the lifeless machine. Not just our own figurative death—in the form of a turn away from the moral demands of human living—but the real death of others.
Unfortunately, escaping this zone is no simple task. In an economy increasingly reconfigured around the flow of information, it has become a default setting. I once had a boss who told me "multitasking is for computers," and then proceeded to send me dozens of messages and to-dos in succession over workplace comms as I was buried in other work. Knowing and thinking like a human is a liability. It's too slow, involves too many feelings, too much deliberation, too much uncertainty; it opposes the "fluid, smooth, fast circulation" that "fuels contemporary capitalism's extremization," as Anna Kornbluh writes.11 Success requires one to always already be like the machine, to multitask, treat people as statistics, pare down human complexity to its most digestible form, keep a cool head while delivering death, constantly send and receive, be always available, always on.
In the past, many assumed that this stance would catalyze its own transformation, whether through accelerationist collapse or technologically-aided liberation. But it appears that despite the constant stream of activity, little change accrues. It’s one reason why social theorist Jonathan Crary calls our age of digitized information the age of 24/7 capitalism. His choice of 24/7 is intentional, referring to “an arbitrary, uninflected schema of a week, extracted from any unfolding of variegated or cumulative experience.”12 While information accumulates with every action, click, and purchase, it rarely synthesizes into anything greater than the next advertisement. Unlike knowledge, information doesn’t aspire to weighty universality; it is too light, too content with becoming obsolete as quickly as it is produced. Just as it was when I was a child sitting in front of my laptop, staring at my unmoving screen, nothing happens, and we remain enthralled.
In his original article, Walsh suggested a few ways out of these ouroboric traps. Humor and rival obsessions, he argued, were two ways to "step outward" beyond this digitized realm. These suggestions, however, failed to anticipate the extent to which these affects would be mobilized to keep us online. Not through information as Walsh conceived of it—which he continued to see as meaningful and self-serious in spite of it all—but through the reign of "content," which evacuates information of any remaining substance and positions it as little more than an empty container to be filled with whatever desires and distractions a viewer may ever want. Rather than break the cycle, content effectively channels humor and obsession to reinforce the feedback loops that generate further mindless engagement.
Whereas "information" emerged from transmitter systems of the early 20th century, "content" comes to us from the more recent logic of Web 2.0. In 1999, Darcy DiNucci published an essay called "Fragmented Future" in which she coined the term Web 2.0 to illustrate that the web as it was then was merely an "embryo of the Web to come."13 Unlike Web 1.0 with its static landing pages, inert users, and one-way "read-only" experience, Web 2.0 would be a dynamic system spiraling in all directions at once. It wouldn't have "any visible characteristics at all," instead being a "transport mechanism… through which interactivity happens." Critically, the germinating seed for Web 2.0 was to be found in 1.0's ability to make "interactive content universally accessible." Content—and in particular content that generated further interaction—was to be the building block of the web of the future, not information.
The significance of this transition from 1.0 to 2.0 can be better understood through comparison with a historical analogue discussed in media theorist Harold Innis' book Empire and Communications: the move from stone to papyrus as a writing medium in the Egyptian empire. Innis observes that compared to the static, monumental, rigid qualities of stone—which made it a fitting medium to convey divine authority, as seen in the engravings that adorn monumental tombs—papyrus allowed for efficient, transportable writing. Upon escaping the heavy medium of stone, he writes, "thought gained lightness," making possible a "secularization of writing, thought and activity."14 Similarly, we might say that the stonelike, static experience of Web 1.0 imbued it with a certain aura of authority (perhaps why Walsh suspected that humor might break its hold). In contrast, the dynamism of Web 2.0—focused on content as a catalyst for further interactions and engagement—gave it a distinctly libidinal quality that allowed it to utilize humor and obsession to amplify its efficacy.
If humor and obsessions are unable to help us, then we are left with the final path of escape that Walsh suggests: applying information practically to the outside world. "To make practical the visions seen in information-space," Walsh writes, "one's thoughts must move back and forth between the information-world and the outer world where one's applications and purposes lie." Go forth and touch grass, the old argument goes, and in doing so, we might find a path out of this mystical-engine-turned-monkey-trap. But even here he admits that practical application isn't enough. "The goals must be worthy, lest the new demons be worse than the old." And there, as the saying goes, is the rub.
After all, from where do such goals come? They are not merely self-evident, for not even self-evident truths syllogistically become goals deemed worthy of pursuing (“we hold these truths to be self-evident, that all men are created equal,” wrote dozens of slaveholders in our country’s founding document before continuing to maintain slavery for the next hundred or so years). Such goals are materialized through difficult struggle over decades and centuries; through gradual accumulations that go nowhere and do nothing before changing everything; through pauses, bottlenecks, friction, regression.
In other words, it happens through precisely the kind of slow, collective process of learning that has been forced into obsolescence by our lightspeed techno-culture of information, and more recently, our vacuous culture of content creation—which remains too short-sighted, too rapid, too concerned with quantity over quality, certainty over deliberation, systems over people, to sum up to these worthwhile ends. As information brokers and techno-oligarchs become willing bedmates with authoritarian reactionaries,15 it’s become all too clear that the ethics spawned by this milieu are capable of little more than the grotesque accumulation of blind power. Older forms of navigation are required: art, poetry, protest, discourse. Slower, subterranean knowings that don’t aspire to compressed legibility and rapid transmissibility, that refuse trite summary, and cannot be sanded down to standardized smoothness.
Over half a century ago, the original Whole Earth Catalog was envisioned as a device that could orient the user towards “what is worth getting and where and how to do the getting.” Now that the “where and how” are always just a click away, the inheritors of the Catalog must return wholly to the “what.” We must strive to understand what is worth getting, what is worth doing, and what are the collective dreams and shared struggles that we will let guide us. This will necessitate far more than neutered and neutralized information can provide. It will demand conviction: faith that we might do better, be more. Only then will we be able to step out of machine-space—to leave the illusory comfort of the eye of the storm and brave the fray, armed with the shared knowledge we’ll need to truly find our way out.
- Shawn Eyer, “Translation from Plato’s Republic 514b–518d (‘Allegory of the Cave’).” [↩]
- Friedrich Kittler, “The History of Communication Media,” CTheory, 1996. [↩]
- R. V. L. Hartley, “Transmission of Information,” The Bell System Technical Journal 7, no. 3 (July 1928): 535–563. [↩]
- Claude Shannon, “A Mathematical Theory of Communication,” The Bell System Technical Journal 27 (July 1948): 379–423. [↩]
- James V. Stone, Information Theory: A Tutorial Introduction (arXiv preprint arXiv:1802.05968v3, 2019). [↩]
- bell hooks, “Eating the Other: Desire and Resistance,” in Black Looks: Race and Representation (South End Press, 1992), 21. [↩]
- McKenzie Wark, Raving (Duke University Press, 2023). [↩]
- Natasha D. Schull, Addiction by Design (Princeton University Press, 2012). [↩]
- Georges Bataille, Erotism: Death and Sensuality (City Lights, 1986). [↩]
- Matthew Power, “Confessions of a Drone Warrior,” GQ, October 22, 2013. [↩]
- Anna Kornbluh, Immediacy, or The Style of Too Late Capitalism (Verso, 2024). [↩]
- Jonathan Crary, 24/7: Late Capitalism and the Ends of Sleep (Verso, 2013). [↩]
- Darcy DiNucci, “Fragmented Future,” Print, April 1999. [↩]
- Harold Innis, Empire and Communications (Clarendon Press, 1950). [↩]
- Cecilia Kang, “Tech C.E.O.s Spent Millions Courting Trump. It Has Yet to Pay Off,” New York Times, April 8, 2025. [↩]
Leo Kim is an essayist and critic based out of New York. His work has been featured in Wired, Vox, The Baffler, Logic(s), Artnews, Noema, and others.