The syrupy will of information
Every app on my phone is trying to predict me, and every day I work to improve these predictions.
Spotify Daylists prescribe my hourly moods with embarrassing intimacy. YouTube's home page curates my latest hobby conspiracies. Hinge arranges my “Most Compatible” romantic opportunities. I open a Tweet, and instead of seeing the conversation around it, Musk’s TikTok-ified “X” tosses me into a flurry of semi-related Tweets I might also like to tap on …and it works. I do tap.
How does Instagram rank my friends to build the slickest funnel into the scroll vortex? Zuck’s bots have been staring over my shoulder my whole adult life. They know how I click, swipe, and linger—how likely I am to connect with the idea of every person in my world.
The algorithm thirsts for attention, so it caters to my social agenda, coaxes my eyes, my errant crushes, my insecurities, in a sense predicting my digital life—all while piping it through the desire-matrix of ad-injected viral currents.
Everyone knows this. Brain rot’s basically a Gen Z bragging right. But we only want to take action when the outgroup runs it. If we let China own the dashboard that controls the dance trends of American teenagers, how long before they twist the kids’ minds against the Western conscience?
If you are a “creative,” a “maker,” an “influencer,” then you yearn from the other side of the glass. Why do some things hit and others not? If something gets “engagement,” what are you supposed to do if not ratchet out more things like that? Fast, hot, sugar-rich shock. Do what works. Click into the gears.
Black boxes of weighted social concerns—tested, measured, ever-improving. These are “content predictors” at scale, and the gears of this machine are our own attention. We construct massive informational models with our eyes and fingertips. As the data flows, each day’s model grows into a thicker, spikier, more intricate structure than the last. A teeming urchin reaching into space.
Neurons all the way down
The way social media reads your attention isn’t so different from the way GPTs read a paragraph—which both mirror the way your frontal cortex builds a reality. Which is… not true, exactly. But in spirit and basic structure, these are all hierarchical systems of neural networks. Each network holds layers of neurons, and each neuron is just a box with a number in it.
A neuron’s job is to determine if it “sees” something in its area of concern, and then to “share” with the next layer of neurons what it saw. When input information crosses the first layer of neurons, it’s translated into a series of numbers. The second layer of neurons then looks over the first layer and lights up according to its relationships with that first layer. Not all relations are the same. Every neuron has internal “weights” and “biases” around whether or not it “wants” to receive messages from certain neurons or pass along messages to others. The more often one neuron shares with another neuron, the “friendlier” they become. “Neurons that fire together, wire together.”
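The layer-by-layer “lighting up” described above can be sketched in a few lines. This is a hypothetical toy, not any real system’s code: each neuron is literally a box holding a number, and its weights decide how strongly it listens to the layer below.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weigh the previous layer's numbers, add a bias,
    then squash the total into a 0-to-1 'how brightly do I light up' value."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    """A layer is just a row of neurons, each reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Input information "translated into a series of numbers":
signal = [0.9, 0.1, 0.4]

# The second layer looks over the first and lights up per its relationships;
# these example weights are arbitrary stand-ins for learned values.
hidden = layer(signal, [[0.5, -1.0, 0.3], [1.2, 0.7, -0.2]], [0.0, -0.5])
output = layer(hidden, [[2.0, -1.5]], [0.1])
```

Stack enough of these rows and you get the escalating layers of abstraction the essay describes.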

“Machine learning” refers to this process of running thousands, millions, billions of example input/output pairs through the network’s layers and, after each run, calculating an error value. This error value is fed backwards through the network to strengthen or weaken the neurons’ connections so the error shrinks on the next run.
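That run-measure-adjust loop can be shown at its absolute smallest: one neuron, one weight, learning the made-up rule y = 2x from example pairs. The error from each run is fed back to nudge the weight.

```python
# Toy "machine learning": a single weight learns y = 2*x from examples.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # input/output pairs
weight = 0.0
learning_rate = 0.05

for _ in range(200):  # each pass over the examples is one "run"
    for x, target in examples:
        prediction = x * weight
        error = prediction - target
        # Feed the error backwards: nudge the weight to reduce it next run.
        weight -= learning_rate * error * x

print(round(weight, 3))  # settles near 2.0
```

The constant shaving away of error is all there is to it; scale this to billions of weights and you have the training loop behind the black boxes.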
In a sense this is how all learning functions. “Intelligence” describes the ability to predict how a set of inputs (e.g. a string of words, an array of media) can be turned into a desired output (e.g. the perfect reply, infinite dopamine scroll). And “learning” refers to a constant shaving away of error between runs of pattern-matching.
Whether we look at neurons in our brain, neurons of social media, or neurons in OpenAI’s transformer model, we see stacks of on/off switches that weave through escalating layers of abstractions.
What we call the Information Age has been the process of realizing individual humans as neurons. Not in some high-minded concept of “global consciousness” (though not NOT that), but in a more practical sense: we hold inside us a set of numbers, connected to other numbers.
In a social media model, when we like, share, or comment on a piece of content, we change the numbers in our box. We tell the model “yes” or “no,” and this message goes off to neurons in our neighboring layers, adjusting the connective tissue of the system. Filling in the cracks.
Generative AI presents the latest designs in predictive tooling (i.e. intelligence). The GPTs and DALL-E’s have consumed pupil-dilating amounts of internet data, assigned numbered weights to the data tokens’ relations to each other, and then used these vector relationships to make predictions on future data. This pre-trained guessing system is then pumped through an interface where we can angle the predictive levers towards our own designs.
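The “pre-trained guessing system” idea can be miniaturized into a bigram model: count which token follows which in a training text, then predict the likeliest continuation. Real GPTs learn dense vector weights rather than raw counts, but the predict-the-next-token objective is the same. The corpus here is an invented example.

```python
from collections import Counter, defaultdict

# Training data, broken into tokens:
corpus = "the model predicts the next token and the next token again".split()

# Assign weights (here, simple counts) to each token-follows-token relation.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(token):
    """Return the most strongly weighted continuation of `token`."""
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))   # -> "next" (seen twice, beating "model")
print(predict_next("next"))  # -> "token"
```

A chat interface is, loosely, this loop run repeatedly, with your prompt angling the predictive levers.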
The Social ←→ Generative Ouroboros
If you met someone from the early 1900s and told them about these two types of AI magic in the future: one which used its predictive powers to absorb all words/pictures/sounds/symbols and then gave it to everyone to instantly remix to their own ends vs. one that used its predictive powers to prescribe our news consumption, our cultural diet, our social opinions, and the people we date—which would disturb them more?
To be fair, we’ve oversimplified this comparison between social AI and generative AI. These are vast, complex, secretive systems doing much more than stacking a couple of neural networks. But if you start from the base definition of intelligence-as-prediction, it stands to reason that one system places the power of prediction in your own hands, whereas the other sits someplace above you, in the hands of a few unknown engineers, directing your eyes under the premise of “connection.”
It’s true, Tik-Tok and Meta really do want to connect us, as they say. Because the more neural connections a model has, the more sophisticated predictions it can make.
Our social media echo chambers are basically the model testing its sorting algorithm for higher-resolution, longer-tailed precision. Take the above example of Instagram pushing different comments sections to men vs. women for the same piece of content. Is it a two-faced engagement tactic? Or is it a way for the machine to learn nuance around how content is processed by different neural layers?
Because any Gen AI model requires a cost-prohibitive amount of training data, the only people who can build them are scientists, snoops, thieves, and federal-grade plumbers. You need a massive flow of input data to get quality predictive output. And in this sense, generative AI following our social aggregation platforms is like a snake eating its own tail.
Whether it’s an Ouroboros, a portal, or a trick mirror—the best metaphors for social media point to something Janus-faced. And now, just as all content formats have converged into the same infinite multi-media-scroll, maybe all gen-AI designs will converge into multi-media-chatbots.
The technocapitalist machine wants to commoditize and pressurize all information into a liquid stream: our attention aerosolizes it into data at the front (scroll & tap), and our questions re-condense it into media at the back (prompt & edit). Curious neurons, we share the beauty and pain we find along the way. And the model thickens.
Tending the Garden
Even if all these AI interfaces use the same basic architecture, we’ll pipe in different streams. The takeover of Twitter was welcome if only for the splintering of platforms that ensued. Whether Bluesky, Mastodon, Truth, Farcaster, or Substack have any legs, each at least has its own ethos, aesthetics, communities, and algorithmic ~vibes~. What’s the difference between a network that hides likes vs. shows them off? Idk. Let’s try both models and find out.
Diversity is healthy, and maybe more so with Gen AI.
For the most basic, edgeless cutting-edge, stick with OpenAI.
For something baked into familiar tools with the patina of ethics, use Adobe.
For something more rogue, experimental, dark and unhinged, play with Stable Diffusion, Midjourney, Mistral, and the wildgrass of open source projects.
Meanwhile, Apple, Microsoft, and Google lumber along, Godzilla-paced, towards whatever end-to-end solution will keep users’ faces bathed in the glow of their attention apparatus.
We need to adapt to recognize the biases of models we inhabit. Whether it’s social AI or generative AI, you are, with prejudice, being worked. It’s only a matter of where you want to work. Where do you want to stake yourself in the hierarchy of attention?
Instead of railing against “AI” as a broad concept, we should be talking about how to create more AI paradigms, both social and generative, designed by a more diverse set of players with competing goals. The FAANG companies—already rich with infrastructure to vacuum up nutritious data—will spawn gen-AI to reinforce their own walled gardens. Simultaneously, the political regulatory agenda will be co-written by these big players, just like in every media revolution of the 20th century.
People talk of AI like it’s some monolithic boogeyman headed towards us. But it’s more like a multi-species ecosystem that’s already sprouted all around us. We are already not the main characters. In the biosphere of information, human beings are not the fauna or even the flora. We’re more like the soil itself, fertile ground for information to grow from, reproduce in, and fight over. To “hit pause” on this growth might be wise, but which part do you hit pause on? And who has the authority to “hit pause”?
^try to watch this video without getting pulled in and leaving please
It seems every individual must take on, for themselves, the role of information gardener. What you choose to like, share, create, follow, and nurture determines what grows in these ecosystems—your digital hygiene is not just a benefit to you but to everyone else in your networked garden. Every “Like” is a vote on the technocapitalist ballot: “More of this.”
So it goes with generative AI and the internet in general. It’s a fiercely individual enterprise that makes the savvy players savvier and the fools more foolish.
You might currently view AI prompters as plug-and-play opportunists. But as we grow wise to it, gen-AI won’t destroy art, nor will it make everyone an artist. It will force creatives to work smarter in order to discover unique and fresh perspectives within our media systems. Mix, mutate, edit, curate. In a future shaped by social and generative AI, our “content” will evolve into something more dynamic, interactive, and alive.
Artificial structures rhyme with nature’s own. And just like our biological world, the sign of health in an informational ecosystem is diversity and competition. Something antifragile. Something with tensegrity. We don’t need to “Stop AI.” It’s here. Growing within us. We need new ways to guide it.