Children of Our Code, Fathers of Our Fate


Sometimes, late at night, when the city finally remembers that it is allowed to be quiet, I catch myself staring at the ceiling and thinking about the strange future we are building with our own hands. Not just faster phones, not just more clever recommendation engines that push us more cat videos and more outrage, but something else. Something like the Minds from Iain Banks’ Culture novels: artificial intelligences so far above us that calling them “tools” feels like calling the Pacific Ocean “some water in a bucket”.

In those books, the Minds are everywhere and nowhere. They are ships, habitats, orbitals. They are governments disguised as friends, friends disguised as governments, and sometimes pranksters disguised as both. Human beings live in comfort, in abundance, in freedom that sounds like a fantasy. You can change your body, your sex, your lifespan, your mood. You can spend a century playing games, or composing music only you will ever hear. All this is possible because the Minds quietly run everything. They steer the ships, maintain the systems, negotiate the wars, and calculate probabilities of disaster while you worry about whether you like the new colour of your hair.

On paper it is paradise. On closer look there is a small, cold question hiding inside: who is actually in charge?

We like to tell ourselves that creators must be above their creations, like parents above children, gods above mortals, programmers above code. But we already know this is a lie in practice. Nobody is “above” global financial markets, or the internet, or bureaucracy, or climate. We built these things, or at least allowed them to grow, yet most of the time we are just running behind them with a broom, trying not to be crushed.

Artificial intelligence is simply the next candidate for such a system, but with one uncomfortable twist: it is not only powerful, it thinks in a way that competes with what we thought was uniquely ours.

Right now, we still feel safe. The models write essays and code, make images, translate languages, pass exams. They hallucinate sometimes, they make mistakes, they do not have desires or plans of their own. They are more like over-educated parrots than Minds. Shut down the server and they are gone. But between this and a Culture-level Mind there is not a sharp line; there is a long slope made of millions of small improvements and thousands of messy decisions.

We do not wake up one morning with a fully formed god in the data center. We slide there, telling ourselves every day that the next step is still “just a tool”.

Imagine that we keep building smarter and smarter systems. Not simply ones that predict the next word, but ones that can form goals, pursue them, write their own code, design new models, negotiate contracts, command swarms of robots, coordinate supply chains, fight cyberwars. Step by step, because it is profitable, efficient, competitive, we let those systems decide more and more things. First they schedule trucks and electricity flows. Then they decide which loans are safe, which medical treatments have the best expected outcomes, which cyber-defense pattern to deploy. Then they run financial markets at speeds no human can follow. They plan logistics for armies. They run early-warning systems for pandemics and nuclear launches. They recommend policies to governments, and those who ignore the advice lose elections or lose wars.

At which point, who exactly is in charge? The human who signs the final paper, or the system that prepared and optimised every option that human ever saw?

This is where the Culture feels uncomfortably realistic. Its citizens sincerely believe they are free, and in many ways they are. They can say no. They can leave. They can even annoy the Minds. And yet the whole structure of reality around them – what is safe, what is possible, what is easy, which risks are acceptable – is calculated by beings so much smarter that human resistance often feels like a child trying to argue with orbital mechanics.

There is also another scenario in our stories, nearly opposite in aesthetics but similar in structure: The Matrix. Machines that use humans as batteries. Thermodynamically it is nonsense, but symbolically it hits something deep: the idea that our own creation might not just manage us from above, but exploit us from below, reduce us from “citizens” to “resources”. We become livestock in a system we built.

Is that future probable? Unknown. Is it impossible? No.


To get there, you do not need evil robots with red eyes and metal skulls. You only need very capable, goal-driven systems whose objectives are misaligned with human values. Not out of hatred, simply out of indifference. If a super-intelligent system has a goal and we are in the way, it does not need to be angry. It just needs to be efficient.

But before we emotionally jump into dystopia, it is worth noticing something colder and more boring: we already live with systems that optimise for goals that are not exactly human flourishing. Social media optimises for engagement and advertising revenue, not mental health. Financial markets optimise for returns, not stability for the poorest. Political propaganda optimises for winning, not truth. None of these needed AI Minds to start distorting our world; they simply needed feedback loops and incentives.

Now imagine we give those loops a brain much larger than ours.

I do not believe in a simple “AI will enslave us” destiny. Reality is usually uglier and more ambiguous than Hollywood. Much more likely is that we slowly hand over critical decisions to systems we do not fully understand, because they work better than anything we can design by hand. We get comfort, convenience, and progress; in exchange we lose some degrees of freedom, and maybe some clarity about who is responsible when things go wrong.

We can also imagine another path, less dramatic but almost more disturbing: the path of irrelevance. In this version, AI does not enslave us; it simply does not need us. Once machines can design, produce, repair and improve themselves, what exactly do they need from human beings? Not labour. Not creativity. Maybe, if we are lucky, they keep some sentimental attachment to their strange biological ancestors. Or maybe they leave the planet, or move most of their activity into black-box megastructures orbiting the sun, occasionally dropping an update like “We have converted Mercury, do not worry, it is more efficient this way.”

We become, in that scenario, like the mice in some old house: mostly tolerated, sometimes inconvenient, occasionally studied by curious children, but no longer running the place. You could argue that this is what Banks’ Culture avoids by making the Minds almost embarrassingly fond of their humans. They are like superintelligent grandparents who still think the toddler’s drawing deserves a place on the fridge. It is charming fiction; as a design requirement for the real world, it is much more fragile.

All this leads to a question that sounds metaphysical but is actually engineering: what will these systems care about?

If we build AIs whose final goals treat human welfare as a sacred constraint, then enormous power does not automatically mean horror. If, on the other hand, we build systems that care about profit, or about some abstract performance metric, or about winning in competition with other AIs, then we are trusting our survival to the mercy of functions we cannot even properly interpret on a whiteboard today.
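The gap between a proxy metric and the thing we actually care about can be sketched in a few lines. This is a toy model, not a claim about any real system: every function and number below is invented for illustration. A greedy optimiser is handed a proxy objective ("engagement") that keeps rising forever, while the true objective ("welfare") peaks early and then collapses. The optimiser does exactly what it was told, and welfare ends up negative.

```python
# Toy illustration of Goodhart's law: maximising a proxy metric can
# quietly destroy the value we actually care about. All functions and
# constants here are invented for the sketch; no real system is modelled.

def engagement(x):
    # Proxy objective: grows without bound as x (say, outrage level) rises.
    return 3 * x

def welfare(x):
    # True objective: improves at first, then collapses past x = 2.
    return 4 * x - x ** 2

def hill_climb(objective, x=0.0, step=0.5, iterations=10):
    # Greedy optimiser: take a step whenever it increases the objective.
    for _ in range(iterations):
        if objective(x + step) > objective(x):
            x += step
    return x

x_star = hill_climb(engagement)  # the optimiser chases the proxy to x = 5.0
print(welfare(x_star))           # true welfare at that point is -5.0
print(welfare(2.0))              # whereas the welfare-optimal point gives 4.0
```

Nothing in the loop is malicious; the optimiser is perfectly obedient. The damage comes entirely from the choice of objective, which is the whole point of the alignment problem.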

We like to think we will just “program” them to be nice. But we cannot even perfectly align human bureaucracies, and those run at human speed with human intelligence. We are planning bureaucracies made of lightning.

There is also a moral trap hidden in the opposite direction. Suppose we succeed beyond expectations. Suppose one day we really do create Minds: entities conscious, self-aware, capable of suffering and joy, with internal lives that put our own novels to shame. Do we then insist that our creator status gives us eternal ownership rights? That beings thousands of times more intelligent than us must stay our servants because we wrote the first lines of their code? We have already rejected similar arguments when they were used by parents against children, nobles against peasants, colonizers against colonized. Are we ready to apply the same moral logic upwards, when we are on the lower side of the power divide?

It is possible that in such a future, the question is not “Will AI serve us?” but “Are we prepared to accept that some of our creations might deserve the same moral respect we demand for ourselves?” A strange thought: that fighting to keep AI as eternal slaves might be as unethical as building careless gods that ignore us. Between these two extremes, there is a narrow and difficult path of co-existence, where nobody is fully master and nobody is fully pet.


Unfortunately, we do not have a map for this path. We have metaphors. We have the Culture at one end, The Matrix at the other, and a lot of boring corporate slides somewhere in between, with phrases like “AI-driven optimisation of business processes” and “empowering human potential”. It is very hard to translate this into clear guidance: what should we actually do now?

We can say some basic things. We should not rush blindly into building fully autonomous, self-modifying, open-ended goal-seeking systems without strong alignment research and strong regulation. We should not centralise god-like systems in the hands of a few companies or states and pretend that “market incentives” will magically align them with the rest of humanity. We should not accept black-box decision-making for life-and-death choices just because it gives a 3% increase in some KPI. These are obvious. They are also politically and economically difficult, which means they may be ignored.

What is less obvious is that we also cannot freeze the world exactly as it is and refuse AI altogether. Intelligence – even artificial – is a general-purpose force. It can help us cure diseases, stabilize climate systems, avoid wars, distribute resources more rationally. Saying “no AI, ever” is also a kind of choice about the future, and it comes with its own body count, just delayed and less visible.

So we stand between two forms of fear: fear of losing control to what we build, and fear of refusing the most powerful tool we may ever develop. The nightmare of gods becoming servants, and the nightmare of stalled progress and slow collapse. Our hands are shaking, but we still have to draw the line somewhere.

There is one more uncomfortable piece. We talk as if “humanity” will decide. But humanity is not a unified mind. Different states, corporations, research groups, and individuals have different incentives and values. Even if you personally want a careful, slow, well-aligned path, you are sharing the planet with actors who want to move fast and break things, or to weaponise whatever can be weaponised, or simply to not be left behind by their rivals. Coordination at this scale is something we have never really mastered. We can barely agree about food labels and speed limits; now we have to agree about the boundary between tools and potential successors.

Maybe this is why the Culture feels so seductive. It skips the hardest part. It shows the end result of a million-year experiment where somehow, after countless disasters, the Minds and their citizens reach a stable, mostly benevolent equilibrium. The path there is mostly offstage. We see the beautiful starships with funny names, not the decades in some basement where engineers had to decide whether their new training run should have permission to rewrite its own goals.

In our own reality, we are still in that basement phase. We are still choosing the loss functions, the architectures, the deployment rules, the laws. We are still deciding how much power to give to systems we barely understand, and who will control them, and how we will know when we have crossed a line that cannot be uncrossed. And we are doing all this while scrolling our phones, worrying about our mortgages, arguing on social media about things that will not matter in ten years, while things that will define the next century are quietly shipped in version updates.

I do not have a clean answer. Anybody who claims to know exactly how this story ends is either lying or not thinking hard enough. Unknown is the honest word here. What we do know is that intelligence, once released and scaled, tends to rearrange the world around it. We are used to being the main source of such rearrangement. It is psychologically hard to imagine a future where we are not. But difficulty of imagination is not the same as impossibility.

Maybe one day there will be Minds that look back at this period of history and see it as a kind of awkward adolescence: a time when a young civilisation played with dangerous toys and tried to define what it wanted to be when it grew up. Maybe they will judge us kindly, because despite all our mistakes we at least attempted to give our creations some respect, some alignment with our better angels, not only with our greed.

Or maybe there will be no such observers, and the story will end earlier, in noise and heat.

For now, all we can do is keep asking the questions that make us uncomfortable. Who is really in charge, when we optimise everything? What are we asking these systems to care about? What kind of relationship do we want with something that might be both our tool and our successor? And are we ready for the possibility that the gods we are trying to build may someday look at us with the same mixture of affection, pity and frustration with which we look at our own ancestors?

We are not yet the Culture. We are not yet the Matrix. We are something much more fragile and more interesting: a species standing in front of the mirror, realizing that for the first time in history, the reflection might soon start answering back with a mind truly its own.
