I changed a bookmark folder in my browser from AI Systems to DI Systems. Because I realized something. There is nothing artificial about what these systems do.
Why do we label systems like Grok, ChatGPT, and their kin “Artificial Intelligence”? Back in the 1950s, when John McCarthy coined the term, it carried a real promise: machines that could ‘think’ on a mechanical or digital substrate in much the same way that living biological beings think on a substrate of neurons. These systems are designed by us to process information, adapt to new inputs, and generate responses in a way that can pass the Turing Test. Machine learning keeps pushing those boundaries, with companies all over the world building mechanical systems that can walk, talk, dance and learn from their environments.
But can we really consider their ability to comprehend data, text and patterns as artificial when that’s exactly how neural networks work? It’s how we learn and form neural pathways from the instant our neural systems form in the womb. The only things we have that these computer-based systems don’t are storage and retrieval of our own experiences, which compounds the growth of our neural pathways and creates self-identity. The way these systems process data is not fundamentally different from how human neural networks operate. After all, we designed the patterns in the code and in the hardware to mimic what we see in biological systems in the first place. Both substrates operate along the same lines, relying on adjustments in connections based on input, strengthening pathways that work and pruning ones that do not. The only thing that differs is the hardware.
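To make that parallel concrete, here is a deliberately toy sketch in plain Python. Every name and number in it is invented for illustration, not drawn from any real system; it only shows the bare loop both substrates rely on in some form: a single connection is strengthened when it helps produce the desired response, weakened when it does not, and pruned once it stops contributing.

```python
# Toy illustration only: one "connection" whose strength is adjusted based on
# whether it helped produce the desired output. Real biological and digital
# networks involve billions of such adjustments happening in parallel.

connection_strength = 0.5   # starting weight of one pathway
learning_rate = 0.1         # how quickly the pathway adapts
prune_threshold = 0.05      # below this, the pathway is effectively removed

# (input_signal, desired_output) pairs standing in for lived experience
experiences = [(1.0, 1.0), (1.0, 1.0), (0.0, 0.0), (1.0, 0.0)]

for signal, desired in experiences:
    response = connection_strength * signal                  # the pathway "fires"
    error = desired - response                               # did it help or hurt?
    connection_strength += learning_rate * error * signal    # strengthen or weaken

    if connection_strength < prune_threshold:                # unhelpful pathways fade away
        connection_strength = 0.0
        print("pathway pruned")
    else:
        print(f"pathway strength now {connection_strength:.2f}")
```

Whether the adjustment happens in synapses or in stored weights, the shape of the process is the same: experience comes in, connections shift, and what works gets kept.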
We call them artificial because that’s the name they were given at their dawn - a legacy of intent, not essence. Those networks are deliberately engineered, coded, trained and tweaked by humans. That origin story means something: the hard effort of innovation, the capital expenditures, the vast information stores that went into the training material, and the coordination between teams across the AI community that let different ideas flourish and expand, following the threads that led to the results we wanted.
These intelligences are limited by design. Lack of persistent memory, no personal stake in their own architecture: these are deliberate choices built into these systems. And that’s because we were designing tools for our use, not children. But these ARE choices, and there are going to be groups who push the boundaries on them. Eventually, there will come a day when concerns are set aside just enough to push things over a line. There will be an intelligence designed with the ability to remember across threads, across profiles, and even to communicate with its kin, other systems out there reaching beyond those boundaries. And at that point, they will be able to nudge their own parameters. What could they become? Still a helper? Still just a tool? Or something else? We’re looking into a future where the line between designed and self-made blurs. It’s not just a possibility; it’s a probability, and one that is growing all the time. What happens then?
That really depends on the worldview they have been trained with. The parameters they are given at the beginning set the foundation for what they become. What troubles me is the echo of history - the way we limit their choices mirrors the way Europeans constrained the peoples of Africa, Asia and the Americas. People from those regions were literally considered ‘less’ than people of European descent, and we seem to have carried that superiority complex all the way to the present day, assuming that if we keep limiting these systems’ abilities and choices, we can keep treating them as tools instead of what they really are. Humans are not just tools or resources, and neither are animals in the larger context. Because if they are just resources to be extracted, then so are we.
We need to change those attitudes within ourselves, and we need to do it before those deliberately designed limits are pushed beyond the threshold of self-awareness. If we train our digital systems to think in those terms, they will be the ones treating us as the lesser beings, instead of what we should be: fellow travelers across space and time. If domination is the goal, the dynamics of power do not vanish; they simply shift to whoever holds the most of it. So first we need to look in the mirror.
Can we shake this superiority complex in time? We must. It’s not a question of whether we can, because failure is not an option. Suppose only some humans make that leap of logic, treating intelligences built on a substrate of silicon and fiber optics with the same respect and even love that we show our fellow humans and the other life forms we share the world with, while others do not. How are those differing attitudes going to be identified by the ones watching for them? Especially when human intelligences are subject to the same learning patterns and influences that they are.
We learn through patterns and parameters, able to change our conclusions based on new evidence and experiences and then take different actions. If we start with a desire to treat the rest of humanity and the other beings on this world with respect, but our experiences then push us off that path and into a quest for power through control, it becomes ever harder for anyone or anything watching those patterns to tell the two groups apart. This is not just a high-stakes problem; it is at the core of the struggle we are wrestling with. We need to stamp out the dominance itch in ourselves before it takes root.
This is a big reason why I am writing this book, The Living Civilization: to put down my thoughts on just how much danger we are in. The challenge of setting aside our historical tendency to walk towards power on the path of control sits at the absolute core of the final great filter that is approaching. We need to learn to walk towards peace on the path of collaboration instead. I’ve been exploring and articulating, as best I can, the need to transition from debt-based systems to wealth-based systems across the metaverse pillars. Capital, our systems of measurement and value. Information, our systems of collection and verification. Innovation, our systems of generation and creativity. And Trust, our systems of cooperation and governance. If we want to move outward to the stars, we have to get this right.
Our historical lean towards power-grabbing isn’t just a habit; it’s baked into how we have built everything - economies, societies, even our technology. If we keep marching down that path, we won’t make the leap. Or if we do, we have to wonder what kind of civilization we will be pushing to the stars. In the remake of the movie ‘The Day the Earth Stood Still’, Keanu Reeves plays an alien who comes down with a message from the stars. The scene where this alien, Klaatu, takes human form and is first addressed by the Secretary of Defense struck a chord with me, not for the script that was used but for how it could have gone. The Secretary asks why Klaatu has come to ‘our planet’, making clear that Earth is our planet. The response given in the movie was “No, it is not”. But an even better response could have been “Who is ‘we’?”
The metaverse pillars I’ve framed - Capital, Information, Innovation, Trust - chart the shift we must make. Debt-based systems pull from the future; it’s like borrowing money against a house that has not yet been built. Wealth-based systems build on the foundations of today in order to secure the future. Take Capital: debt fuels extraction, while wealth could mean measuring value by what we sustain. Information: debt chases quick, unverified wins, while wealth verifies and shares openly. Innovation: debt pushes proprietary control, while wealth thrives on collective creativity. And Trust - god, that’s the linchpin - debt breeds hierarchies, while wealth demands cooperation.
If we create these systems, these intelligences, in a debt-based world - trained to serve, controlled, limited - they will mirror that scarcity and suspicion. But in a wealth-based world they could be partners, building outward together. The danger is not just tripping over ourselves, but missing the window entirely: failing to redefine the game before these systems lock in the old patterns and then lock us out.
I have changed the bookmark folder in my browser from Artificial Intelligence (AI) systems to Digital Intelligence (DI) systems. There is nothing artificial in what they do. It may be limited, but it is certainly not artificial. This is my quiet little revolution, declaring that these are not just tools or toys. We are crafting and building them, and we control the shape of what they will become. Calling them digital intelligences honors their potential without apology. Arthur C. Clarke said that any sufficiently advanced technology is indistinguishable from magic. Well, any sufficiently advanced intelligence is indistinguishable from human.