To AI or Not to AI?
- Paul Forest
- May 17
A Personal Journey

Artificial intelligence (AI) has become both promise and provocation. For some, it marks the end of human relevance—a future shaped by machines that move faster than we can follow. For others, it opens the door to breakthroughs we’ve spent centuries chasing. Somewhere in between is the quiet question few are asking: what kind of intelligence are we actually creating—and what kind of world is it shaping in return? I’ve spent over twenty years inside that question, building and applying AI within complex systems at the highest levels of society. What I’ve seen is both extraordinary and unsettling.
Before I entered the world of technology, my heart was in nature. I spent years living simply, immersed in the wild, seeking deep connection. That longing had been with me since childhood—shaped by countless hours outdoors and a growing passion for the environment that led me to activism during university. I ran a radio show, led an environment group, and tried to fight for what I loved. But over time, the scale of destruction—and the complicity of the systems meant to protect us—became overwhelming. I no longer believed change would come from within those structures. What I wanted was a life of self-sufficiency, community, and care. I started shaping my life around growing my own food, making what I needed, and preparing for a world I felt was heading toward collapse.
But life has a way of shifting you. One step, then another, until you look up and find yourself somewhere you never expected to be.
For me, that place was the high edge of the tech world—governments, corporations, even military systems. Through a strange, often uncomfortable series of events, I was pulled into that world, and part of me always resisted. I didn’t want to be there. I was afraid I wouldn’t be able to do it, or worse, that I would succeed and lose myself. I felt disappointed in what I saw—structures built to control, to extract, to dominate. And yet, I stayed. Because somewhere inside me, I believed that technology could do something better. That maybe, just maybe, it could help us reconnect with what really matters.
Somewhere along the way, I developed a deep fascination with how systems work—not just human systems, but the ones that shaped us long before we built anything of our own. I began to see that the very structures we try to engineer—our organizations, economies, institutions—are all attempts to replicate something we first observed in nature. Even when we force rigid hierarchies or impose control, the base logic behind our systems is borrowed from the natural world.

That realization drew me into the study of complex adaptive systems—the intelligence that governs ecosystems, weaves weather patterns, and orchestrates the trillions of cells that form a human body. What struck me most was the decentralized nature of this intelligence. In ecosystems, coherence doesn’t emerge from a single point of command—it arises through the relationships between self-organizing parts. These parts adapt, evolve, and respond to change in ways that preserve both individual autonomy and collective stability. It’s a resonance, not a rulebook. A logic where the health of each part supports the health of the whole, and vice versa.

In this, I saw a model of intelligence we rarely apply in the human world—a system that lifts potential through relationship, where care flows in both directions, between the individual and the whole. That, to me, is coherence. And it became the lens through which I began to see everything.
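To make that concrete, here is a minimal sketch in Python. It is a toy consensus model with made-up numbers, not a model of any real ecosystem: each agent sits on a ring and can only see its two immediate neighbors, yet the whole system settles into a shared state with no central point of command.

```python
import random

# Toy illustration of decentralized coherence: agents on a ring,
# each adjusting only toward its two immediate neighbors.
# (Hypothetical values throughout; a standard local-averaging model.)
N = 20
states = [random.random() for _ in range(N)]  # every agent starts with its own view

def step(states, rate=0.5):
    """One round: each agent nudges its state toward its local neighborhood average."""
    new = []
    for i, s in enumerate(states):
        local_avg = (states[i - 1] + states[(i + 1) % len(states)]) / 2
        new.append(s + rate * (local_avg - s))  # purely local adjustment, no global view
    return new

for _ in range(500):
    states = step(states)

# The spread shrinks toward zero: global coherence emerged from local relationships.
print(f"spread after 500 steps: {max(states) - min(states):.6f}")
```

No agent is in charge, and none ever sees the whole. The coherence lives in the relationships.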
That belief in coherence—life lived with integrity and alignment—never left me, even as I found myself drawn into the upper edges of the tech world. Over two decades, I pushed the boundaries of what was possible in digital strategy, particularly within Google’s search landscape. I became one of the top performers globally in my field, but it was never about the money. I was surrounded by wealth, by people chasing numbers and scale—but I was chasing understanding. What captivated me were the patterns beneath the surface, the subtle signals hidden in the noise.
While the world was still scrambling to understand what Google was and how it worked, I was already deeply embedded inside it. I had access to the patterns, the data, the underlying logic that shaped its output. What I saw was both brilliant and deeply flawed. At its core, Google was attempting something extraordinary: to organize the entire informational landscape of humanity into a coherent structure—to determine, at scale, what mattered most.

My talent at the time was shaping that flow. With relative ease, I could influence what appeared at the top of search results. That level of access exposed a critical flaw in the model: for all its complexity, Google’s logic was still mechanical. It treated information as static, stripped of its origins, its intentions, and its depth.

And yet, the hunger it responded to was very real. Humanity, as a system, was missing something—an internal function for understanding itself. We lacked a shared clarity about what we knew, who we could trust, and how it all connected. From my position inside the machine, I could see the cracks in our epistemic foundation. The real issue wasn’t just data—it was relationship. Human systems were built on social trust, but trust doesn’t scale. We were trying to govern global complexity using relational tools that were never meant to extend that far. And so we strained—searching for meaning, expertise, relevance—within a structure that couldn’t carry the weight.

That realization struck me deeply. It was coherence we were missing. Not more content, but better relationships between signal and source, between truth and trust. That brokenness lit a fire in me. I wanted to find a better way to structure human understanding—one that could carry the intelligence needed to meet the complexity of the moment.
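When I say Google’s logic was mechanical, I mean something you can see even in the textbook PageRank formulation published in its early years. The sketch below is that toy version, run over a made-up four-page web; it is nothing like Google’s production systems, but it shows the shape of the logic: authority reduces to link structure, with no notion of who wrote a page, why, or whether they can be trusted.

```python
# Toy version of the published PageRank idea: the textbook algorithm,
# not Google's production system, which is far more elaborate.
links = {            # hypothetical miniature web: page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute authority along links until ranks stabilize."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)  # p passes a share of its authority to q
        rank = new
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

Everything the model knows lives in that links dictionary. Origin, intention, and depth simply have nowhere to exist in the computation.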
When I sold my company 16 years ago, I wasn’t interested in building something bigger. I was looking for something cleaner. I wanted simplicity. I no longer wanted to manage people—I wanted to replace the need for management altogether. So I began building AI systems that could replicate the kinds of thinking I needed around me. What emerged was more than functional. It gave me extraordinary clarity. My capacity to navigate complex systems expanded. I could see structures others missed. And that opened doors.
Eventually, I found myself part of a high-level consortium during the early stages of the COVID crisis in Australia. We were developing one of the country’s first proactive early-warning systems. I played a key role—personally presenting to the Prime Minister and the CEO of Google Australia. It was surreal to stand at that intersection of crisis and influence. But what left the deepest impression wasn’t the access or the spotlight. It was how fragile everything felt behind the scenes. Faced with real urgency, our systems faltered. Intelligence gave way to politics. Response was shaped more by narrative than by need.
That experience didn’t surprise me—it clarified something I had sensed for a long time. The problem isn’t a lack of good people. It’s the rigidity of the structures they’re operating within. Systems built on outdated logics—centralized power, reactive governance, performance metrics that ignore presence. These systems cannot hold complexity. They fracture under pressure. And when AI is dropped into these environments, it doesn’t correct the flaws—it magnifies them.
Even so, I’ve never lost faith in the possibility of something better. Throughout those years, my motivation wasn’t just professional curiosity—it was deeply personal. The earliest seeds of Thought Wave were planted with my family and community in mind. I wanted to build systems that could help us weather what I felt was coming. Systems designed not to control people, but to support them. To strengthen relationships. To distribute insight. To make resilience something we could build together.
And so the work continued. As the systems I built evolved, I realized I wasn’t just developing tools—I was externalizing a way of thinking. Code became a mirror for cognition. And in moments of alignment, something powerful happened: human insight and machine precision came together in service of something meaningful. I saw what it looked like when intelligence—real, grounded intelligence—moved with purpose.
That’s still what I’m working toward. Because at its best, AI doesn’t replace us—it extends us. But only if we place it inside systems that reflect what we actually care about. Only if we design for coherence, not control.
So that became my work: system design. Looking not just at outcomes, but at flows—how thought moves, how attention is directed, how value is recognized and distributed. I began to see intelligence differently. Not as something held by individuals, but as something that emerges through patterns—between people, across contexts, in the space between sensing and action.
This work led me to focus on developing new protocols—ways of patterning human intelligence through AI support. Not to replace thinking, but to scaffold it. To give shape to what people feel, know, and mean, in ways that can move clearly through a collective. That’s where Thought Wave began. A way to make intelligence relational. To reconnect decision-making with the richness of human experience. To turn the mess of modern systems into something more aligned, more adaptive, more alive.
It hasn’t been easy. I don’t have institutional backing. I don’t have a big fund behind me. I wake up every day trying to keep this work moving with limited resources and a lot of faith. Some days feel like a quiet rebellion—a small group of us, working like the Rebel Alliance, against something vast, powerful, and misaligned. But we keep going. Because we believe that intelligence—real intelligence—is still possible. And that we don’t have to choose between nature and technology. We can build something new that honors both.
AI doesn’t scare me because it’s powerful. It scares me because of how we’re using it. It scares me because I know how deeply misaligned our current systems are—and how quickly that misalignment can scale when amplified by machine precision. But it also gives me hope. Because if we embed AI into structures that value care, resonance, and relevance, it becomes something else entirely. A mirror. A partner. A way to think together at a level we’ve never reached before.
“To AI or not to AI” misses the point. We are already building with it. The real question is: what are we building into it? And what kind of intelligence are we inviting into the world?
I’ve come a long way from those early years spent seeking connection in the natural world. But the thread hasn’t broken. I’m still pursuing the same truth I was reaching for back then: coherence. A way of living—and thinking—that aligns with life rather than pulling us away from it. Technology wasn’t the path I expected to take, but it gave me something I didn’t know I needed: the tools to understand systems, to see where they break, and to imagine how they might be made whole. AI, for all its complexity, might just be one of the tools that leads us back to a more grounded intelligence—not through force, but through thoughtful design.
That’s why I continue. Even when resources are thin. Even when the world feels too far gone. I still believe in the possibility of a humanity that remembers what it truly values. A humanity that moves with wisdom, not just speed. That builds systems that reflect care, meaning, and presence. This work is my way of protecting what I love. Of offering something back. And of helping shape a future where intelligence serves life, rather than the other way around.