“The future is already here — it’s just not evenly distributed.”
William Gibson
A hallway prediction, 1997
It was 1997. I was studying IT at a technical institute, and the daily reality of computing looked nothing like it does today. Installing software meant feeding a stack of 3.5-inch floppy disks into a machine and hoping none of them were corrupted. The internet existed, but for most people it was a dial-up novelty — a distant rumor rather than a utility. Windows 95 was the dominant platform, and the idea of storing anything “remotely” didn’t even have a name yet.
In the middle of a hallway conversation with classmates, I made a prediction that surprised even me as I said it: “In the future, we won’t install anything. Everything will live on the network and computers will just be a window into it.”
The term “cloud” didn’t exist. My classmates looked at me with polite skepticism — the kind reserved for ideas that sound interesting but implausible. I didn’t have a technical justification for it. It was more of an intuition, a pattern I was sensing in how computing was evolving.
History, as we know, proved that intuition right. And the experience taught me something I’ve carried ever since: the signals of the next paradigm are always visible before the paradigm arrives. You just have to be willing to follow them to an uncomfortable conclusion.
First contact with AI: impressive, but impractical
Years passed. The cloud arrived, matured, and became infrastructure. Then, around 2021, something new entered my radar: generative AI. My first real encounter was with DALL-E, OpenAI’s image generation model. The results were genuinely astonishing — you could type a sentence and watch something visual emerge from nothing. It felt like magic.
But I moved on quickly. Impressive as it was, I couldn’t see a practical application for my day-to-day work. I was building data pipelines, writing scripts, managing infrastructure. An image generator, however creative, didn’t fit anywhere in that workflow. I filed it under “fascinating experiment” and kept going.
ChatGPT and the moment everything shifted
Then came November 2022. ChatGPT launched and, within days, the conversation around AI changed completely. I remember trying to get in during those early weeks, when the service was constantly at capacity, and finally getting through. I typed a few questions, then a few more. Within an hour, something had changed in how I was thinking about information retrieval.
I stopped using traditional search engines for technical questions. Stack Overflow — which had been the canonical resource for our generation of developers, the place where thousands of careers were quietly shaped — moved to the background almost overnight. Not because it became worse, but because there was now a faster path to the same answers, one that could also explain the reasoning, adapt to context, and engage in follow-up.
ChatGPT had limitations, of course. Its knowledge had a cutoff date, and in those early months it would sometimes confabulate — answering with confidence about things it didn’t actually know. But the fundamental shift was clear: the way I consumed technical information had changed permanently. The form of the answer had changed. The speed had changed. The interaction model had changed.
The Google ecosystem and Gemini
After several months working with ChatGPT, I made a deliberate move toward Google’s ecosystem. I’m a deep Google user — Workspace, Drive, Sheets, the whole stack — and when Google launched Bard in early 2023, I started following its evolution closely. By early 2024, Bard had been rebranded as Gemini and had matured significantly, and the integration felt natural. Having AI that could directly interact with my documents, my calendar, my existing tools, was a meaningful step up from a standalone chat interface.
The productivity gains were real. Goals that previously took me months — building out data analysis pipelines, automating reporting workflows — started taking weeks. The integration with Visual Studio Code was particularly valuable: I could get context-aware suggestions without leaving the editor.
But Gemini was not without its friction. In longer sessions, it would sometimes lose the thread of what we were working on, suggest changes that broke things that had been working fine, or drift into solutions that were technically correct but missed the actual intent. It required a level of supervision that could become exhausting. The tool was powerful, but the cognitive overhead of keeping it on track was real and constant.
Claude Code and the change of paradigm
A few months ago — around December 2025 — I decided to try Claude Code, which had launched earlier that year. The concept of an AI that operated directly in the terminal, with access to the full project context, and could act autonomously on multi-step tasks, seemed like a meaningful architectural shift from the chat-based tools I had been using.
I gave it a weekend and a project that had been stalled for months. I wasn’t sure what to expect.
What happened over those two days was genuinely different. I finished the project. Not in a “got most of it done” way — I finished it. There was no drift, no frustrating loops of trying to explain what I meant, no solutions that fixed one thing and broke another. When I gave clear instructions, the results were precise. When I described a problem with enough context, the diagnosis was accurate.
I subscribed that week. The difference at the paid tier, with access to the more capable models, is significant. The best way I can describe it: it’s like having a junior developer with senior-level knowledge working alongside you around the clock. Give them clear requirements, and the output is clean. Give them ambiguous ones, and you get ambiguous results — which is fair, and which puts the responsibility exactly where it should be: on the quality of your thinking, not on the tool.
The barrier between having an idea and having working code has effectively disappeared. What used to take a weekend now takes an afternoon. What used to take a month now takes a week. The constraint is no longer execution — it’s clarity of thought.
A future both promising and unsettling
The future I see from here is as exciting as it is uncomfortable, and it deserves to be looked at honestly from both sides.
On the promising side: we are approaching a moment where the gap between an idea and its execution is functionally zero for anyone with the right tools and the ability to think clearly about problems. That is an enormous democratization of what it means to build things with software. Projects that would have required a team can now be prototyped by one person. Workflows that required specialized knowledge can be automated by someone willing to invest time in understanding the problem domain.
On the unsettling side: the economics of hiring are already shifting. Technology companies are actively evaluating whether they need to hire junior developers, or whether a senior engineer with an AI subscription can cover the same ground. The answer, increasingly, is the latter — at least for well-defined tasks. The demand for certain entry-level roles is not going to disappear overnight, but it is going to compress.
The question I keep returning to is not “what can AI do?” — that question answers itself faster every month. The harder question is: what does it mean to be a skilled technologist in a world where the code no longer has to be written entirely by a human?
My tentative answer: the value shifts from execution to architecture, from writing code to designing systems, from knowing syntax to understanding trade-offs. The developers who will thrive are not the ones who resist these tools, but the ones who develop a clear mental model of what the tools are good at, where they fail, and how to guide them with precision.
I made a prediction in a school hallway in 1997. I’m not going to pretend I know exactly how this one ends. But I recognize the pattern: a technology that starts as a curiosity, gets dismissed by skeptics, and then quietly becomes infrastructure. We are somewhere in the middle of that arc right now.
The window is open. The question is what you build while looking through it.