Historically, I have been a backend developer, with deep roots in algorithms, traditional computer science, and system architecture. One of the core lessons ingrained in me was that machines are predictable—nothing is magic. Everything stems from a configuration or a piece of code somewhere.
Statistics, by contrast, never quite sparked the same interest in me. I understood it well enough, and I’ve always had a deep fascination for quantum mechanics, where probability and uncertainty are baked into the nature of things. But when it came to computer science—particularly machine learning and neural networks—it always felt like statistics had been handed a job it wasn’t really qualified for. The early wave of ML models struck me as clever but fragile, impressive under lab conditions but hard to trust in the wild. And when AI burst into the mainstream with declarations that it would be “as revolutionary as the internet,” I’ll admit, my first reaction was to roll my eyes. I had seen this story before, and I wasn’t convinced it would end any differently. It felt like too much, too soon.
Still, I’m an engineer, and at the heart of that identity is a basic instinct: try it yourself. So when ChatGPT came out and people started talking, I gave it a spin. Here, I must say, the fact that it was just available was a huge deal. It wasn’t about the intelligence of the model—it was the simplicity of the experience. No setup, no configuration, no training pipeline to mess with, no frills, no content to skip through, no jargon I needed to learn. Just a text box on a familiar-looking website. That, whether by design or not, was genius. You could try it without needing to invest in it, and that made all the difference in getting people like me to show up.
Although I had access to ChatGPT within a week of its launch, I struggled at first to understand its power. I treated it like a search engine. Asked short, direct questions. Got short, mostly acceptable answers. The results were okay, but nothing about them set it apart from a search engine—certainly not enough to make me ditch mine and switch over. Sure, it could generate poems, come up with fun stories, and chat like a realistic person, but to me, those were nothing more than party tricks. It was fun, but nothing I would change my life around for.
Over time, I used it less and less. Occasionally, I’d fire it up to build a quick schedule for my toddler, or to rough out an itinerary for a trip, but I’d usually run into the same problems: hallucinations, missing context, the sense that I had to do too much hand-holding to get anything useful. Most times, I would just think it was easier to use my traditional tools and do it myself. And eventually, like a lot of tools that promise more than they deliver, it faded into the background.
Now, full credit where it’s due—my wife was always a believer. She has a background in machine learning and has always had more patience for this space. She’d occasionally nudge me to try some new AI tool or check out a recent paper, and I’d nod politely and go back to whatever stubborn thing I was debugging. Somehow, the whole thing didn’t appeal to me, and I didn’t find value. I was constantly looking for a killer moment or application, but nothing convinced me. Something was always off. Something was always missing.
Sometime in late 2023, I started hearing enough about “Prompt Engineering” that I could not ignore it anymore. There were articles about it on Hacker News. Some creators I follow were starting to talk about it, and the concept of “master prompts” had gained steam. I read through those articles, watched videos, and listened to podcasts. Some ancient rusty gears in my mind started to creak and turn. It reminded me, oddly enough, of Hitchhiker’s Guide to the Galaxy (one of my favorite books, by the way). In that story, the problem wasn’t that the computer didn’t know the answer (42, in the case of the book); it was that nobody really knew what the question was. Maybe it wasn’t that ChatGPT lacked the answers; maybe I just didn’t have the right questions.
I logged back into ChatGPT. By this time, the app had received a few updates. There was now a “Personalization” section where I could insert a master prompt. So off I went and started typing. I gave it a brain dump of me as a person—what I like and dislike, what appeals to me and what doesn’t, and what kind of responses I prefer.
Then I asked it a question. Not a single short question this time, though—we know how that went. A long thought dump, followed by a question. What was the question? It was about the next video game I should play. I told it the games I had enjoyed, what was important to me, what console I had, how much I was willing to pay, how much time I could spend playing per week, and all the other bits and bobs of information I could find in my head. And the question at the end was simply: “What video game should I play?”
And voilà. There it was. The search engine killer in all its glory.
It didn’t just produce links to popular titles—even my self-hosted search engine could do that—it explained how it got there. It summarized the common thread it found among the games I liked. It acknowledged that a middle-aged adult with a toddler has very specific needs when it comes to video games. And it gave me a great suggestion, with a summary of why it thought the game would be a great fit for me. It didn’t feel like an answer from a search engine. It felt like someone had actually listened.
Since then, I’ve found myself turning to AI more often—sometimes for code scaffolding, sometimes for writing, sometimes just to clarify my own thinking. I’ve explored other LLMs, played with AI-powered IDEs, and used tools to review, rewrite, and summarize documents. I’ve stopped looking for perfection and started looking for leverage. And more often than not, it’s there.
The shift didn’t happen all at once. There was no lightbulb moment. Just a slow accumulation of small wins until the scale tipped. These days, whenever I take on a new task—whether it’s technical, creative, or administrative—the first question I ask is, “Can AI help with this?” Sometimes it can’t. But often, it can. And that’s enough. I can see where this is going, and I’m intrigued by where it’s headed. I am aboard the hype train.
For those who are curious, the suggested game was Hogwarts Legacy. Yes, I played it months after it was released, but it was the first time in a long while that I completed a video game. And that, for me, felt like its own little kind of magic.