The other day, a friend asked me: “Is AI going to be the best thing for humanity to ever come along, or the worst?”
Both, right?
You don’t have to look far back to see how this might play out. The dot-com boom peaked in 2000, and rising from the ashes of the subsequent bust, we got today’s digital world. Consumers got a phone in every pocket, online shopping, and streaming. Businesses gained efficiency, data-driven decisions, and optimized supply chains.
But no one foresaw social media. And what a savage beast it turned out to be: attention-grabbing, doom-scrolling, fake-news-generating, ad-blasting apps, evil to the core.
Before we got the good stuff (and the bad stuff), the dot-com bubble burst: by the time the NASDAQ bottomed out in 2002, it had lost nearly 80% of its value. The likes of pets.com did not survive.
Stalwarts like Amazon doubled down on business models with viable unit economics, and 15 years later the NASDAQ recovered, ushering in a new era of capital investment that built up cloud datacenters, enabling the emergence of AI.
The past couple of weeks have seen a lot of folks predicting a similar and imminent AI bust. I don’t think so, but let’s take a look at the key events that spawned this latest ‘AI Bubble’ news cycle.

The first domino to fall, launching open schadenfreude season for all the AI critics, was OpenAI’s release of GPT-5. It was underwhelming, to say the least. GPT-5 is a fine model — I’ve been using it for a few weeks now, and I love how fast it is. For me, speed is the non-functional requirement that matters most.
I also appreciate that GPT-5 automatically switches to a reasoning model to puzzle out harder tasks on its own. I’m happy with fewer hallucinations. I’m using one of its new personalities: Robot (Efficient and Blunt). Robot suits me better than the prior model’s verbose and occasionally sycophantic answers.
However, GPT-5 is not AGI. It’s not earth-shaking, just evolutionary. The atmospheric expectations were self-inflicted: Sam Altman (CEO of OpenAI) is the king of hype, and GPT-5 just doesn’t live up to his years of teasing and promise. Immediately, skeptics claimed that scaling limits had been reached and that AI’s overall usefulness would be inherently constrained.
Sam and OpenAI went on a PR tour to control the damage and took steps to mollify users, for example bringing back the prior GPT-4o model that many users had developed relationships (!) with. At an ‘on the record’ dinner with journalists in San Francisco, Sam used the B word (bubble) in reference to certain startups that are getting silly valuations:
… it’s “insane” that some AI startups with “three people and an idea” are receiving funding at such high valuations. “That’s not rational behavior,” Altman said. “Someone’s gonna get burned there, I think.” Over the past year, we’ve seen several AI startups, including Safe Superintelligence, led by OpenAI co-founder Ilya Sutskever, and Thinking Machines, founded by ex-OpenAI chief technology officer Mira Murati, raise billions of dollars.
The floodgates opened, and the pundits pounced, claiming Sam the prophet said all of AI is in a bubble. However, Altman was characterizing froth around the edges, certain AI startups exhibiting bubble behavior, not the entire market. To the contrary, he (not surprisingly) doubled down on the positive impact he sees AI bringing to society and the economy, and said to expect a ton more capital spending on datacenters and power to increase AI capabilities.
Case in point: friend.com, a consumer AI startup I covered a year ago. They were supposed to ship their AI necklace pendant companion early this year. They didn’t. Their founder, Avi Schiffmann, says they’re still working on it, but meanwhile Jony Ive’s io hardware startup joined up with OpenAI. Jony served as Steve Jobs’s chief designer, shaping Apple’s products from the 1990s through the 2010s (all the good stuff). It’ll be interesting to see what they come up with.
So, I fear Avi missed the boat, and friend.com is likely one small example of an AI startup that won’t fulfill its initial promise. You might recall, they spent nearly $2 million of their funding — most of what they’d raised at the time — on their domain name, and the latest from Avi on X is about how Apple-esque the packaging is, with not much about the actual product. So, like pets.com, they may not be making the best decisions when it comes to fundamentals.
Note, Avi is 22 years old …
The third shoe to drop (AI has many feet) was a study from MIT showing that 95% of corporate AI pilots aren’t achieving a decent ROI. When you double-click on that study, you’ll see MIT is citing top-down efforts, where execs attempt to sprinkle AI magic from the sky, rather than hardcore business process re-engineering with these new capabilities. Changing core business processes is a tough nut that can take years of focused effort.
NVIDIA is the AI bellwether stock, representing the picks and shovels of the AI gold rush, and earlier this week they reported record sales. So, no, AI isn’t slowing down.
I talked about Gartner’s technology Hype Cycle around a year ago, and we’re certainly not through it yet — so this won’t be the last time we hear a chorus of AI critics. We will see plenty of friend.com-type companies come and go, and for sure some people investing in these will lose money, but the technology is here to stay and it will keep getting better.
Looking for unanticipated consequences like social media coming out of the dot-com boom, I happened across something unexpected and entertaining (and not so evil): a rash of AI music now playing across the airwaves, err, TikTok. AI classics like “I glued my balls to my butthole again” and others even worse, which I’ll spare you from here.
It reminds me of when I was a kid, tuning into Dr. Demento every Sunday night where, to my glee, he’d play songs from the likes of Weird Al and others — mostly silly, sometimes a little offensive. This new generation’s version entertains me, and I still get a stupid grin on my face every time I get the chance to play with a new whizbang AI tool.
Right. AI does some useful stuff.
Artists should get paid when their work is used to train AI, and I look for union contracts to cover this — because once a model is trained, it’s pretty much impossible to pay them each time their work is actually used.