When Video Broke Us All
From video hype to burnout — and AI is next
Video may have killed the radio star, but during the pandemic it damn near killed us all. Even today, I get a little PTSD twinge when I join a video call. Which is sad, because I used to be the most enthusiastic (to the point of being annoying) video evangelist the world had ever seen.
Zoom’s stock price tells the story:
Forget TP, ZM was the Kleenex we collectively sobbed into through the pandemic. And it wasn’t just Zoom: demand for my product at the time, Teams video, turned up to 11 overnight while we frantically scrambled to keep video rolling for a country now under lockdown. Two years running at full speed ground my team down and fed an intense burnout I’m finally shaking off.
After the pandemic, we all turned off our video. No more meetings! We were exhausted. You know how traveling in a noisy plane or driving for long periods will tire you out? Your brain is working hard filtering out the noise and the lights so you can function. The cacophony of atypical stimulation wears you down.
You might hear this called cognitive load, and boy oh boy does it spike in video calls.
A video call is so much work: a screenful of ever-moving faces, slides packed with data points and trends, chats scrolling ever faster; your brain compensating for mute, lip sync, lag, missing facial and body cues; voices talking over each other and — the worst — the constant and terrifying presence of your own zombie face staring back at you.
It wasn’t always like this. In the 2010s, video was the hot new frontier in collaboration with the brightest of futures.
Not unlike this AI moment we find ourselves in.
In 2010, I walked into what was then the top-tier video setup — a Cisco TelePresence room — and bam, I was hooked. TelePresence was the future of meetings — Star Trek’s holodeck was within reach.
Cisco was my #1 enemy then and we — Microsoft — had nothing that came close. Like the holodeck in Star Trek, TelePresence required a dedicated room. Three massive screens took up the far side of the room, curving to form one side of a virtual table fronted with a matching real table and chairs. Cameras, mics, speakers and screens sprouted everywhere, connected with fat bundles of cable in the room — and out across the internet.1

The idea was to make the experience as lifelike, as natural, as possible. TelePresence was marketed as immersive video. But it wasn’t.
TelePresence carried a silly price tag, so it was only for the people with a corner office.2 And given Cisco’s dominant position in networking3, we (again with the royal we - Microsoft) countered with our strength, the PC on every desk — video for everyone! The Skype acquisition was part of this strategy, and we quickly brought out a ton of cheap PC peripherals like webcams and speakerphones so every PC could do videoconferencing.
Microsoft Research got spun up and developed the coolest 360° ring camera perched on top of a speakerphone, which we called RoundTable.4 Pop it in any conference room and you’re good to go! It had groundbreaking tech for the time. We even developed a prototype telepresence robot that could roll into a meeting, glide up to the table, and attend on your behalf.5

The mantra was reduce travel, improve remote collaboration and outcomes, save money. Video usage slowly grew. Then March 2020 hit, and Zoom came out of nowhere to win the day with an easy-to-use product where anyone could get into a meeting anywhere, with consistently good audio and the ubiquitous Brady Bunch video gallery.
Zoom capitalized on all the early work we did to make PCs video devices.
The subsequent pandemic years saw a hyper-focus on video innovation, with new layouts and features coming fast and furious. While quality steadily improved and we shipped a few great features6, it was mostly novelty stuff. What new video view could we add to entertain the increasingly bored and stressed-out masses joining call after call after call from their bedrooms?
Remember when this Texas attorney joined a courtroom hearing on Zoom and couldn’t turn off his cat filter?
I’m here live judge … I’m not a cat!
We threw in animated animal avatars to go with the cat filters, fantastical backgrounds (look, I’m taking my meeting from the Star Wars Cantina!), and a host of silly video layouts to avoid the Brady Bunch gut punch. At the same time, Mark Zuckerberg went all in on the online metaverse, featuring all the same oppressive cognitive load, now multiplied tenfold by a chunky VR headset strapped to your face7.
None of it stuck. None of it solved the foundational problem of video meetings — at the core they felt deeply unnatural — and we got sick of it, it wore us down.
So, exiting the pandemic, video use, along with Zoom’s stock price, plummeted. AI and new hardware have now solved many of the problems that plagued videoconferencing. Google Research’s Project Starline, commercialized into Google Beam and now sold by HP, is the latest.8
Google found a way to make virtual meetings suck less
Too little, too late — collectively, society made the call: we hate this. Let’s skip the holodeck. Millions of years of human interaction are wired into our brains, and no twenty-year-old tech stack is going to displace that.
Now, as we charge into the age of AI — where we are increasingly interacting with computers instead of people — will we see another massive societal allergic reaction?
AI agents are now commonplace in online meetings. They make perfect scribes, summarizing notes and capturing tasks. They are quickly evolving to the point where, for example, a weekly project status meeting no longer requires humans. A group of AIs (imagine your agentic AI twin or double) can look across the workloads, tasks, and dependencies in an overall project plan, backed by data and dashboards, and figure out the best path forward better than any two-hour fustercluck9 could.
You get two hours of your life back; we’ll let you know what to work on for the rest of the week.
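The scribe half of that scenario is already plumbing you can sketch in a few lines: hand a meeting transcript to a chat model and ask for decisions and action items. Here’s a minimal illustration, assuming the OpenAI Python SDK; the model name, prompt, and transcript filename are placeholders for the sake of example, not anything a real product ships:

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def scribe(transcript: str) -> str:
    """Ask a chat model to act as the meeting scribe: summary, decisions, tasks."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a meeting scribe. Return a short summary, key "
                    "decisions, and action items with owners and due dates."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical transcript export from your meeting tool of choice.
    with open("weekly_status_transcript.txt") as f:
        print(scribe(f.read()))
```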
If that sounds a little dystopian, consider how ChatGPT spread like wildfire: 1 million users in 5 days; 100 million in 60; approaching a billion today. Why? Because human. Because you talk to it. Natural language, not code. Share pictures, talk through problems, brainstorm ideas, argue points. We’re comfortable with these interactions, it’s how we work, it’s what we need.
While video calls became profoundly inhuman, even inhumane, AI interaction is getting increasingly human. I was checking if the movie I just watched had a sequel, and when my go-to AI, GPT, couldn’t find it, we had a word:

Getting an unexpected response like this is one of the reasons we love AI. Generative AI, the engine behind chatbots and agents, slots into the same productivity and collaboration category of software — like videoconferencing — that I built out for companies throughout my career. We would often quantify productivity gains from these systems. For example, a project gets done 10% faster when the entire team can collaborate on it within Teams chat channels.
AI productivity results are much, much better. AI has shown the ability to cut task time by 40-60% across a broad range of functions like research, documentation, and coding. In some cases, it compresses a task by 80%.
But will we turn on AI like video? Do we really want a future where we don’t meet? Do we want to be represented by digital twins? Told by the AI boss what to do?
The pandemic showed that we humans will turn wholesale, as a society, against tech. AI is undoubtedly the next platform of computing, and this new cognition layer is a boon across most disciplines, so it’s going to work under the covers regardless.
But how much of our humanity — our direct interactions with each other — will we tolerate it replacing?
If AI keeps on being that faithful dog trotting behind us, always ready to help, the love may grow. But if it starts replacing the human connections we value and need — like turning us into managers of digital twins — we might just pull the plug.
Just like we did with our webcams.
TelePresence needed ridiculous network performance to work well, which — at least in theory — aligned perfectly with Cisco’s core networking business. However, it was too complicated, requiring a dedicated ‘concierge’ to keep it running. Execs didn’t like that, and they didn’t like being relegated to a floating head sequestered away in the special-purpose, dimly lit room down the hall.
Like a quarter of a million dollars for a single room. Ironically, execs hated it.
While Microsoft owned the enterprise desktop, Cisco owned the enterprise network and they coupled TelePresence with it, developing an arcane set of network protocols that prioritized audio and video network traffic for performance, broadly called QoS or Quality of Service.
Microsoft couldn’t compete there, so we fell back to our strengths in software and the PC to subvert their customized hardware and networking. Cisco’s TelePresence QoS model is largely gone, but the underlying DSCP-based prioritization (EF for audio, AF41 for video) remains the standard for real-time media on enterprise networks.
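For the technically curious, here is a rough sketch of what that marking looks like from an application’s side. It’s a Python illustration for this footnote, not anything from the actual Teams or TelePresence stacks; it assumes a Linux-style socket API (Windows generally applies marks through a QoS policy instead), and the network still has to be configured to honor the marks:

```python
import socket

# Standard DiffServ code points: EF (46) for audio, AF41 (34) for video.
DSCP_EF_AUDIO = 46
DSCP_AF41_VIDEO = 34


def marked_udp_socket(dscp: int) -> socket.socket:
    """Create a UDP socket whose outgoing packets carry the given DSCP mark.

    DSCP occupies the top 6 bits of the IP TOS byte, hence the shift by 2.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return sock


# Audio gets Expedited Forwarding; video gets Assured Forwarding class 4.
audio_sock = marked_udp_socket(DSCP_EF_AUDIO)
video_sock = marked_udp_socket(DSCP_AF41_VIDEO)
```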
Nicknamed the Klingon Warship for its long neck and stubby head, RoundTable was brought to market by Polycom under the uninspired name CX5000. In another sad casualty of the video wars, Polycom was later bought by Plantronics, which — toward the end of the pandemic — was acquired by HP, where printers go to die.
While other, smaller companies shipped telepresence robots, we never shipped ours. It had a cool nerd factor, but like TelePresence, it didn’t come close to anything humans actually wanted.
Background blur was the best feature we shipped during the pandemic. It leveled the playing field for folks joining from their messy kitchen or a McDonald’s. On the consumer side, home video devices like Facebook Portal, Amazon Echo, and Microsoft Kinect all turned out to be duds.
A friend who worked at Facebook at the time told me how hard it was to find engineers to work on the metaverse, because the headset gave you migraines.
At $25K apiece (you need at least two), this is the high-end exec market Cisco TelePresence went after.



