Wednesday, March 18, 2026

Q&A with Vivienne Ming

Vivienne Ming is the author of the new book Robot-Proof: When Machines Have All the Answers, Build Better People. She is also a neuroscientist and an entrepreneur.  

 

Q: What inspired you to write Robot-Proof?

 

A: Honestly, the book is the story I've been living for 30 years. I just finally wrote it down.

 

When I was a kid I was supposed to win a Nobel Prize. That was the expectation. Instead, I struggled through school, flunked out of university, and [spent] a big chunk of the ‘90s homeless.

 

Years later I got a second chance. I was never going to win a Nobel Prize, but I could still live a life that had meaning. I completed my entire undergraduate degree in 15 months, alongside two research projects.

 

In my first programming course I earned a perfect grade. The professor recommended me for a research assistantship in the Machine Perception Lab, and that's where I built my first model: a neural network that could distinguish a real smile from a fake one in photographs of human faces.

 

I was hooked. Not on "AI" exactly, but on the idea that machines could help humans understand humans. That distinction has driven everything since.

 

Robot-Proof is part memoir, part science, part provocation. I built the first AI system for managing Type 1 diabetes…for my own son after his shock diagnosis.

 

I've founded companies and run experiments in education technology, hiring and talent assessment, and applied neurotechnologies, with AI running through all of them.

 

I've spent nearly 30 years studying what natural and artificial intelligence can accomplish together, while watching most of the AI industry go in what I believe is precisely the wrong direction. The book is my attempt to make that argument in a way that's useful to people who aren't researchers but still care about their futures.

 

Q: How did you research the book, and did you learn anything that especially surprised you?

 

A: The research behind the book spans my entire career, but the finding that surprised me most came from a recent experiment I ran on hybrid intelligence: what happens when humans and AI work together.

 

I recruited teams of people and gave them one hour to make predictions about real-world events alongside AI systems. The predictions were drawn from real financial markets, which let us verify accuracy against the collective wisdom of thousands of motivated forecasters.

 

Most teams either handed off to the AI entirely or used it to confirm what they already believed. Neither approach worked particularly well. But a small fraction—roughly 5-10 percent—did something fundamentally different. They argued with the AI. They demanded counter-arguments. They used it to stress-test their own thinking rather than replace it.

 

Those teams were the only ones who consistently matched or outperformed the market.

 

What surprised me wasn't that they did better. It was what predicted membership in that group. Not IQ. Not technical skill. The strongest predictors were perspective-taking, curiosity, fluid intelligence, and intellectual humility—the capacity to genuinely wonder what you might be missing.

 

I've been making versions of this argument for years, but seeing it hold up in a rigorous, verified experiment still stopped me cold.

 

The neuroscience surprised me too. Students using AI in the conventional way—asking for answers and submitting them—showed more than a 40 percent reduction in the gamma-band brain activity that signals active cognitive engagement.

 

Their brains were measurably less active. A majority of people are using AI to literally substitute for their own thinking.

 

Q: How does your experience as an entrepreneur and scientist affect your views about AI?

 

A: In most ways they reinforce each other, but they also create a productive tension that I think keeps me honest.

 

As a scientist I'm trained to be skeptical of my own hypotheses and to let data change my mind. As an entrepreneur I've had to build things that actually work for real people—AI for education, AI for hiring, AI for Alzheimer's detection—which means confronting the gap between what a model does in a lab and what it does in someone's life. That gap is enormous, and most AI discourse ignores it entirely.

 

What both experiences give me is a long view. I've been working in this field for nearly 30 years, and I was always interested in what artificial and natural intelligence could accomplish together. In fact, I interviewed for grad schools by telling them I wanted to build “cyborgs.”

 

I watched the "learn to code" consensus solidify in the mid-2010s and gave a talk in 2016 predicting that machines would write most code within a decade. That was not a popular position at the time. The subsequent research has been kind to it.

 

Around the same time, I predicted that GPS navigation would accelerate cognitive decline by robbing us of valuable cognitive exercise. Now GPT is the new GPS as a threat to cognitive health.

 

The entrepreneurial experience specifically taught me that people never use the products I invent the way I thought they would. They are not perfectly rational agents. Most take the path of least resistance, even when it hurts them in the long run.

 

When you build AI that makes it maximally easy to offload thinking, people offload thinking. And they don't get it back just because you'd prefer they would.

 

Q: What do you hope readers take away from the book?

 

A: Two things, in tension with each other.

 

The first is that the situation is more serious than most people realize. We are not just facing a future in which AI might replace human labor. AI might well create an enormous number of new jobs, but who will be qualified to fill them?

 

We are living through a present in which AI is already reducing human cognitive engagement for the majority of people who interact with it. The brain's capacity for reasoning, for curiosity, for tolerating uncertainty—these are use-dependent. They atrophy.

 

The lazy myth that AI will free you to live an amazing, creative life, just because…it's not true.

 

The second is that this is not inevitable. The people in my experiment who got genuinely smarter by working with AI, the “Cyborgs,” were not special. They didn't have supergenius IQs.

 

They had developed specific capacities: the capacity to push back, the capacity to ask what they might be missing, to treat discomfort as information rather than as something to eliminate.

 

These “foundation” skills can be taught. They can be role modeled. They can be rewarded by parents, educators, managers, and even by AI tools themselves. You can be a better person; it's hard but you can do it. In a world rich in AI, you might have to.

 

I want readers to finish the book feeling that the choice is real and that it belongs to them. Not to regulators or AI companies or some future version of ourselves to figure it out later. Right now, today, in how you use these tools and how you raise and educate the people around you.

 

Q: What are you working on now?

 

A: I’m working on many things (even a screenplay), but two projects stand out. They feel connected even though they look very different from the outside.

 

At the Possibility Institute, where I’m the chief scientist and cofounder, we're studying innovation itself as a phenomenon.

 

My team is building AI systems that trace threads of discovery across history—mapping how unexpected intersections between distant fields produce breakthroughs, and how the same dynamics produce spectacular failures.

 

Think of it as turning James Burke's (truly amazing!) ‘70s documentary series Connections into a predictive tool: given where the threads of innovation currently are, where might they cross next?

 

It's early and ambitious, but the question it's asking (can we forecast discovery before it happens?) feels important for exactly the reasons the book describes. The future belongs to people and institutions that know how to explore the unknown.

 

The other is the ORA project, which uses psychological research, complex economic modeling, and AI to support political pluralism online. In plain terms: we're using machines to help humans see the humanity in their fellow humans.

 

If the book is about what AI does to individual cognition, ORA is about what it does to collective intelligence…and whether we can do something about that.

 

Q: Anything else we should know?

 

A: Only that the book is not a warning about AI. I've spent 30 years in this field because I believe deeply in what these technologies can do.

 

My first model distinguished real smiles from fake ones. Since then I've built models for diabetes and bipolar disorder, for reuniting orphan refugees with their families, for helping autistic children learn to read facial expressions.

 

I have seen what AI can do because I've done it. But I believe even more in human capacity…even when we let ourselves down, which we do, frequently, myself included.

 

What I am is impatient with a conversation that treats the future as something that will happen to us.

 

The K-shaped divergence I describe in the book—between the people who get sharper through AI and the people who get smaller—is not a law of nature. It's a choice we're making right now, mostly without realizing it.

 

I wrote Robot-Proof: When Machines Have All the Answers, Build Better People because I think we deserve to make that choice consciously.

 

--Interview with Deborah Kalb 