Mar 26, 2026

Apocaloptimist: A New AI Documentary Opens in Naples

A new film about AI's promise and peril opens Friday at three local theaters.

The AI Doc: Or How I Became an Apocaloptimist opens in U.S. theaters Friday, March 27. It is playing locally at Alamo Drafthouse Naples, Regal Naples, and Prado Stadium 12 in Bonita Springs. Check each theater's website for current showtimes.


About the Film

The subtitle is a nod to Stanley Kubrick's 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, a dark comedy about nuclear anxiety and the people responsible for managing it. The team behind The AI Doc ran through more than 200 options before settling on "apocaloptimist," a word coined on screen by Jason Matheny, President and CEO of the RAND Corporation.

  • Apocalypse = existential risk + power concentration
  • Optimist = massive upside (climate, medicine, productivity)

This might destroy us, but also might be amazing.

Apocaloptimist captures the film's central tension: AI could concentrate unprecedented power in a handful of labs and pose risks that the people building it openly describe as existential. It could also be the most transformative technology ever built, solving problems in climate, medicine, and productivity that humans alone cannot. The film argues that both of those things can be true at once.

The AI Doc was co-directed by Daniel Roher, who won an Academy Award for Navalny, and Charlie Tyrell. It was produced by the team behind Everything Everywhere All at Once. What began as a planned one-year production stretched to nearly three years as the technology kept evolving faster than the filmmakers could capture it.

The premise is personal: both Roher and Tyrell learned they would become fathers during production, and that changed the film. As Tyrell told Variety: "If we were both still probably single guys without children, we would've had a really negative film that would just be called 'Apocalypse.' But we decided to call it this instead." Roher pre-interviewed roughly 140 people, put more than 40 experts on camera, and generated 3,300 pages of transcripts in the process.

The film also includes about 15 minutes of hand-drawn animation, produced at a rate of four to seven seconds per day. The animation is a deliberate counterpoint to the digital world the film examines, or simply something painstakingly human in the background. Either way, the choice says something.

The driving question is less "what is AI?" and more "why are a handful of people deciding the future of humanity?"


Who's in It

More than 40 experts appear on camera. A few to know before you go:

Sam Altman is the CEO of OpenAI, the company behind ChatGPT. In the film, he acknowledges it's "impossible" to guarantee AI develops well, and that children today will "probably never be smarter than AI."

Dario Amodei is the CEO of Anthropic, an AI safety company he co-founded after leaving OpenAI. His line: "Am I confident that everything's going to work out? No, I'm not."

Demis Hassabis is the CEO of Google DeepMind, a Nobel Prize laureate, and the researcher behind AlphaGo and AlphaFold. His view: "If something is possible to do, humanity is going to do it."

Ilya Sutskever co-founded OpenAI and is now running his own AI safety company, Safe Superintelligence Inc.

Reid Hoffman co-founded LinkedIn and is one of Silicon Valley's most recognizable evangelists for AI's potential.

Peter Diamandis is the founder of XPRIZE and Singularity University, and the film's most vocal optimist on what AI could do for human health and longevity.

Yuval Noah Harari is the historian and author of Sapiens. In the film, he calls AI "a deadly threat."

Tristan Harris co-founded the Center for Humane Technology and was a central figure in the documentary The Social Dilemma. He says active AI researchers "don't expect their children to make it to high school."

Emily M. Bender is a linguistics professor at the University of Washington and a persistent critic of how AI capabilities are described and marketed. She argues that AI narratives often exclude and dehumanize the people already being affected by the technology.

Eliezer Yudkowsky co-founded the Machine Intelligence Research Institute and has spent decades arguing that AGI will kill everyone if built without solving alignment first. He warns in the film of the potential for what he calls "abrupt extermination."

Two notable absences: Elon Musk agreed to participate and then backed out. Mark Zuckerberg declined.


What Critics Are Saying

Reception has been mostly positive. On Rotten Tomatoes, 82% of critics' reviews are favorable. Variety called it essential viewing. RogerEbert.com praised its emotional, inquisitive approach, while noting it gives optimistic AI visions significant screen time without interrogating questions of wealth and power concentration deeply enough. KQED offered a thorough look at how the film handles the full range of expert opinion it presents. The Sundance Institute published its own write-up after the film's premiere there in January.

For general audiences, it's an accessible introduction to a conversation that has largely been happening in tech circles. For people already following AI closely, it may cover familiar ground.


Why This Matters Beyond the Theater

Films like this shape how the public thinks about AI. Public sentiment shapes regulation. And regulation will directly determine which AI companies survive, which products reach the market, and how the technology gets built in the next few years. That chain of events is already in motion. An NBC News poll earlier this month found that 57% of registered voters believe AI's risks outweigh its benefits, and the policy response is accelerating. Three federal-level actions on AI landed in the past seven days, and protesters marched on AI company headquarters in San Francisco.

The White House framework · On March 20, the White House released a national AI legislative framework and handed it to Congress. The framework covers six areas, from protecting children online to workforce development, and calls for preempting state AI laws entirely, arguing that a patchwork of state regulation would undermine American competitiveness. The subtext is clear: the U.S. is in an AI race with China, and the administration wants to set the rules nationally so that race doesn't get slowed down by 50 different state legislatures.

Anthropic v. the Pentagon · On Tuesday, Anthropic argued in federal court against the Pentagon's decision to designate the company a "supply chain risk," a label previously applied only to foreign-linked companies like Huawei, never to a domestic one. The dispute started when Anthropic refused to let the Department of War (the Trump administration's rebranding of the Department of Defense) use its AI without restrictions, specifically for autonomous weapons and mass surveillance of Americans. The designation requires defense contractors, including Amazon, Microsoft, and Palantir, to certify they don't use Anthropic's Claude models on any military contract work. In practice, that threatens to cut Anthropic out of billions in government-adjacent business. The judge, pressing the government's lawyer on why the designation was warranted, said it looked like "an attempt to cripple Anthropic." An order on the preliminary injunction is expected in the coming days.

The data center moratorium · On Wednesday, Senator Bernie Sanders and Representative Alexandria Ocasio-Cortez introduced the AI Data Center Moratorium Act, which would freeze construction of new AI data centers nationwide until Congress passes federal AI regulation covering worker protections, environmental impact, and civil rights. The bill faces long odds in a Republican-controlled Congress.

The protests · On March 21, roughly 200 protesters marched through San Francisco from Anthropic's headquarters to OpenAI to xAI, demanding that AI companies commit to pausing development of frontier systems. These are small numbers, but the protests are growing, and the organizers include AI researchers and former tech workers.

The film doesn't cover these specific events, but it's asking the same questions: who should control AI, what limits should exist, and what happens when the technology outpaces the institutions meant to govern it.

Those questions are no longer theoretical.

For Southwest Florida, the specifics may feel distant. The consequences won't stay that way. The systems that emerge from these fights will touch every part of daily life here.


See It Locally

The AI Doc: Or How I Became an Apocaloptimist opens Friday, March 27 at:

  • Alamo Drafthouse Naples
  • Regal Naples
  • Prado Stadium 12 in Bonita Springs

Check each theater's website for current showtimes.
