
Carl Pei’s 5 Bold Predictions on AI Agents Replacing Apps

Image credits: Slush


 

The smartphone app economy is worth over $200 billion — and Nothing CEO Carl Pei just put a timer on all of it.

 

Speaking at SXSW in Austin this week, Pei made one of the boldest statements any sitting tech CEO has made about the mobile industry: apps are going to disappear, and the companies whose entire value lives inside an app will be disrupted whether they’re ready for it or not. This wasn’t a vague futurist talking about 2050. Pei backed this up with a $200 million Series C raise and a full product roadmap centered on building the device that replaces the app grid entirely. When a founder puts that kind of money behind a prediction, it’s worth paying close attention.

 

I’ve been following this conversation for a while, and honestly, the part that stuck with me wasn’t the headline—it was the economic number buried underneath it. According to data from Sensor Tower, mobile apps generated over $200 billion in revenue last year. Apple’s App Store alone pulled in $85 billion in 2023, taking a 15 to 30 percent cut from every developer on the platform. That entire machine—indie developers, enterprise giants, and platform gatekeepers—is what Pei is saying AI agents will eventually dismantle. That’s not a product refresh. That’s an industry-level upheaval.

 

The App Grid Nobody Admits Is Broken

Before you can understand where AI agents are headed, you have to be honest about where we currently are. Pei made a point at SXSW that sounds almost too obvious but hits hard once you actually think about it: the fundamental way humans use smartphones has not changed in 20 years. We still have lock screens. We still have icon grids. We still have app stores. We still launch individual apps one by one to accomplish a single real-world task, then close them and open the next one.

 

“The current way we use phones is very old-school. It’s pre-iPhone—there used to be Palm Pilots and PDAs back in the day. And if you think about the user experience, it’s still very similar,” Pei said. His example was refreshingly mundane: arranging to grab coffee with a friend. To do that in 2026, you open a messaging app to coordinate, a maps app to pick a location, a rideshare app to get there, and a calendar app to block the time. Four apps. For coffee. That friction is so deeply normalized we don’t even notice it anymore—but it’s exactly the kind of problem AI agents are designed to eliminate.

 

What I find interesting here is that the industry has largely confused hardware improvement with genuine progress. Thinner bezels, better camera sensors, brighter displays — all real advances. But the interaction model underneath all of that hasn’t moved. You’re still the operator of your own device, manually feeding it instructions one step at a time.

 

What AI Agents Actually Do Differently

This is the part most coverage glossed over. Pei isn’t just describing a smarter version of Siri. He draws a sharp line between two stages of AI capability—and only one of them is genuinely transformative.

 

The first stage, already being tested by various companies today, is AI that executes commands on your behalf. Ask it to book a flight, and it books a flight. Pei dismissed this as “super boring.” It’s reactive, it’s useful in a narrow sense, and it still requires you to know what you want and ask for it. You’re still the operator. The agent is just a faster interface.

 

The second stage is where the real disruption begins. This is AI that learns your long-term intentions—not just what you asked for today, but what you’re working toward over weeks and months. If you’ve mentioned wanting to be healthier, the device proactively nudges you during an opening in your schedule. If you always prefer window seats and non-stop flights, the AI doesn’t wait for you to specify — it just books correctly every time without being asked.
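The "learned preferences fill the gaps" behavior can be sketched in a few lines. This is purely illustrative: the keys, defaults, and merge rule are assumptions for the example, not any vendor's actual intent model.

```python
# Illustrative sketch of stage-two intent modeling: long-term preferences the
# agent has learned are merged into every new request, so the user never has
# to restate them. Keys and values here are assumptions for the example.
LEARNED_PREFS = {"seat": "window", "stops": "non-stop"}

def build_booking(request: dict, prefs: dict = LEARNED_PREFS) -> dict:
    """Explicit request fields win; learned preferences fill the gaps."""
    return {**prefs, **request}

# The user only states route and date; seat and stop preferences are applied
# automatically, without being asked.
booking = build_booking({"route": "AUS->SFO", "date": "2026-06-01"})
print(booking)
```

The design choice matters: the explicit request always overrides the learned preference, which is the difference between a helpful default and an agent that ignores what you actually asked for.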

 

“I think it gets even more powerful when it starts surfacing suggestions for you; you don’t have to manually come up with an idea… when the system knows us so well, it will come up with things that we don’t even know we wanted,” Pei explained.

 

The memory and long-term context capabilities being built by OpenAI and Google are early proof points for exactly this kind of intent modeling. ChatGPT’s persistent memory feature and Google Gemini’s Personal Intelligence rollout, which now draws from your Gmail, Photos, and search history, are both live experiments in what Pei is describing. The infrastructure for the post-app smartphone is already being quietly assembled.

 

The Interface Nobody Is Building Yet — But Should Be

Here’s the contradiction sitting at the center of this entire conversation, and it’s one that most outlets missed entirely. Today, most AI agent demos involve the agent navigating apps the same way human thumbs do—tapping buttons, scrolling menus, and filling forms on interfaces designed for human fingers. Pei says that’s completely the wrong approach.

 

“The future is not the agent using a human interface. You need to create an interface for the agent to use. I think that’s the more future-proof way of doing it,” Pei said. This is a technically important distinction that has massive downstream consequences. An AI clumsily mimicking human touch gestures on a screen built for human hands is slow, error-prone, and fundamentally awkward. What agents actually need are structured, machine-readable APIs that expose service capabilities directly—think of Android Intents or iOS Shortcuts taken to a completely different scale.

 

Instead of an agent navigating the Uber app the way you would, a machine-native interface lets the agent query availability, pricing, and wait times directly and then act—no visual interface required at all.
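A minimal sketch of that difference, assuming a hypothetical rideshare capability (the provider names, fields, and quote values below are all invented for illustration): instead of tapping through screens, the agent receives structured quotes and acts on them directly.

```python
from dataclasses import dataclass

# Hypothetical machine-native capability: a rideshare service exposes
# structured quotes instead of a tappable UI. All names are illustrative.
@dataclass
class RideQuote:
    provider: str
    price_usd: float
    wait_minutes: int

def get_quotes(pickup: str, dropoff: str) -> list[RideQuote]:
    # Stand-in for real capability calls; a production agent would query
    # each provider's structured API here.
    return [
        RideQuote("rideA", 14.50, 4),
        RideQuote("rideB", 12.75, 9),
        RideQuote("rideC", 13.10, 3),
    ]

def choose_ride(pickup: str, dropoff: str, max_wait: int = 5) -> RideQuote:
    """Pick the cheapest quote whose wait time fits the user's constraint."""
    eligible = [q for q in get_quotes(pickup, dropoff) if q.wait_minutes <= max_wait]
    return min(eligible, key=lambda q: q.price_usd)

# No visual interface involved: the agent compares data and decides.
best = choose_ride("Home", "Blue Bottle", max_wait=5)
print(best.provider, best.price_usd)
```

The point of the sketch is the shape of the exchange: typed data in, a decision out, with no screen-scraping or simulated taps anywhere in the loop.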

 

Personally, I think this is the most underappreciated part of Pei’s argument. It means the entire architecture of how apps and services expose their functionality needs to be rebuilt from scratch — not for users, but for agents. Discovery moves from app stores to capability catalogs. The interface layer disappears entirely from the user’s perspective.
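The "capability catalog" idea can be made concrete with a toy registry. Nothing like this exists as a standard platform API today; the capability names and registration scheme below are assumptions purely for illustration.

```python
# Hypothetical capability catalog: services register machine-readable
# capabilities by name, and an agent discovers providers without an app grid.
CATALOG: dict[str, dict] = {}

def register(capability: str, provider: str, handler) -> None:
    CATALOG.setdefault(capability, {})[provider] = handler

def discover(capability: str) -> list[str]:
    """Return the providers that advertise a given capability."""
    return sorted(CATALOG.get(capability, {}))

# Two coffee shops expose the same "order_coffee" capability.
register("order_coffee", "beanhouse", lambda order: f"beanhouse:{order}")
register("order_coffee", "brewlab",   lambda order: f"brewlab:{order}")

providers = discover("order_coffee")
print(providers)  # ['beanhouse', 'brewlab']
receipt = CATALOG["order_coffee"][providers[0]]("flat white")
print(receipt)    # beanhouse:flat white
```

Discovery keys on what a service can do rather than which branded icon it hides behind, which is exactly the shift from app stores to capability catalogs described above.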

 

What the Big Players Are Already Doing

This is where the predictions and the real-world evidence start to converge—and the picture is more advanced than most people realize.

 

Samsung is the furthest along among major phone makers. The Galaxy S26, which launched in March 2026, ships with a tri-engine AI setup: Google’s Gemini for agentic tasks like booking rides and acting across apps, Perplexity for web-based queries, and an upgraded Bixby as the on-device assistant.

 

Crucially, Gemini on the S26 can now take autonomous action inside third-party apps — not just Samsung’s native apps, as was the case with the S25. That’s a meaningful and concrete step toward the agent-first vision Pei is describing. Samsung also plans to double the number of mobile devices running Galaxy AI features from 400 million to 800 million in 2026, according to company announcements.

 

Google is moving with similar urgency. Gemini is on track to fully replace Google Assistant across Android devices in 2026, extending to phones, smartwatches, cars, and smart home hardware. Google is also reportedly building a successor to ChromeOS—internally referred to as Aluminium OS—that would merge Android and desktop computing into a single AI-first platform. Sources suggest Gemini-powered smart glasses, developed in collaboration with Samsung, Gentle Monster, and Warby Parker, are expected to ship in 2026 as well.

 

Apple, characteristically, is taking a slower and more controlled path. The company’s biggest AI move in 2026 is expected to be a major overhaul of Siri, making it capable of multi-step task completion and more natural conversation.

 

Reports indicate Apple is considering adopting Google’s Gemini to power this upgraded Siri, reflecting an internal view that large language models may become commoditized — a strategic calculation that keeps Apple’s enormous cash reserves intact while competitors spend aggressively. Whether Apple’s patience proves prescient or costly will likely become clear within the year.

 

Industry insiders hint that OpenAI is also repositioning ChatGPT away from being a question-answering tool toward becoming what they’re privately describing as a “digital butler”—a proactive assistant that manages your schedule, nudges you about appointments, and coordinates across services without being asked.

 

According to OpenAI’s CEO of applications, Fidji Simo, within a year, answering questions will be “the least useful thing AI can do,” replaced by proactive agents running constantly in the background and handling real-world tasks autonomously.

 

The Predictions, the Rumors, and the Timeline

Gartner predicts that 40 percent of enterprise applications will have task-specific AI agents embedded by the end of 2026 — up from less than 5 percent today. That’s not a small shift. It means the software people use every day at work will gradually transition from tools you interact with into platforms that take autonomous action on your behalf.

 

IBM’s AI trends report for 2026 echoes this, describing what one analyst called “agent control planes” becoming real this year—single interfaces that kick off multi-step tasks across your browser, inbox, editor, and calendar simultaneously, without you managing a dozen separate tools.
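A control plane of that kind can be sketched as a planner that fans one goal out into steps across separate tools. In practice the planning step would be done by a model; here a canned plan keeps the sketch runnable, and the goal string and tool names are invented for the example.

```python
# Rough sketch of an "agent control plane": a single entry point turns one
# goal into multi-step tasks across separate tools. Tool names are illustrative.
def plan(goal: str) -> list[tuple[str, str]]:
    # A real planner would use an LLM; a canned plan keeps this runnable.
    if goal == "schedule coffee with Alex":
        return [
            ("messages", "confirm time with Alex"),
            ("maps", "pick a cafe near both"),
            ("calendar", "block 30 minutes"),
        ]
    return []

def execute(goal: str) -> list[str]:
    log = []
    for tool, step in plan(goal):
        log.append(f"{tool}: {step}")  # each step targets a different tool
    return log

trace = execute("schedule coffee with Alex")
print(trace)
```

The user issues one instruction; the coordination across messaging, maps, and calendar, which is the four-apps-for-coffee friction from earlier in the piece, happens behind the scenes.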

 

It is rumored that Nothing’s next flagship — likely the Phone (3) or Phone (4) — will be the first commercially available device to actively deprioritize the traditional app grid in favor of an agent-first interface. Pei has confirmed that Nothing is building its own proprietary operating system that will be, in his words, “significantly different” from anything available today.

 

In a post to the Nothing Community following the Series C close, Pei outlined a vision where each OS knows its user deeply, surfaces suggestions naturally, and executes tasks through agents once intent is confirmed. He extended this beyond smartphones, mentioning smart glasses, humanoid robots, and EVs as future form factors for the same OS.

 

Many believe the regulatory dimension of this transition is the most underexplored story in the space right now. If an AI agent books the wrong flight, sends a message you didn’t intend, or makes a financial decision on your behalf, legal responsibility frameworks are almost entirely unprepared to handle it. Privacy laws written for human app usage break down in a world where AI is acting across dozens of services simultaneously on your behalf. These questions don’t have answers yet, and the absence of regulatory guardrails may either accelerate adoption or trigger a sudden policy intervention that reshapes the entire market.

 

When I first heard Pei talking about this direction a year ago, I didn’t fully buy it. After digging into the Samsung S26 specs, the OpenAI agent roadmap, and Gartner’s enterprise adoption numbers, I changed my mind completely. The pieces are falling into place faster than the app economy wants to acknowledge.

 

The Real Stakes for Developers and What Comes Next

The developer implications here are significant and largely unresolved. A former Android executive described the potential trajectory as “Netscape-level disruption”—the kind that doesn’t just update a category but eliminates the underlying business model. If agents replace app-based workflows, the App Store model that generates $85 billion a year for Apple alone becomes far less relevant. Developers stop building for iOS and Android app stores and start building agent plugins, service APIs, and capability layers that AI can access directly.

 

According to an IEEE global survey, 52 percent of technologists expect AI personal assistants to reach mass consumer adoption in 2026—with agents handling grocery shopping, scheduling, and communication before users even register that they need help. Large retailers, including Walmart and Target, are already testing what they’re calling “agentic shopping”—AI that doesn’t just recommend products but actively manages reordering and price comparison across stores without any manual input.

 

Pei himself has been realistic about the timeline. He’s acknowledged that apps won’t disappear next year and that Nothing’s own OS still supports user-created mini apps as a practical bridge. In conversations predating SXSW, he suggested the full vision could take seven to ten years to materialize because people genuinely enjoy using apps, and habits shift slowly. But that’s not the same as the direction being uncertain. The direction is clear. The only variable is pace.

 

The smartphone app era was built on the premise that more tools equal more power. AI agents are being built on the exact opposite logic — that the best technology is the kind that removes itself entirely from your awareness and just handles things. Whether that future lands in three years or ten, the conversation Carl Pei is pushing into the mainstream deserves serious attention from every developer, founder, and platform company on the planet. The app icon has been the defining symbol of mobile computing for nearly two decades. The clock on it is running.

 

By Kavishan Virojh