A new interface era
User interfaces used to be pretty straightforward. You pointed, clicked, maybe dragged a file or two. Then along came smartphones, and we got used to swiping and tapping our way through life. Now we’re entering a new phase where you don’t interact with software so much as you talk to it.
With the rise of agentic AI (systems that can reason, act, and adapt on your behalf), the way we interact with technology is being quietly but radically rewritten. Tools like Microsoft’s Copilot, OpenAI’s GPTs, and Salesforce’s Einstein don’t just sit there waiting for input. They suggest, act, and sometimes even decide. The interface isn’t just evolving; in many cases, it’s vanishing altogether.
So, is this the death of UI? Or just its next transformation?
What is agentic AI? (And why everyone’s talking about it)
Agentic AI is what happens when software stops waiting for instructions and starts making decisions. It’s not just a clever autocomplete. It’s a system that perceives what’s happening, reasons through it, and takes action, often without needing to be micromanaged.
Jensen Huang, CEO of NVIDIA, describes agentic AI as software that doesn’t just retrieve data but “perceives, reasons, and acts on your behalf.” That could mean anything from writing a report, to booking a trip, to managing a logistics pipeline. It’s less like a digital tool and more like a very proactive colleague.
Microsoft’s CEO, Satya Nadella, sees this as a seismic platform shift. At Microsoft Ignite, he described a “tapestry of AI agents” woven through our working lives. According to him, we’re moving toward software that understands context, uses tools on your behalf, and remembers what you care about. His take? Thirty years of change are being compressed into three. And at the heart of it is this move from forms and fields to natural language and intent.
Sam Altman, from OpenAI, sees agentic AI as the new universal interface. In his words, “talking to a computer has never felt really natural... now it does.” He envisions AI agents joining the workforce, handling tasks through conversation, whether that’s ordering groceries, drafting emails, or handling customer support. For Altman, natural language is no longer just an input method. It’s the whole interface.
So why does all this matter? Because it challenges the very foundation of how we interact with digital systems. Instead of learning software, users just tell the AI what they want. The system works it out. It’s a shift from control to delegation, from buttons and menus to dialogue.
That might sound like progress. But it also changes the rules. When the AI acts on your behalf, are you still in control? And what happens when it gets it wrong?
Here’s a link to some case studies showing how these systems are being implemented in real-world software today.
We've been here before: a brief history of interface shifts
The idea of AI replacing your interface might sound radical, but it’s not the first time users have had to rethink how they interact with machines. If anything, agentic AI is just the next stop on a long and bumpy road.
Here’s a quick look in the rear-view mirror:
Command line to GUI (1980s–1990s)
Text-based commands gave way to windows, icons, and menus. Suddenly, you didn’t need to remember cryptic syntax; just click around until you found what you needed.
Impact: Opened up computing to a broader audience.
Pushback: Power users grumbled about losing precision and speed.
Desktop to web (Late 1990s–2000s)
Software moved into browsers. Everything became “just a link” away, accessible from anywhere. But it also meant slower performance and unfamiliar interaction patterns.
Impact: Lower barrier to entry, global access.
Pushback: Loss of control, reliance on connection speeds, and the beginnings of the work-anywhere culture.
Web to mobile (2000s–2010s)
Touchscreens replaced mouse clicks. Apps were stripped down and optimised for fingers rather than keyboards. Context-aware features like GPS became standard.
Impact: Made computing more personal and portable.
Pushback: Smaller screens, limited functionality, constant notifications, and the expectation of always being available.
Traditional UI to Agentic AI (Now)
The interface is no longer visual; it’s conversational. You don’t browse for tools; you ask for outcomes. The system decides how to get there.
Impact: Removes friction for users who know what they want.
Pushback: Reduced transparency, less control, and new forms of error.
Each shift promised more simplicity but introduced new mental models. The mouse was intuitive… once you’d used one. The web was easier… once you figured out how links worked. Mobile apps felt natural… once you got used to touchscreens. Agentic AI is no different. Once the unfamiliarity wears off, the experience might just feel like magic.
But as always, the real test is how it plays out for the people using it day to day. Let’s get into that next.
The end user experience: smarter, simpler... or just confusing?
For end users, agentic AI can feel like a revelation or a riddle. It depends a lot on the context, the execution, and the person behind the keyboard. Here’s how things are shaking out so far.
What gets better
Natural interaction
Talking to software in plain English is a lot easier than learning a new system. As Don Norman, long-time champion of human-centred design, argued, technology should adapt to people, not the other way around. Conversational AI gets us closer to that ideal.
Less busywork
AI agents can summarise documents, write first drafts, analyse spreadsheets, and schedule meetings. It lets users focus on decision-making, rather than hunting for the right menu or exporting a report for the fifth time.
Easier access to insights
In tools like Excel, Copilot can surface trends or outliers you might never have spotted.
Consistency across tools
If your AI assistant works in your email, your calendar, your documents, and your CRM, that’s one less thing to learn. This consistency across platforms (like Microsoft’s Copilot or Salesforce’s Einstein) helps reduce friction and flattens the learning curve.
Room for personalisation
Over time, these agents start learning your preferences. Whether it’s how you like to communicate or what data you care about most, the potential for tailored support is high.
What gets complicated
Trust and accuracy
This is the big one. When the AI gets it right, it’s brilliant. When it gets it wrong, the confidence with which it delivers its answer can be dangerous. Users need ways to verify information, trace sources, and understand how the AI came to a conclusion. Otherwise, there’s a risk of blind trust in bad answers.
Loss of control and visibility
Traditional interfaces make options visible: menus, buttons, checkboxes. With AI, the possibilities are invisible until you ask. That can be freeing or disorienting. If the system acts on its own, users might wonder, "Why did it do that?" Designers need to offer clear explanations and let users take the wheel when it matters.
Privacy and data sensitivity
AI agents often rely on sensitive data to do their jobs. That raises legitimate concerns about what they can access, where that data goes, and who’s watching. Transparency here isn’t just nice to have; it’s critical. (This is a big enough topic that I’ll be diving into it in more detail in a separate article soon.)
Shifts in job roles
When AI takes over parts of your workflow, your role shifts. A support agent becomes a reviewer of AI-drafted responses. A marketer becomes more of an editor. This can be empowering, or it can feel like being reduced to quality control. As with any big tech shift, people need support to adapt, not just new tools.
Learning curve
Ironically, while talking is easy, knowing what to say isn’t always. Prompting an AI effectively is a skill. It’s not quite coding, but it’s not nothing either. End users are picking up new habits, learning to refine their requests and manage an unfamiliar relationship with software that sometimes pushes back.
I’ve been an expert in the technology space for many, many years, and as new technologies have emerged, I’ve generally found them easy to adopt. Yet I found the AI learning curve quite steep: decades of perfecting my Google-fu meant that changing the way I interact with the technology was very challenging, and it took some time before I was able to elicit good responses from AI.
Designing for the invisible: new UX challenges and considerations
If traditional UX design was about making the interface clear and easy to navigate, agentic UX is about making the invisible feel trustworthy and intuitive. That’s no mean feat.
Here are the big considerations designers and product teams now face.
Trust doesn’t come built-in
In the old world, trust came from seeing familiar patterns: save buttons, undo menus, dropdowns. With AI, users are asked to trust a system that might not show its logic or steps. Designers need to help users understand why the AI did something and give them ways to check its work.
That might mean:
Showing the source of facts or recommendations
Offering confidence indicators or uncertainty prompts
Letting users easily reverse or edit AI-driven actions
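As a thought experiment, these three ideas can be sketched in code: an agent action that carries its sources, a confidence indicator, and an undo handle. Everything here (the `AgentAction` record, the field names, the confidence threshold) is a hypothetical illustration, not any real product’s API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AgentAction:
    """A hypothetical record of something the agent did, built for verifiability."""
    description: str         # what the agent did, in plain language
    sources: List[str]       # where the facts came from, so users can check them
    confidence: float        # 0.0-1.0, surfaced to the user as an indicator
    undo: Callable[[], None] # lets the user easily reverse the action

    def explain(self) -> str:
        # Illustrative cutoff: treat 0.8+ as "high" confidence
        level = "high" if self.confidence >= 0.8 else "low"
        return (f"{self.description} "
                f"(confidence: {level}, sources: {', '.join(self.sources)})")

# Example: the agent summarised a report and kept a way back
history = []
action = AgentAction(
    description="Summarised Q3 sales report",
    sources=["q3_sales.xlsx"],
    confidence=0.9,
    undo=history.clear,
)
history.append(action)
print(action.explain())
```

The point isn’t the data structure itself, but the principle: every action the agent takes should carry enough metadata for the user to verify it, gauge how sure the system was, and back out of it.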
Clarity in ambiguity
Natural language is flexible. That’s a gift and a problem. A request like “Send the latest sales report to John” could mean five different things depending on who John is, what “latest” means, and which report you’re thinking of.
Designers have to help the AI disambiguate gracefully, without throwing up a wall of clarifying questions. A well-tuned agent asks just enough to stay on track, without making you feel like you’re filling out a form in disguise.
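One way to picture that balance is a resolver that answers what it can on its own and asks exactly one question when it genuinely can’t decide. The contact book, report store, and function below are invented for illustration; no real assistant works exactly like this.

```python
from datetime import date

# Hypothetical contact book and report store; contents are illustrative only.
CONTACTS = {"John": ["john.smith@example.com", "john.doe@example.com"],
            "Jane": ["jane@example.com"]}
REPORTS = [("sales_2024_q2.pdf", date(2024, 6, 30)),
           ("sales_2024_q3.pdf", date(2024, 9, 30))]

def resolve_request(recipient: str):
    """Handle 'send the latest sales report to <recipient>' with at most one
    clarifying question, rather than a form's worth of them."""
    # 'Latest' can be resolved silently: just pick the newest report
    latest = max(REPORTS, key=lambda r: r[1])[0]
    matches = CONTACTS.get(recipient, [])
    if len(matches) == 1:
        return ("send", latest, matches[0])  # unambiguous: act without asking
    # Only ask when the system genuinely can't decide between candidates
    return ("clarify", f"Which {recipient} did you mean: {', '.join(matches)}?")

print(resolve_request("Jane"))  # unambiguous, so no question
print(resolve_request("John"))  # two Johns, so one targeted question
```

The design choice worth noticing: ambiguity that the system can resolve from context (which report is “latest”) never reaches the user, while ambiguity that it can’t (two Johns) produces a single, targeted question.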
Human-in-the-loop is still key
Delegation is powerful, but full automation isn’t always the goal. Users often want to stay involved, even if it’s just to double-check or tweak.
Good UX means:
Keeping users informed before actions are taken
Offering previews, not just final results
Asking for approval in sensitive scenarios (e.g., sending an email, placing an order)
It’s not about taking control away; it’s about giving it back at the right moments.
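The pattern behind those bullet points can be sketched as a small approval gate: every action shows a preview, and sensitive ones pause for explicit consent. The action names and the `approve` callback are stand-ins for a real confirmation UI; this is a minimal sketch of the idea, not a production design.

```python
from typing import Callable

# Hypothetical set of actions that should never run without sign-off
SENSITIVE = {"send_email", "place_order"}

def run_action(name: str, preview: str,
               execute: Callable[[], str],
               approve: Callable[[str], bool]) -> str:
    """Show a preview before acting; for sensitive actions, require
    explicit approval before executing."""
    print(f"Preview: {preview}")            # users see what will happen first
    if name in SENSITIVE and not approve(preview):
        return "cancelled"                  # the user stays in the loop
    return execute()

result = run_action(
    "send_email",
    preview="Email to john@example.com: 'Q3 summary attached'",
    execute=lambda: "sent",
    approve=lambda p: False,  # in this example, the user declines
)
print(result)  # cancelled
```

Routine actions flow straight through; only the consequential ones interrupt the user, which is exactly the “give control back at the right moments” trade-off the bullets describe.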
Onboarding the user, not just the AI
You can’t just drop a Copilot into someone’s workflow and expect magic. People need help understanding what the AI can do, how to ask for it, and what to expect in return.
That might include:
Prompt suggestions and examples
Gentle nudges when users hesitate
Clear explanations when the AI can’t help
The mental model shift, from controlling software to collaborating with it, needs to be supported with just as much care as any new product launch.
Transparency around data use
If the AI is drawing from your calendar, your documents, or your company’s internal systems, users need to know. Not in a vague "we value your privacy" kind of way, but in clear, practical terms.
Can you delete the history? Can you restrict what the AI sees? Is it learning from you, or just using your input for one-off tasks?
Trust and adoption often hinge on these answers.
This is a fundamentally different design challenge, less about pixel placement and more about shaping a human-agent relationship. In a way, UX is now part behavioural psychology, part diplomacy, and part co-pilot training.
So, is UI dead? Or just getting out of the way?
If this shift feels monumental, it’s because it is. We’re not just redesigning screens. We’re rethinking how people interact with software altogether.
Agentic AI isn’t killing the user interface. It’s making it less visible, more conversational, and in many cases, more useful. But that shift brings its own challenges: trust, control, clarity, and context. When the interface disappears, the real design work begins, because users still need to feel confident, capable, and in charge.
Industry leaders like Nadella, Altman, and Huang are aligned on one thing: this is the next big platform shift. But whether it empowers people or overwhelms them will come down to execution, not aspiration.
The early signs are promising. We’ve seen glimpses of real productivity gains, more natural interaction, and systems that feel more like collaborators than tools. But we’ve also seen hiccups: hallucinations, broken mental models, and trust gaps that designers are still racing to close.
As Don Norman has long argued, the goal isn’t to make tech smarter. It’s to make it work for people. Agentic UX should be measured not by how advanced the AI is, but by how well the human and machine work together.
We’ve been through interface revolutions before, from command lines to mobile apps. This one’s no different. It’ll take time, iteration, and a bit of patience. But if we get it right, we may find ourselves spending less time learning software, and more time just... getting things done.
Next up, I’ll be digging into one of the thorniest implications of this new world: data privacy and security in the age of AI agents. When the system sees everything, how do we make sure it doesn’t share too much?
Stay tuned.