For decades, digital products were built for a single audience: people. Today they have a second user - intelligent algorithms that can independently perceive information and act on behalf of humans. This dual-audience tension once surfaced mainly in web design (as a compromise between user experience and search engine optimization), but by 2026 it has expanded far beyond SEO.
AI agents are now full participants in the digital ecosystem. These autonomous systems ingest data, reason, and make decisions. They don't require familiar graphical interfaces — instead of screens, they "read" data structures, APIs, text, and contextual signals. As a result, the very notion of design is changing: we now must account for two fundamentally different recipients. Humans and machines perceive information differently and have different interface requirements.
This shift is already visible everywhere. Not long ago, the main touchpoints were websites and apps designed for direct visual interaction. Today, more tasks are handled through chatbots, voice assistants, and conversational AI - without familiar buttons and menus. Powerful language models allow millions of users to get answers and services simply by talking. ChatGPT reached 100 million users in just two months, becoming the fastest-growing consumer app in history. Meanwhile, devices with voice assistants have exceeded 8 billion, and a significant share of queries now happens without a screen — via voice or text dialogue. One in three homes has a smart speaker, and about 52% of people use voice search daily. 71% of consumers prefer voice input to typing when available. Voice commerce is expected to reach $80 billion by 2026 — and all these users interact not through a traditional GUI, but through voice commands (Source).
Against this backdrop, product owners and designers face a difficult choice: should they keep investing in classic human-centered UX, or reorient toward machine perception - LLMs, parsers, search bots, and autonomous agents? In practice, the most valuable and resilient systems don't pick one side. They are built as hybrids - simultaneously understandable to humans and legible to machines. That approach requires new design principles, where clarity, trust, and usability for people combine with effective automation and transparency for AI.
In this article, we'll examine the strategic and practical foundations of both approaches, their tensions, and key trends at the intersection of UX and AI. We'll look at real strategies and examples to answer the main question: where is it wiser to invest resources today — in design for people, in architecture for agents, or in a deliberate balance between the two?
New interaction patterns: from screens to assistants and agents
User experience is moving beyond screens. In more and more cases, a person never sees a traditional product interface - a smart assistant or agent processes the request. The integration of GPT models into search engines (Bing Chat, Google Bard) and apps means users receive many answers directly in conversation with AI, without visiting websites. Generative AI makes it possible to get articles, advice, code, and more through a single universal interlocutor. For many scenarios, you no longer need a separate GUI for every service - it's enough to "talk" to an intelligent agent that finds and processes data on its own. As Microsoft cofounder Bill Gates put it, in the near future there will be a personal digital agent that "takes over" searches and purchases: "you'll never go to a search site again, you'll never go to Amazon again" (Source). Instead of manual searching or navigating catalogs, people will delegate those tasks to an AI assistant, bypassing familiar web interfaces.
Even visual tasks are now solved in new ways: voice and multimodal interfaces are emerging, where speech, text, and images merge. Smart assistants with screens (like Amazon Echo Show or Google Nest Hub) can display results, not only speak them. Screen readers for blind users read web pages aloud — effectively acting as intermediaries between a site and a person. New AI models can understand images, opening the door to camera-based interaction.
The current trend is that the interface "dissolves" and becomes less visible, turning into a conversation or an on-demand service. But it's important to note: screen-based interfaces haven't disappeared - they're evolving. Younger generations actively use AI, but still value simple, intuitive apps. We're seeing a transformation of classic UI: the interface of the future may be invisible, but it hasn't gone away - it has become a conversation, a gesture, or an automatically executed action.
How AI agents differ from ordinary software
AI agents have characteristics that challenge traditional design principles. Unlike classic software, an agent's behavior is non-deterministic: the same input can produce different outcomes at different times. This resembles human creativity and adaptability — and radically changes what "good design" means. For conventional software, we could design a rigid interaction flow in advance. For an AI agent, we need to build in flexibility and variants. The agent can decide which tools to use and what steps to take toward a goal. The designer no longer controls every dialog turn or screen - instead, they define the boundaries and principles within which the agent will act.
Additionally, agents learn. Their experience across hundreds of users can change their responses and logic. That makes design dynamic: the interface or system behavior can evolve after the product ships. In classic UX, this would be unthinkable - a button wouldn't suddenly "decide" to change its color or label. With agents, we're dealing with a semi-living system that evolves over time. This requires a different approach to testing, reliability, and design ethics.
Semantics and data structure: a language machines can understand
As AI agents grow in importance, machine readability of content becomes increasingly critical. Classic UX focuses on how information will be seen and understood by a person. Now we also need to consider how algorithms will extract and interpret it. AI models don't "see" beautiful typography and animations - they ingest text, code, and data. So design must give content clear structure and semantics.
One major trend is the broad adoption of structured markup (for example, the Schema.org standard) on websites. Semantic markup helps search engines and assistants understand what on a page is the product name, the price, the rating, and the company address. Put simply: for a smart assistant to "recommend" your product, your data needs to be machine-readable and correctly labeled.
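As a concrete illustration, here is a minimal sketch of building such markup programmatically. The `@context`, `@type`, and property names (`offers`, `aggregateRating`) come from the Schema.org vocabulary; the product data and the `product_jsonld` helper are hypothetical.

```python
import json

def product_jsonld(name, price, currency, rating, review_count):
    """Build a Schema.org Product description as JSON-LD.

    The @context/@type keys and property names follow the schema.org
    vocabulary; all data values here are illustrative.
    """
    return {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "offers": {
            "@type": "Offer",
            "price": str(price),
            "priceCurrency": currency,
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": str(rating),
            "reviewCount": str(review_count),
        },
    }

# Embedded in a page inside <script type="application/ld+json">...</script>,
# this is what a search engine or assistant actually "reads".
print(json.dumps(product_jsonld("Acme Kettle", 39.90, "USD", 4.6, 128), indent=2))
```

A crawler or assistant parsing this block can identify the price and rating without interpreting any visual layout at all.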
New initiatives are emerging as well. In professional circles, people discuss the idea of an LLMs.txt file - analogous to robots.txt - in which a website publishes AI-friendly instructions for models. The aim is to make it easier for an algorithm to recognize important sections, understand updates, and interact with content safely. This concept fits into a broader idea that Microsoft design leader John Maeda calls AX (Agent eXperience) - "experience for the agent." Maeda emphasizes that designers should think not only about human user experience (UX), but also about how easy it is for artificial agents to interact. A product should "know how to speak" to machines in their language — via open APIs, well-structured data, documentation, and metadata that act as "guide rails" for bots.
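The llms.txt proposal is still a community draft rather than a ratified standard, but its suggested shape is a plain markdown file at the site root: a title, a short summary, and annotated links to the pages a model should read first. An illustrative sketch (site name and URLs hypothetical):

```markdown
# Acme Store

> Online retailer of kitchen hardware. Product data is also
> available as JSON via the public API.

## Docs

- [Product catalog API](https://example.com/api/docs): query prices and stock
- [Returns policy](https://example.com/returns): plain-text summary for assistants
```

The point is the same as with robots.txt: give a non-human reader a cheap, reliable entry point instead of forcing it to reverse-engineer the site's navigation.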
Notably, efforts to improve machine readability often improve accessibility for people too. Semantic HTML, alt text for images, and logical structure help not only robots but also users — for example, people with visual impairments whose screen readers also "parse" the page. In that sense, human-centered design and machine-friendly design can reinforce each other. The principle "make it understandable for everyone" sits at the core of both accessible UX and good SEO.
Headless and API-first: a product without an interface?
Another major trend is the shift toward headless architecture and the API-first principle. "Headless" means separating the frontend from the backend. In simple terms, a service provides functionality via APIs and isn't rigidly tied to a single interface. Data and logic are accessible through programmatic interfaces, while the client - a website, mobile app, voice bot, smartwatch, or car dashboard - is chosen to fit the channel's needs.
For businesses, this brings flexibility: implement core functionality once, then connect new interaction channels without rebuilding the foundation. API-first strategy is already standard for many leaders. The rise of headless CMS and headless commerce shows that content is created independently of the channel: the same catalog or article can be delivered via API to a website, an app, partners, and voice services. Create content once and distribute it across unlimited platforms, from social media to AR/VR, without duplicating work (Source).
API-first success stories are striking. The payments platform Stripe positioned itself from the start as an "API-only" service. Instead of focusing on a flashy consumer-facing app, Stripe prioritized developer usability — offering a simple, powerful API with excellent documentation. That made it easy to integrate Stripe into any product. The result was rapid growth: as of 2024, Stripe holds about 17% of the online payments market, second only to PayPal. Another example is Netflix, which, after shifting to streaming, rebuilt its architecture around services and APIs to work across many devices. Netflix broke apart its monolithic system and separated the backend from client devices by exposing functionality through APIs. What previously required a separate app for each device became available through one shared set of services — enabling reach across hundreds of millions of devices worldwide and effectively ushering in a new era of digital streaming.
The key takeaway: a product designed as a platform with open integrations gains a major advantage. It becomes easier to plug into other services, easier for bots and assistants to recommend and use, and easier to adapt to new devices. In a world where users can arrive from anywhere (voice queries, chatbots, partner apps), API-first provides the flexibility you need.
This doesn't mean a "headless" product shouldn't have its own UI. It's about priorities: design the core and services first, then add "heads" for different contexts. That differs from the old model: instead of optimizing design only for a desktop screen or mobile app, your product lives in the cloud and provides data to whoever needs it, whether it's a person at a laptop or an AI agent booking a ticket via API.
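The core-plus-heads idea can be sketched in a few lines of Python (all names and data hypothetical): one channel-agnostic core function, and thin adapters that shape the same data for each client.

```python
import json

# Core service: channel-agnostic data and logic, defined once.
# In a real system this would query a database or headless CMS.
def get_article(article_id: int) -> dict:
    return {
        "id": article_id,
        "title": "Headless 101",
        "body": "Separate content from presentation.",
    }

# "Heads": thin adapters over the same core, one per channel.
def render_html(article: dict) -> str:
    return f"<article><h1>{article['title']}</h1><p>{article['body']}</p></article>"

def render_voice(article: dict) -> str:
    # A voice assistant needs plain speakable text, not markup.
    return f"{article['title']}. {article['body']}"

def render_api(article: dict) -> str:
    # An AI agent or partner app consumes the raw structure.
    return json.dumps(article)

article = get_article(1)
print(render_html(article))
print(render_voice(article))
print(render_api(article))
```

Adding a new channel (a smartwatch, a chatbot) means writing one more small adapter; the core never changes.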
When bots win the design battle
There are domains where designing for bots has already become essential. Public APIs are the purest example of "design for machines." They're created not for end-user humans, but for developers and their software. Usability is measured differently here: predictability of responses, stability, and excellent documentation. An API interface is a strict structured contract that another system can understand. In this context, "bots-first design" isn't just acceptable — it's mandatory.
Social platforms have also, in many ways, handed control to algorithms. The Twitter (X) feed, Instagram Explore, TikTok's For You - these are built by recommendation systems. Content design aims to present material so a machine can recognize the topic, quality, and likely audience interest. The success metric is human engagement, but the "client" receiving the content is the ranking algorithm. It's no surprise that creators today optimize their work for the algorithm no less than for real people. From thumbnails and video previews to posting time — everything is shaped by what the recommendation system rewards. In effect, content creators increasingly design "for AI" to reach humans through the noise.
The 2025–2026 shift: from human-centered to agent-centered design
The last two years have been a turning point. AI agents are shifting from passive observers to active participants. They don't just adapt to user actions - they propose and execute tasks.
Analysts estimate that in 2026 up to 40% of apps will include specialized agents (compared to less than 5% in 2025). This leap forces a redesign of assumptions. We're moving from visual hierarchies for humans to machine-readable structure for agents. In e-commerce, for example, "shopping bots" are emerging that can make purchases autonomously based on criteria. For them, what matters isn't a beautiful catalog but clearly described product attributes, a reliable cart and payment API, and the ability to compare and decide quickly. Store interface design starts to focus more on intent and data than on clicks and animations.
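What a shopping bot needs from a store is machine-comparable attributes, not visuals. A hypothetical sketch of the agent's side: filter a structured catalog by the user's criteria and pick the cheapest acceptable match.

```python
# Hypothetical structured catalog an agent might ingest via a store's API.
catalog = [
    {"sku": "A1", "name": "Kettle Basic", "price": 25.0, "rating": 3.9, "in_stock": True},
    {"sku": "B2", "name": "Kettle Pro",   "price": 39.9, "rating": 4.6, "in_stock": True},
    {"sku": "C3", "name": "Kettle Lux",   "price": 59.0, "rating": 4.8, "in_stock": False},
]

def choose(catalog, max_price, min_rating):
    """Pick the cheapest in-stock item meeting the user's criteria, or None."""
    candidates = [
        p for p in catalog
        if p["in_stock"] and p["price"] <= max_price and p["rating"] >= min_rating
    ]
    return min(candidates, key=lambda p: p["price"]) if candidates else None

pick = choose(catalog, max_price=50.0, min_rating=4.5)
print(pick["sku"] if pick else "no match")  # prints "B2"
```

If the price or stock status is only rendered visually - say, baked into an image or injected by a script the bot can't run - the product simply doesn't exist for this buyer.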
At the same time, this dramatic shift is controversial. Critics worry that "design for AI" makes products soulless. Designs that are generated or optimized by algorithms can look formulaic — repeating grids, generic gradients, minimal creative individuality. They lack the "feel" human designers bring. And not everyone is comfortable handing agents so much control: fully automated interfaces can disorient users who don't understand what's happening behind the scenes.
When to apply each approach
There is no universal recipe. Much depends on context. Human-centered design is best where empathy, creativity, trust, and ethical judgment matter. In mental health or education apps, for instance, human nuance is irreplaceable - AI can assist, but tone, support, and understanding must remain subtly, recognizably human.
Agent-centered design excels where speed, scale, and clear structure are paramount. Routine processes, large datasets, and instant computations are areas where algorithms outperform humans. If a task is well-formalized (e.g., system monitoring or instant translation), you can delegate it to an agent and design more around its needs, giving humans only final results and control indicators.
Often, though, hybrid design is optimal, since most real scenarios blend routine and creativity, scale and nuance. In online banking, an agent might flag suspicious transactions automatically, but a human analyst still decides whether to block them. The interface design must work for both: the system structures and visualizes data for the expert, and the expert can intervene at any time to adjust the system's actions.
Core principles of design — for people and for agents
Despite the changes, the core principles of good UX still hold. Clarity, consistency, user control, error recovery, and transparency remain relevant - including in hybrid systems. The difference is that there are now two "users," and you need to ensure both can understand what's happening. If you introduce an AI feature, make sure it fits logically into the scenario and doesn't violate expected interface behavior. If a bot reads your content, ensure your data structure is consistent across pages. The goal is smooth interaction: a person should achieve their goals with minimal effort and maximum satisfaction, and a machine should get the required data without failures or ambiguity.
Practical questions when choosing an approach
Before deciding where to focus design efforts, it helps to ask several key questions about your product:
- Who is the primary user for this function? If the direct consumer is a person, start with human UX. If much of the work happens via integrations or bots, invest more in APIs and structure.
- How formalized and repetitive is the task? Clearly defined tasks are easier to automate and delegate to agents. Ambiguous, creative tasks need human involvement and flexible interfaces.
- What matters most for success? If it's emotional resonance and trust, focus on human-centered design. If it's scale and efficiency, focus on algorithms and data.
- How will the system behave during failures? Design failure modes. Ideally, a human can step in when an agent fails (or at least the user gets a clear explanation and an exit). If the human fails (e.g., wrong input), an agent can help by suggesting a fix.
- How will the product evolve over time? If you expect frequent new integrations and channels, build API-first. If long-term brand recognition and UI consistency matter most, invest in visual design.
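The failure-mode question above can be sketched as a simple escalation pattern (all names and thresholds hypothetical): the agent acts only when confident, proposes when unsure, and hands off with a clear explanation when it fails.

```python
def handle_request(task: str, answer: str, confidence: float) -> dict:
    """Route an agent's result: auto-apply, ask the person, or escalate.

    The 0.9 / 0.5 thresholds are illustrative, not recommendations.
    """
    if confidence >= 0.9:
        # High confidence: the agent acts, the person sees the outcome.
        return {"action": "auto", "answer": answer}
    if confidence >= 0.5:
        # Uncertain: show the proposal and let the person decide.
        return {"action": "confirm", "answer": answer,
                "note": f"Agent suggests this for '{task}'; please confirm."}
    # Agent failed: give the user an explanation and a manual exit.
    return {"action": "escalate", "answer": None,
            "note": f"Could not complete '{task}' automatically; routed to manual handling."}

print(handle_request("block card", "blocked", 0.95)["action"])   # prints "auto"
print(handle_request("refund order", "approve", 0.7)["action"])  # prints "confirm"
print(handle_request("dispute", "?", 0.2)["action"])             # prints "escalate"
```

The design work is in the middle and bottom branches: what the person sees there determines whether the hybrid system feels trustworthy or opaque.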
These answers help you find the balance. In reality, it's rare to have a purely "human-only" or "AI-only" case — you're defining the proportions.
Conclusions and recommendations
The framing "design for people or for bots" is misleading. Experience shows that successful design increasingly means "both”. The best products evolve toward symbiosis, where people and machines interact naturally.
From a practical standpoint, product leaders should seek reasonable balance and move step by step. Here are key recommendations based on the trends and examples discussed:
- Keep developing human-centered UX. Users still value usability, beauty, and emotional resonance. A product that feels good to use directly will stand out. Study your audience, improve navigation, visual design, and copy. This raises loyalty and conversion. Also, well-designed UX often implies a well-structured product overall, which indirectly helps agents too.
- Make your product legible to machines. Evaluate how your content looks through the "eyes" of a search bot or voice assistant. Add microdata and semantic tags — products, reviews, events should be marked up. Ensure key information is accessible without heavy scripting and not hidden behind unnecessary logins. Where possible, provide public APIs for key functions. This increases the chances your service will work in new contexts — from partner apps to voice search results. Make it easier for algorithms to understand your product - and they'll bring users to you.

