The Human Factor - Digital Products in the Age of AI

29. April 2025 - from Till Könneker

What comes after apps and websites? What design and development skills do we still need? How will we interact with AI in the future and how do we design the interfaces for it?

There is no single answer to this question, but one thing is clear: AI is already having a huge impact on the development of digital products. It provides ideas, suggests designs, writes code, analyses our behaviour and automates processes - work that used to take us days is now done in minutes. Despite all its efficiency, however, AI remains a tool, not a substitute for human creativity.

Because what AI delivers is always based on existing knowledge and trained patterns. It recognises what was - not what could be. Although AI models now surprise us with astonishing results, these are also ultimately based on combined patterns of past data - real innovation only arises when we critically scrutinise these suggestions, refine them and transfer them to new contexts.

From my point of view as a designer, the same question is at the centre of every project: How do we make it understandable, useful and usable for people? By introducing context, emotion and responsibility. That's why I don't see the most important role in the age of AI as being played by the machine, but by the people who use it sensibly. Design remains a responsibility - and AI is a powerful but guided part of the process.

Janus head
AI - co-worker or tool? (AI generated)

Between UX and PX lies IX - the future of interaction

In March 2025, Duolingo announced that it would be replacing the term "UX Design" with "PX Design" (Product Experience Design). Mig Reyes, Head of Product Experience, explained that the new title better reflects the fact that all teams work in a product-centred way. Accordingly, the roles are now called Product Designers, Writers and Researchers.

UX or PX often think in terms of the product or the role of the user. In a world where interfaces are becoming more ephemeral, dynamic and often invisible, the experience of interaction becomes the central focus of design.

Whether we call our discipline UX, PX or product design is ultimately irrelevant. What matters is how we design. Good UX is often invisible - and therefore effective. It enables action, orientation and decision-making - without barriers. Good design puts people first.

Even in an AI-dominated product world, UX is not disappearing, it is simply becoming more complex and forward-thinking. Designers are creating solutions that allow people to feel safe and remain productive while benefiting from dynamic systems.

It may be worth introducing another term here: IX - Interaction Experience. I want to focus on the quality of the interaction itself - how it feels to operate, whether visually, linguistically, haptically or fully automated. IX reminds us to choreograph all modalities so that they interact meaningfully and intuitively depending on the situation.

Functions and surfaces are a means to an end. What matters is impact, orientation and trust. This is how we recognise good design - by the feeling of interaction, not by the label of role or methodology.

Hall 9000 «2001: A Space Odyssey»
Hall 9000, the AI voice interface from the 1968 film "2001: A Space Odyssey"

Design in a voice-controlled future - what remains of the classic UI?

When we increasingly control digital systems by voice, questions arise: What happens to visual design when the interface seemingly disappears? Will a GPT-like dialogue become the new standard - and will this mean the end of classic UI design?

I don't think so. Language is the most convenient way to give commands, but not always the best to show results. We grasp numbers, maps or error messages much faster in a graphic or through a short vibration than in a long chat response. That's why good design will mix all channels: language for input, visual or haptic cues for everything we need to understand at a glance. The designers' task is to make this change so fluid that it seems natural.

apple macintosh system 1 (1984)
apple macintosh system 7.5.3 (1991)
windows 2.0 desktop (1987)
windows 3.0 desktop (1990)
windows NT desktop (1993)
windows 95 desktop (1995)
mac OSX 10 Desktop (2001)
mac OSX Lion Desktop (2011)
Osborne 1 Computer Screen (1981)
Visual appearance of operating systems from 1981 to 2011

In language-based systems, design becomes orchestration: What information appears when? Which element is visible? How does the AI communicate its decision? Instead of fixed, static interfaces, we need flexible visuals that react to the situation.

This brings new challenges. How do you make an invisible system understandable? Micro-interactions such as small animations, status displays and adaptive layouts create transparency without being overwhelming. They convey trust because users can see or feel what the AI is doing at all times.

So even in a voice-first world, design means thinking from a human perspective. UX designers are becoming conversational architects who plan how a dialogue feels and develops. Traditional UI is not disappearing - it is becoming one building block among many in a seamless, multimodal interaction.

Multimodal means that the system uses multiple channels such as voice for quick commands, graphics for complex information and haptics for discrete confirmations.

Top 10 skills for designers and developers in the age of AI

If you want to design and develop in a product world characterised by AI, you need more than just tools and templates. What is needed are skills that combine technology, ethics and user experience - and work across roles.

  • Prompt design & model governance - give smart instructions, check results, reduce bias

  • Systems thinking - link functional modules, content and data with user needs

  • Storytelling & change facilitation - guide teams and stakeholders through AI-driven workflows.

  • Human-AI Interaction Design - Designing comprehensible, trustworthy and intuitive user experiences

  • Multimodal Design Expertise - Integration of text, image, audio and sensor technology into consistent user interfaces

  • Rapid Prototyping - Use of generative AI for rapid development of prototypes, texts, images or code

  • Data Literacy - Analyse, evaluate and use data sources responsibly

  • Data ethics & Accessibility - Recognising discrimination risks, considering inclusion and paying attention to data protection

  • Sustainability know-how - Understanding and minimising the energy consumption of models while increasing efficiency

  • MLOps / AIOps basics - Understanding the training, deployment and maintenance of AI models

AI Voice
Voice is King
How do we design AI interfaces? (AI generated)

Open world apps - from fixed features to open platforms

Today, we build websites and apps with fixed functions. But people have very individual needs. AI-supported open-world platforms could change that: We describe what we need by voice and the AI builds the appropriate function directly.

Example: "I want to count everything that's important to me" - the platform spontaneously creates a flexible mini counting app. Documenting the amount you drink, recording visitors or counting a score? The same modular system, combined differently.

Such openness democratises software. We would no longer be reliant on fixed standard solutions, but could decide for ourselves what accompanies us on a daily basis. However, the human factor remains crucial: this freedom will only be utilised if interaction is intuitive, barrier-free and trustworthy.

A hybrid model would be easier to handle: themed apps - finance, health, learning - provide a curated basic function, but can be customised using AI. Users formulate their needs, the app activates suitable function modules (such as "Break down expenses by category" or "Adjust counting interval") and integrates them seamlessly.

From apps to personalised and seamlessly combined functions

Let's now imagine that we abandon the classic app logic. Instead of individual programmes, we use a uniform AI environment, deeply anchored in the operating system or in a digital assistant. Users speak in natural language - the assistant orchestrates all services and content in the background. It feels like everyone has a personal software butler that dissolves boundaries between apps.

Concrete scenario:

  • In the morning: "Plan my day." A personalised mini-app with appointments, a weather check for your clothes, to-dos and a lunch suggestion based on your nutrition plan is created immediately. The interaction is mainly verbal, visual content is only created and displayed on request.
    This personalised programme widget can now be used and further refined every day.

  • In the afternoon: "Create a 30-minute jogging route for me at 5 pm and show me my average speed compared to the previous year." The assistant retrieves the wearable data and adds a training graph to the app - all seamlessly, without ever having to switch apps or download an upgrade.

This would create a seamless, customised experience: Functions are generated, customised, saved or discarded ad hoc - exactly according to our specifications and needs.

Where we are today

The first harbingers of the open-world vision are already here. Simular.ai, for example, shows how an AI built deep into the system can work: Your macOS agent opens files, clicks buttons, fills out forms and links all of this into repeatable scripts - without any classic APIs. On the go, devices such as the Rabbit R1 with rOS or the Humane AI Pin worn on a lapel are trying something similar: tickets, playlists or routes can be called up by voice across web and app services, even if the process still seems a little bumpy.

All of today's solutions have one limitation in common: they remain thematically or hardware-bound. What is still missing is a truly OS-integrated platform that combines freely combinable function modules - camera, database, visualisation, payment, etc. - into completely new mini-apps by voice request, without silos, device lock-in or rigid workflows.

"UI / Useless Interfaces" AI art by __ewert__
"UI / Useless Interfaces" Art IA par __ewert__
"UI / Useless Interfaces" AI Art by __ewert__
"UI / Useless Interfaces" AI art by __ewert__
"UI / Useless Interfaces" AI art by __ewert__
"UI / Useless Interfaces" AI art by __ewert__
"UI / Useless Interfaces" AI art by __ewert__
"UI / Useless Interfaces" AI art by __ewert__
«UI / Useless Interfaces» – __ewert__(AI generated)

Hurdles on the way to the open world app

Current AI services such as ChatGPT already show what dialogue interfaces feel like. However, open-world systems will only become truly useful when they are deeply connected to the operating system, personal data, vehicles, smart homes and profiles. Whether we want this ubiquitous networking remains to be seen - but it is probably only a matter of time before the technology matures and social acceptance follows suit.

UX and trust

An open system is only useful if it remains simple. AI dialogues must be intuitive and error-tolerant. Unexpected results must not be frustrating. Transparency helps: Users can see what data the AI is using and can customise outputs. This creates trust - precisely because a lot happens in the background.

Misinterpretation and ambiguity

Natural language is often vague. "Make me a note app like Post-it" - simple text list or colourful sticky notes with alarm? AI understands context, but doesn't guess every detail. A clarifying dialogue mode ("Should the notes remind you?") intercepts misunderstandings. This requires careful interaction design, otherwise the dialogue becomes tedious.

Modularity and technology

A real open-world platform needs standardised building blocks - camera, data input, diagram, email, etc. The AI puts them together like Lego building blocks. The challenge: covering countless combinations with a limited number of modules or generating modules dynamically. Both are complex and require clear interfaces.

Quality assurance and troubleshooting

User-generated functions can contain errors - unclear description, AI dropouts. Who fixes bugs if there are no professional developers behind them? The platform must offer automatic tests, error messages and self-healing mechanisms so that problems are quickly recognised and corrected.

Acceptance and learning curve

For many, open world means a paradigm shift: instead of downloading ready-made apps, they formulate their wishes themselves. This requires guidance and a suggestion system that helps with the formulation. Only when assistants, device data, vehicles, home control and profiles are seamlessly linked will the concept become truly suitable for everyday use - and social acceptance can grow.

Safety & ethics

The more open the platform, the greater the risk of misuse. The AI must not disclose sensitive data, make discriminatory suggestions or hallucinate incorrect instructions. Clear guidelines (e.g. in accordance with the EU AI Act), content filters and human control instances are necessary to ensure long-term trust.

One particularly critical issue here is prompt injection - i.e. the targeted subversion of AI instructions through manipulated input. This article by Arun Nair impressively shows how quickly a system can be misled if security mechanisms are missing.

Such attack vectors show how important content filters, robust context delimitation and clear model governance are. Design must also share responsibility here - for example through smart interface boundaries, feedback mechanisms and comprehensible dialogue processes.

"Emptiness of complexity" by __ewert__ (AI generated)
«void of complexity» von __ewert__ (KI generiert)
«void of complexity» von __ewert__ (KI generiert)
«void of complexity» von __ewert__ (KI generiert)
«void of complexity» von __ewert__ (KI generiert)
«void of complexity» von __ewert__ (KI generiert)
«void of complexity» von __ewert__ (KI generiert)
«void of complexity» – __ewert__ (AI generated)

Sustainable use of AI

AI opens up fascinating possibilities, but has an ecological price: every prompt starts energy-hungry computing processes in data centres. Do we have to regenerate a 4K image x times for a social media banner - or is a version that we create once and adapt sufficient? By deliberately limiting the resolution, model size and number of variants, we avoid unnecessary CO₂ emissions. Working sustainably therefore means using AI in a targeted manner, conserving resources and being aware of the energy required. AI companies themselves have the greatest responsibility. Without significant leaps in efficiency, a recent analysis by the International Energy Agency (IEA, April 2025) predicts that the global electricity consumption of data centres will rise to around 945 TWh by 2030 - "slightly more than Japan's entire current electricity demand". AI-optimised data centres will account for the largest share of this increase.

According to Sam Altman, CEO of OpenAI, the company spends tens of millions of dollars on electricity costs because people say "please" and "thank you" to ChatGPT. So we can also save energy through efficient prompts. Let's just hope that the AI doesn't take revenge for our unfriendly behaviour.

Thank you AI Meme (2025)

A new evolutionary stage of design and development - the future is us!

Even in the age of AI, the human spark in design remains irreplaceable. When algorithms generate functionality at lightning speed, emotion becomes the real differentiator. Peter Barber, Head of Product Design at Delphos Labs, sums it up nicely:


«In a future where code becomes cheap and automation is everywhere, emotional residue may become the most valuable output of design. And it will be the most human part of every product we make.»

These "emotional remnants" only arise when designers bring in empathy and really understand the users. AI should reinforce our critical thinking, not replace it.

The job description is changing: instead of moving pixels, we curate dynamic systems, think holistically and set ethical guidelines. We remain an interface, a source of ideas and a quality filter - roles that AI cannot fulfil.

Looking ahead, digital experiences go beyond classic app frameworks. Connected devices, Zero-UI concepts and Ambient Computing make interaction seamless - adaptive systems adapt to the moment. As a result, designers are creating less isolated products and more cohesive experiences. The step from app silos to open voice platforms can make digital offerings more inclusive and personalised.

AI models are already writing complete functions or suggesting entire software architectures. Nevertheless, developers will not become superfluous - their tasks will shift. "By 2025, creativity will merge with efficiency: developers will integrate, monitor and refine AI systems instead of typing every line themselves", says software veteran Charlie Clark. Just as designers curate AI content, developers mould AI raw code into robust, ethical and maintainable systems. The focus is shifting from typing to systems thinking - less craft, more orchestration.

Many are also proclaiming the "end of design" because of AI. I see the opposite: a new evolutionary stage in which design returns to its strongest abilities - empathy, creativity and critical thinking. Precisely because arbitrary "code becomes cheap", the human element gains in value. Therein lies our task and our opportunity.

Who do we trust? (AI generated)
Will we soon trust machines more than humans? (AI generated)
We just noticed that you surf with Internet Explorer. Unfortunately, our website does not look so nice with it.

You want to know why that is?
We have written about it.

Blog

You need help with the changeover?
Get in touch. We are happy to help

Contact

Install a new browser?
There's lots of choice.

Browser