
5 Considerations on How AI Rewrites the Rules of User Interaction in Complex Systems

  • davidearcoraci
  • 5 hours ago
  • 8 min read

Over the past months, I have had the opportunity to design AI features for a complex enterprise application built on the OutSystems Platform. What began as a technical exploration quickly became something far more profound: a realization that AI doesn’t just add new capabilities to a product, it transforms the way humans and digital systems interact. 




As AI begins to permeate workflows, interfaces, and decision-making, we are not simply adjusting our design toolkit. We are entering an entirely new paradigm where traditional navigation, user expectations, and established design patterns are being redefined in real-time. This article investigates through personal experiences and public research what that means for the future of UX and for all of us who build digital products.




A Real-World Lesson: How AI Transformed a Complex Review Workflow


In our recent project, the transformative potential of AI became evident almost immediately.
The application we designed served a company in the Waste Management Services sector, where one of the most critical internal processes is the daily review of operational data from the field. Each service generates a wide and complex set of information, including photos, notes, clock-in/clock-out records, GPS traces, and other activity logs that the internal team must examine to determine whether the service meets quality standards.
This review process is both high-stakes and labor-intensive: every service must be evaluated and classified as a pass or fail, requiring meticulous attention, consistent judgment, and substantial cognitive effort from the team.




After conducting user interviews and mapping the current-state workflow, it became clear that this review process was not only a bottleneck but also an ideal candidate for intelligent automation. It exhibited all the key indicators (a small code sketch of this screening checklist follows the list):



  • High volume: The team had to process a large volume of field data every day.

  • Repetitive tasks: Much of the work involved performing the same validation steps across thousands of service records.

  • Complex but learnable rules: The judgment criteria were complex and detailed, but consistent enough for an AI model to learn and apply reliably.

  • Low strategic value of manual effort: Human time was being spent on mechanical checks rather than higher-value analysis or decision-making.

  • High opportunity for impact: Automating this process would significantly reduce cognitive load, enhance consistency, and enable the team to focus on deriving meaningful insights.
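
To make these indicators a bit more tangible, here is a minimal sketch in Python of how such a screening checklist could be expressed; the field names, threshold, and example values are hypothetical, not taken from our actual assessment.

```python
from dataclasses import dataclass


@dataclass
class AutomationCandidate:
    """Hypothetical screening checklist for a single workflow step."""
    name: str
    daily_volume: int                   # records the team handles per day
    is_repetitive: bool                 # same validation steps for every record
    rules_are_learnable: bool           # criteria are complex but consistent
    manual_effort_is_mechanical: bool   # human time spent on checks, not analysis

    def is_good_candidate(self, volume_threshold: int = 500) -> bool:
        # A step qualifies when it is high-volume, repetitive, rule-driven,
        # and currently consumes mostly mechanical human effort.
        return (
            self.daily_volume >= volume_threshold
            and self.is_repetitive
            and self.rules_are_learnable
            and self.manual_effort_is_mechanical
        )


# Example: the daily field-data review described above.
review = AutomationCandidate(
    name="daily service review",
    daily_volume=3000,
    is_repetitive=True,
    rules_are_learnable=True,
    manual_effort_is_mechanical=True,
)
print(review.is_good_candidate())  # True
```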


When we introduced an AI concept into the workflow, everything changed, not just the interface, but the mindset of the users.
Suddenly, the dense, information-heavy pages that once dominated the legacy application lost their prominence.


Users no longer needed to dig through endless lists to search for issues or anomalies. 



Instead, their focus shifted to a very different question:




“How is the AI performing, and where should I intervene?”




The role of the user evolved from “manual reviewer” to “AI auditor.”
This shift reshaped not only the structure of the application but also the internal dynamics of the team. 

Here are some of the key considerations that emerged from the design process:




1. Challenge the Old Paradigm: Design Beyond Pre-Defined Navigation Paths




Understanding this paradigm shift was a fundamental step in laying the foundation for a visionary project.




For decades, complex applications have relied on a stable and predictable interaction model. Users search for information through structures we have carefully designed, including menus, sections, dashboards, filters, and hierarchies. Every step is mapped out. Every workflow is predetermined.




The unspoken contract was simple: designers decide how information is organized, and users learn how to find it. If users needed insights, they navigated through our architecture to reach it screen by screen, click by click.




This rigidity was not a flaw; it was a necessity.


Systems could only serve what they were explicitly designed to serve. Interfaces acted as maps, guiding users through structured data and predefined use cases.



However, the introduction of AI breaks these established rules, handing us an entirely new, large, and very different "toolbox" to work with.




2. Embrace the Shift: From Navigating a System to Conversing With It





When a system becomes intelligent, able to interpret context, predict needs, and respond dynamically, the purpose of the interface also begins to change.



Instead of navigating tons of information in a rigid structure, users can:



  • ask questions in natural language,

  • request tailored insights,

  • generate actions on demand,

  • explore multiple scenarios instantly,

  • or receive proactive recommendations before they even start searching.


The UI is no longer a static map. It becomes a living, adaptive partner that reorganizes itself in response to user intent. For example, instead of choosing filters to gain service insights, users can simply ask, “Show me the anomalies from yesterday.”
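
As an illustration of what happens behind such a request, a natural-language question might be translated into the same structured filter the old UI would have required. This is only a naive sketch; the function and field names are hypothetical and not part of our actual implementation.

```python
from datetime import date, timedelta


def handle_request(utterance: str) -> dict:
    """Naive intent matching: map a natural-language request
    to the structured filter the legacy screens would have required."""
    text = utterance.lower()
    filters: dict = {}
    if "anomal" in text:                 # "anomaly", "anomalies", ...
        filters["status"] = "anomaly"
    if "yesterday" in text:
        filters["service_date"] = date.today() - timedelta(days=1)
    return filters


# "Show me the anomalies from yesterday" becomes a filter object the
# backend can run directly, with no menus or filter panels involved.
print(handle_request("Show me the anomalies from yesterday"))
```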




In our design, for example, the focus shifted from optimizing navigation and content presentation, where users had to browse detailed information and manually reach a decision, to a new pattern in which the system analyzes the information in the background and presents a suggested decision directly in the interface.
This transition shifts the experience away from heavy manual data navigation and toward a more conversational, collaborative interaction between the user and the AI, enabling the system to handle the most repetitive and stressful tasks while the user focuses on validation and judgment.
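
A minimal sketch of this pattern, with hypothetical field names, might look like the following: instead of raw field data, the interface surfaces a pre-computed suggestion that the user simply confirms or overrides.

```python
from dataclasses import dataclass, field


@dataclass
class SuggestedDecision:
    """What the interface surfaces instead of the raw field data."""
    service_id: str
    suggested_result: str                # "pass" or "fail"
    confidence: float                    # model confidence, 0.0 to 1.0
    rationale: list[str] = field(default_factory=list)  # human-readable reasons


def render_card(d: SuggestedDecision) -> str:
    # The user sees the suggestion and its reasons, then validates or overrides it.
    reasons = "; ".join(d.rationale) or "no issues detected"
    return (f"Service {d.service_id}: suggested {d.suggested_result.upper()} "
            f"({d.confidence:.0%} confident) because: {reasons}")


print(render_card(SuggestedDecision(
    service_id="SVC-1042",
    suggested_result="fail",
    confidence=0.87,
    rationale=["missing clock-out record", "GPS trace outside service area"],
)))
```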




This kind of transition calls into question many of our assumptions as designers. What does “navigation” mean when the system can retrieve any information on demand? Do workflows need to be linear when the interface can flex to each user’s goals? What is the role of layout when interactions happen through conversation?




We are not just designing screens anymore.




We are designing collaborative intelligence.




3. Think of AI as a Coworker: Design a Good Model for Human-AI Symbiosis



AI agents push this transformation even further.



These systems do not simply answer questions; they observe, anticipate, and act. They resemble coworkers more than tools. For this reason, the key to a good human-AI symbiosis probably lies in the interaction design, which plays a fundamental role in how information is exchanged and perceived on both sides.




During our design process, this topic raised psychological and behavioral questions:



  • Will users trust the judgment of a digital colleague that seems capable but is not human?

  • Will they feel monitored or supported?

  • Will they sense empowerment or displacement?

  • How will accountability for the reviews shift when decisions are co-created with an algorithm?


AI agents occupy a curious space: not alive, yet conversational; not human, yet collaborative.
Users begin to attribute personality, reliability, and even moral expectations to them.
As designers, we have a responsibility to craft these interactions with care, ensuring transparency, setting boundaries, and reinforcing human agency. AI should elevate people, not overshadow them.





To visualize this interaction in detail, I chose the DECAI model as an example of how an AI system, a human user, and an interface can work together as a cycle:

Image from: Characterizing and modeling harms from interactions with design patterns in AI interfaces https://arxiv.org/pdf/2404.11370

To better understand this concept, think of it like a conversation loop with three players:




The AI Block: “The Brain”


  • This is where the AI thinks, analyzes, and generates answers or actions.

  • It receives information, processes it, and produces an output.

Example:
 The AI decides, “I think this document should be approved.”




The User Block: “The Human”


  • The person interacts with the system by providing input and responding to what the AI displays.

  • The user sees the AI’s output (through the interface), and then provides new instructions, corrections, or feedback.

Example:
 The user says, “Why did you approve this?” or “Reject it instead.”




The Interface Block: “The Translator / Middle Layer”


The interface sits between the user and the AI. It has two roles:


  • Actuator: It shows the AI’s output to the user, transforming the raw result into something the human can understand: a result, a message, a visual, a notification, a recommendation.

  • Sensor: It captures the user’s input and sends it back to the AI, collecting responses such as clicks, button presses, prompts, corrections, approvals, and questions.

The interface is like a translator that turns AI decisions into usable UI, and user actions into something the AI can learn from.
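
To make the cycle a little more tangible, here is a minimal sketch of one turn of that loop in Python. It only mirrors the shape of the DECAI cycle described above; the record fields and messages are hypothetical.

```python
def ai_block(record: dict) -> dict:
    """'The Brain': analyzes a record and produces a decision."""
    if record.get("complete"):
        return {"decision": "approve", "reason": "all required fields present"}
    return {"decision": "reject", "reason": "missing fields"}


def interface_actuator(output: dict) -> str:
    """Actuator role: turn raw AI output into something the human can read."""
    return f"Suggested: {output['decision']} ({output['reason']})"


def interface_sensor(user_action: str) -> dict:
    """Sensor role: turn the user's response into feedback the system can use."""
    return {"accepted": user_action == "confirm", "correction": user_action}


# One turn of the loop: AI -> interface -> user -> interface -> feedback.
output = ai_block({"complete": True})
print(interface_actuator(output))       # shown to the human
feedback = interface_sensor("confirm")  # captured from the human
print(feedback)                         # fed back into the system
```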




4. Design for Uncertainty: New UX, New Challenges



Unlike traditional software, AI is not deterministic; it is probabilistic. It reasons. It interprets. It adapts. It sometimes gets things wrong.
In our case, it may seem that reducing the entire review experience to a single window with a suggested decision would simplify the UX and reduce the complexity of the design effort. In reality, removing the whole manual process introduced an entirely new set of challenges we needed to consider in order to make the user interaction comfortable:




  • Explainability: Why did the AI suggest this?

  • Trust calibration: When should users rely on the AI, and when should they question it?

  • Control vs. automation: How do users stay empowered when the system acts autonomously?

  • Error recovery: How do we help users correct or refine AI outputs?

  • Dynamic interfaces: How do we design for experiences that evolve with context and learning?


Design becomes less about perfect predictability and more about creating safe and transparent interactions between humans and intelligent systems. 




The most important thing to consider in each design decision is to always keep the human in the loop.
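
One concrete way to keep the human in the loop is to route each suggestion by confidence, so that autonomy is earned case by case. The thresholds and wording below are purely illustrative, not the rules we shipped.

```python
def route_decision(confidence: float, high: float = 0.95, low: float = 0.60) -> str:
    """Illustrative confidence-based routing for a suggested decision."""
    if confidence >= high:
        # Very confident: the UI can offer one-click confirmation.
        return "fast-track: show suggestion for one-click confirmation"
    if confidence >= low:
        # Uncertain: show the suggestion together with its evidence.
        return "flag: show suggestion with full evidence for human review"
    # Low confidence: fall back to the original manual review flow.
    return "fallback: route to the manual review workflow"


for c in (0.98, 0.75, 0.40):
    print(c, "->", route_decision(c))
```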


Image from: Human-AI interaction research agenda: A user-centered perspective https://www.sciencedirect.com/science/article/pii/S2543925124000147?utm_source=chatgpt.com



5. Implementing AI Is Not a Single Launch; It Is a Journey


Despite the excitement over the significant advantages, after careful consideration, we all agreed that introducing AI into a complex application is not a single release, but a progressive transformation that unfolds in stages, driven by both technical and UX reasons.




In our project, we decided that the most effective approach begins with an initial assistive phase, where the AI supports user decisions rather than replacing them.

At this point, the system offers suggestions about possible decisions, summaries, or insights, while humans retain full control.


This stage is essential for building familiarity and trust: users need to understand how the AI thinks, why it makes certain recommendations, and how to easily correct or override its output.




OutSystems


A key enabler in this early phase was our use of the OutSystems platform.

Its rapid prototyping capabilities allowed us to experiment quickly with new interaction patterns and iterate at high speed. Because we were able to validate concepts and flows so efficiently, we could dedicate a significantly larger portion of our time to deep analysis, experience design, and fine-tuning the human–AI collaboration model.





Once this confidence is established, the future implementation can evolve into a transitional collaboration phase, where users and AI share responsibility for the ongoing process. Here, the AI begins to automate repetitive, rule-based tasks, while users shift into the role of auditors reviewing exceptions, supervising outcomes, and fine-tuning system behavior. This shift not only reshapes workflows but also reduces cognitive load, allowing users to focus on making meaningful, high-value decisions.



Ultimately, as the AI proves its reliability, the product vision can progress toward a future autonomous phase in which the AI operates proactively and independently throughout the full review process, requiring human intervention only in ambiguous or critical scenarios.
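
A rollout like this can even be encoded as an explicit policy, so the degree of autonomy becomes a configuration choice rather than a redesign. The sketch below simply mirrors the three stages described above; the phase names and thresholds are illustrative.

```python
from enum import Enum


class RolloutPhase(Enum):
    ASSISTIVE = "assistive"          # AI suggests, the human decides everything
    COLLABORATIVE = "collaborative"  # AI handles routine cases, the human audits
    AUTONOMOUS = "autonomous"        # AI decides, the human handles exceptions


def requires_human_review(phase: RolloutPhase, confidence: float,
                          is_edge_case: bool) -> bool:
    """Illustrative policy: how much the human must review in each phase."""
    if phase is RolloutPhase.ASSISTIVE:
        return True                                  # every record is reviewed
    if phase is RolloutPhase.COLLABORATIVE:
        return is_edge_case or confidence < 0.9      # exceptions and low confidence
    return is_edge_case                              # only ambiguous/critical cases


print(requires_human_review(RolloutPhase.COLLABORATIVE, 0.97, is_edge_case=False))  # False
```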




This gradual evolution is key to user acceptance.


Trust grows when the AI is transparent, visible, and explainable; when users feel they remain in control; and when the benefits—time saved, errors reduced, work improved—are clearly demonstrated. Allowing users to correct the AI, monitor its performance, and understand its reasoning fosters a sense of ownership and partnership, promoting a collaborative approach.


Over time, this creates the foundation for a true human–AI symbiosis, where the technology handles the mechanical aspects of work and humans contribute creativity, judgment, and emotional intelligence.


Example of the AI implementation strategy we applied to our project: a schema that summarizes how the review workflow evolves in the short and long term.


Conclusions: A Future Shaped by Human Value


As we stand at the edge of this new paradigm, with millions of companies already starting to implement AI in their systems, one question becomes impossible to ignore:


How will AI redefine the way we interact with digital products and, ultimately, the way we work?




If automation increasingly takes over the mechanical, repetitive, and rule-based tasks, what remains is the deeply human: imagination, empathy, judgment, creativity, and connection. Whether AI frees us from work entirely, as some visionaries boldly predict, or simply reshapes our roles, the direction is unmistakable.

We are moving toward a future where digital systems don’t just respond to us, they collaborate with us, anticipate our needs, and amplify our capabilities.

The real opportunity is not in replacing human effort, but in elevating human value.

Perhaps the most exciting part is that we are only just at the beginning.
We are not designing the end of work.
We are designing the beginning of a new kind of partnership, one where humans and intelligent systems build the future together.




Your role won’t be to operate software anymore…


Your role will be to set the direction and constraints for an autonomous digital system.

 
 
 
