Risk Intelligence Agent

In high-pressure risk environments, users don't need more data — they need clarity and faster insights.

I led the UX design of an AI Agent embedded into the workflow, helping users surface critical risks, understand system logic, and make faster, more confident decisions.

How might we transform fragmented reading into intelligent decision support?

Launch: Deployed to financial clients in 2024, enhancing risk visibility and decision-making efficiency.

Role

Product Design

Team

1 product manager, 1 designer (me), 20+ engineers

Duration

3 months (design phase: 1 month)

Client

Internal enterprise risk management platform

Business Goal

Our team maintains a data-intensive risk control platform used by a wide range of commercial clients. As the platform matured — and as AI capabilities rapidly evolved — we saw a clear opportunity to push the product forward.

I was initially tasked with “adding an AI Agent” to the system. But as I explored the broader context, I realized this wasn’t just about embedding new technology — it was a chance to reimagine how the platform supports decision-making.

The goal became clear: move from a manual, search-heavy experience to an intelligent assistant that:

  • Reduces cognitive load

  • Detects early risk signals

  • Helps users make faster, more confident decisions


Design Challenge

Accuracy & Accountability

In a financial risk context, any AI-generated insight must be verifiable. Users needed clear traceability before they could trust the output.

Signal Overload

The system ingests large volumes of real-time, multi-source data. The Agent had to extract what matters — and do it without overwhelming users.

Initiative vs. Oversight

While the Agent could take action, users had to remain in control. It could assist — but never replace — critical human judgment.

Workflow Preservation

Our users had stable, regulated workflows. The Agent needed to fit seamlessly into existing review routines without requiring retraining or rethinking their process.

What should this Agent actually do?

And how should it behave in a sensitive, high-stakes workflow?

Research

To design an AI Agent that genuinely helps users make better risk decisions, I first needed to understand:

📑 How do users currently collect, filter, and interpret risk-related information?

⚖️ What would they trust an AI assistant to do — and what would they resist?

🧠 At which points do they feel overwhelmed or unsupported?

What users say & what their behavior reveals

💬 “There’s too much risk info every day, all in raw text. I’m constantly reading, comparing, scanning. It’s exhausting.”

📍 Insight: Users are overwhelmed by the volume and format of incoming information. They don’t need everything; they need help surfacing the right signals.

✏️ Design Implication: Prioritize extraction of high-value information.

💬 “I don’t know where the output came from. When writing a risk report, I can’t use something I can’t verify.”

📍 Insight: Users need to verify, trace, and edit the AI’s output, especially in high-stakes tasks.

✏️ Design Implication: Show source links and expose reasoning steps.

💬 “Don’t make me learn a new system. If it’s not embedded in my current workflow, I won’t bother.”

📍 Insight: In a high-pressure, regulated context, users rely on stable workflows; any tool that disrupts those patterns will be ignored.

✏️ Design Implication: Agent functionality must be embedded in the existing interface.

💡 I realized the Agent wasn’t just a feature; it had to be a quiet, trustworthy partner: one that reduced information overload, earned user confidence, and respected existing workflows.

Solution

Based on the insights surfaced from user research, I structured my solution around three key user needs: 

  1. Reducing cognitive overload 

  2. Building trust in AI-generated outputs 

  3. Embedding the Agent into existing workflows 

Design Principle 1: Help users cut through information overload 

🎯 Design Goal: Structure · Focus · Prioritize
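
To make “Prioritize” concrete, below is a minimal sketch of the kind of ranking logic this principle implies. The RiskItem fields and the weighting are hypothetical illustrations, not the platform’s actual scoring model.

```ts
// Hypothetical shape of an incoming risk item; fields are illustrative, not the platform schema.
interface RiskItem {
  id: string;
  summary: string;
  severity: 1 | 2 | 3 | 4 | 5;     // 5 = most critical
  publishedAt: Date;
  watchlistMatches: number;        // how many monitored entities the item touches
}

// Surface the most decision-relevant signals first by weighting
// severity, recency, and relevance to the user's watchlist.
function prioritize(items: RiskItem[], now: Date = new Date()): RiskItem[] {
  const score = (item: RiskItem): number => {
    const ageHours = (now.getTime() - item.publishedAt.getTime()) / 3_600_000;
    const recency = Math.max(0, 1 - ageHours / 72);  // decays to 0 over ~3 days
    return item.severity * 2 + recency * 3 + Math.min(item.watchlistMatches, 5);
  };
  return [...items].sort((a, b) => score(b) - score(a));
}
```

Ranking on a composite score like this is what lets the Agent present a short, ordered list of signals instead of the full raw feed.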

Design Principle 2: Design for Transparency & User Control

🎯 Design Goal: Transparent · Traceable · Editable

✏️ In the first version, I had already considered the need for explainability and control.

📍 However, during evaluation, I realized these elements were too coarse and loosely connected to the actual content users were reviewing.

💬 “I can see where it came from, but I still don’t know whether to trust this specific part.”

💡 To truly support user confidence, it wasn’t enough to “show the source.” Users needed to:

  • Verify each piece of information

  • Understand its significance and urgency

  • Adjust or regenerate content selectively

💬 “Now each signal is traceable, ranked, and editable, giving users full agency.”
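
In practice, “traceable, ranked, and editable” maps roughly to a per-signal data model like the sketch below. The interface and field names (AgentSignal, SourceReference, reviewState) are hypothetical stand-ins, not the platform’s real schema.

```ts
// Illustrative data model for one AI-surfaced signal; names are assumptions, not the real schema.
interface SourceReference {
  documentId: string;
  excerpt: string;   // the passage the claim was drawn from
  url?: string;      // deep link back to the original document
}

type ReviewState = "unreviewed" | "verified" | "edited" | "regenerated";

interface AgentSignal {
  id: string;
  statement: string;            // the AI-generated claim shown to the user
  sources: SourceReference[];   // every claim links back to its evidence
  rank: number;                 // position after prioritization (1 = most urgent)
  urgency: "low" | "medium" | "high";
  reviewState: ReviewState;
  editedStatement?: string;     // the user's correction, kept alongside the original
}

// Editing is per signal: the user can correct one item without regenerating the rest.
function applyEdit(signal: AgentSignal, newText: string): AgentSignal {
  return { ...signal, editedStatement: newText, reviewState: "edited" };
}
```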

Design Principle 3: Seamless Integration into Existing Workflows

🎯 Design Goal: Embed · Contextualize · Minimize Friction
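
As a rough illustration of what embedding means here, the sketch below shows an Agent panel that renders inside the existing case-review screen and stays scoped to the record the user already has open. The component and prop names are hypothetical, not the platform’s actual code.

```tsx
// Illustrative only: the Agent panel lives inside the existing review screen,
// scoped to the record the user is already working on.
import React from "react";

interface CaseRecord {
  caseId: string;
  entityName: string;
}

interface AgentPanelProps {
  record: CaseRecord;                          // the record open in the review screen
  onInsertIntoReport: (text: string) => void;  // pushes accepted output into the existing report draft
}

export function AgentPanel({ record, onInsertIntoReport }: AgentPanelProps) {
  // In the real flow these would come from the Agent service; hardcoded here for illustration.
  const signals = [`Elevated exposure flagged for ${record.entityName}`];

  return (
    <aside aria-label="Risk Intelligence Agent">
      <h4>Signals for {record.entityName}</h4>
      <ul>
        {signals.map((signal) => (
          <li key={signal}>
            {signal}{" "}
            <button onClick={() => onInsertIntoReport(signal)}>Insert into report</button>
          </li>
        ))}
      </ul>
    </aside>
  );
}
```

Because the panel sits inside the review screen and writes back into the existing report draft, users get assistance without switching tools or learning a new destination.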

With the solution in place, we started to see early signs of impact, both in how users interacted with the system and how they perceived the Agent.

Outcomes & Reflections

While formal tracking metrics were planned for a later stage, early user and stakeholder feedback pointed to meaningful improvements:


  • Users reported reduced cognitive load when processing daily risk information.

  • Workflow friction decreased — users could access AI assistance without leaving their primary task flow.

  • Trust in AI-generated outputs improved, supported by clearer sourcing and editing controls.

  • Users described the embedded Agent as a "quiet yet powerful presence" within the workflow.

💬

"It feels like the system is thinking with me, not just for me."
— Internal feedback

Takeaway

What I learned

  • 🛠️ Embedding support means designing with user habits, not against them.

  • 🔍 Transparency and editability are not optional in high-stakes environments — they are foundations of trust.

  • ✨ The best assistance is often quiet, contextual, and available exactly when needed, with minimal effort required from the user.


Looking back, this experience reinforced for me that thoughtful AI design is not about dazzling users — it's about empowering them.