UX for GEN AI

Building a personalised AI solution to eliminate redundant tasks & boost productivity

Duration

Sept 2024 - Dec 2024

My Role

Extensive UX Research, UX Writing, Iterating designs based on feedback, Design Handoff to Developers

Team

2 UX Designers, 2 Product Managers

IMPACT

Currently in Development

The project is actively being developed, with key features being refined.

Potential Increased Adoption Rate

We foresee a 40% increase in adoption rate, driven by the improvements in usability.

OVERVIEW

It's a company-wide, LLM-based chat system that allows employees to interact with AI for a wide range of use cases.

It answers company-specific questions, assists with documentation, synthesizes reports, and more. To make these interactions more efficient, it also supports the creation of persistent, task-specific assistants that help employees maintain continuity in their AI workflows.
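
To make the idea of a persistent assistant concrete, here is a minimal sketch of what such a definition could look like as a data structure. The field names are illustrative assumptions for this sketch, not the actual production schema.

```ts
// Hypothetical shape of a persistent, task-specific assistant definition.
// Field names are illustrative, not the real schema.
interface AssistantDefinition {
  id: string;
  name: string;               // e.g. "Weekly Report Synthesizer"
  purpose: string;            // what the assistant is for, in the user's words
  systemPrompt: string;       // scaffolded instructions sent to the LLM
  knowledgeSources: string[]; // company docs or datasets it can reference
  createdBy: string;          // employee who owns and can reuse it
  createdAt: Date;
}

// Once created, the same definition can be reused across conversations,
// so employees don't re-explain their task every time.
const reportAssistant: AssistantDefinition = {
  id: "asst-001",
  name: "Weekly Report Synthesizer",
  purpose: "Summarize team updates into a weekly status report",
  systemPrompt: "You synthesize team updates into a concise weekly report...",
  knowledgeSources: ["team-updates", "report-template"],
  createdBy: "employee-42",
  createdAt: new Date(),
};
```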

PROCESS

To address this, we followed a structured design process over the course of three months.

GOAL

We set out to design a buildable assistant framework: a way for users to…

♻️

Create Once, Re-use Always

Create assistants once and reuse them continuously.

🎚️

Choose your Path

Choose between conversational or form-based creation.

🧪

Test & Refine

Test and iterate on prompts before finalizing.

We believed a dual-path creation model (guided vs manual) with persistent memory would unlock long-term value and deepen user trust in AI.

RESEARCH

To understand expectations better and shape our direction, we:

Interviewed 10+ employees across 5 teams

Conducted moderated usability tests with the existing builder

Benchmarked tools like ChatGPT, ClaudeAI, Perplexity, Cohere, Postgres, HuggingChat

Held co-design workshops focused on flow + clarity

INSIGHTS

From 10+ users across 5+ departments, and the rest of our research, we identified the following themes:

🎯

Most platforms didn’t offer persistent assistant creation.

🤖

“Bot as builder” was a promising pattern, but often unsupported.

💬

Form experiences worked better when labels felt like questions.

🔍

Users needed clarity, not just control.

Following our research and co-design workshops, we generated a wide range of ideas which we needed to strategically prioritize.

To move forward, we needed to identify what to build first. We conducted a 3-axis prioritization exercise, scoring each idea based on:

  1. User Impact: How much value it would bring to employees.

  2. Design Effort: Time and complexity to prototype and test.

  3. Development Effort: Engineering cost and feasibility.

This helped us map ideas into quick wins vs. long-term investments.

Idea Prioritization based on Scoring User Impact, Design & Dev Effort
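
As a rough illustration of how this kind of scoring can work, here is a small sketch. The 1–5 scales, the weighting, and the example ideas are assumptions for demonstration, not our exact rubric.

```ts
// Minimal sketch of a 3-axis prioritization score.
// Scales and weighting are illustrative only.
interface Idea {
  name: string;
  userImpact: number;   // 1 (low) – 5 (high)
  designEffort: number; // 1 (low) – 5 (high)
  devEffort: number;    // 1 (low) – 5 (high)
}

// Higher impact and lower effort push an idea toward "quick win".
function priorityScore(idea: Idea): number {
  return idea.userImpact * 2 - (idea.designEffort + idea.devEffort);
}

const ideas: Idea[] = [
  { name: "Starter prompt templates", userImpact: 5, designEffort: 2, devEffort: 2 },
  { name: "Agentic assistant model", userImpact: 4, designEffort: 5, devEffort: 5 },
];

// Sort so quick wins surface first, long-term investments last.
ideas.sort((a, b) => priorityScore(b) - priorityScore(a));
```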

SOLUTION

Translating Insights into an Intuitive Assistant-Building Experience

Flowchart for Creating Assistant

Final Design for Creating Assistant

KEY DESIGN DECISIONS

Our goal was to reduce friction and cognitive load without sacrificing flexibility, so we made several key decisions to deliver clarity at every step:

Switchable paths
⦿ Some users preferred guidance; others wanted control. So we allowed seamless switching between bot-led and form-led creation, without loss of progress.
⦿ This met users where they were and respected their comfort levels.
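
A minimal sketch of the idea behind "without loss of progress": both paths read and write a single shared draft, so switching modes never resets it. Names and structure here are illustrative, not the production implementation.

```ts
// Both creation paths share one draft; switching modes keeps it intact.
type CreationMode = "bot-led" | "form-led";

interface AssistantDraft {
  name?: string;
  purpose?: string;
  instructions?: string;
}

class CreationFlow {
  private mode: CreationMode = "bot-led";
  private draft: AssistantDraft = {};

  // The conversational flow and the form call the same updater.
  update(fields: Partial<AssistantDraft>): void {
    this.draft = { ...this.draft, ...fields };
  }

  // Switching only swaps the UI shell; the draft carries over as-is.
  switchMode(next: CreationMode): { mode: CreationMode; draft: AssistantDraft } {
    this.mode = next;
    return { mode: this.mode, draft: this.draft };
  }
}
```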

Prompt scaffolding with starter templates
⦿ We introduced structured prompts with pre-filled starter text to help users articulate what they wanted their assistant to do.
⦿ This removed ambiguity and gave the LLM a better foundation to work from, resulting in clearer, more grounded outputs.
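
To illustrate, a starter template could be modeled as a handful of structured, pre-filled fields that get composed into the system prompt. The fields and wording below are assumptions for this sketch.

```ts
// Illustrative prompt scaffold: structured fields with starter text the
// user edits, composed into the system prompt sent to the LLM.
interface StarterTemplate {
  role: string;        // who the assistant acts as
  task: string;        // what it should do
  tone: string;        // how it should respond
  constraints: string; // what it should avoid
}

const reportWriterStarter: StarterTemplate = {
  role: "You are an assistant that drafts internal status reports.",
  task: "Summarize the updates I paste into a short weekly report.",
  tone: "Keep the tone factual and concise.",
  constraints: "Do not invent numbers that are not in the source updates.",
};

// Composing filled-in fields gives the LLM a clearer, more grounded prompt
// than a single empty free-text box would.
function composeSystemPrompt(t: StarterTemplate): string {
  return [t.role, t.task, t.tone, t.constraints].join("\n");
}
```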

Preview first, not after submitting
⦿ Rather than asking users to “submit and hope,” we enabled live, inline previews of assistant responses.
⦿ This helped users test prompts incrementally, building trust through visibility and reducing fear of getting it wrong.
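
Conceptually, the preview runs the draft prompt against the model without saving anything. The sketch below assumes a generic chat-model call passed in as a placeholder rather than any specific API.

```ts
// Preview-first interaction: exercise the draft prompt before saving,
// so users can check what the assistant "understood".
// `callChatModel` is a placeholder for whatever chat endpoint the
// platform exposes, not a real API.
async function previewAssistant(
  draftSystemPrompt: string,
  sampleUserMessage: string,
  callChatModel: (system: string, user: string) => Promise<string>
): Promise<string> {
  // Nothing is persisted here; the draft is only run for feedback.
  return callChatModel(draftSystemPrompt, sampleUserMessage);
}

// Usage: the user tweaks the prompt, previews again, and only saves
// once the response looks right.
```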

Progressive disclosure of capabilities
⦿ In early versions, users couldn’t find advanced settings. We redesigned this to surface capabilities contextually.
⦿ This approach kept the UI clean for novices but still powerful for advanced users.

CHALLENGES & COLLABORATION

While we aimed to restructure the assistant flow for long-term scalability, we received pushback from the business.

We were working toward reorganizing the information architecture and making space for advanced capabilities, but the sprint was focused on shipping refinements for immediate rollout.

So we held off on the foundational improvements we felt were critical for scale. Still, we optimized what we could, ensuring the experience remained clear, flexible, and ready for future evolution.

NEXT STEPS

I would have loved to explore more intelligent and proactive capabilities that go beyond static assistant creation:

Suggesting assistant creation based on frequently repeated queries, offering help before users even ask (sketched below).

Moving towards an Agentic Assistant model, where assistants can pull in context and respond proactively.
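
A rough sketch of how the first idea could work: count repeated queries and surface a suggestion once a threshold is crossed. The normalization and threshold here are placeholders, not a designed algorithm.

```ts
// Spot frequently repeated queries and suggest turning them into an assistant.
function suggestAssistantFrom(queries: string[], threshold = 3): string | null {
  const counts = new Map<string, number>();
  for (const q of queries) {
    // Naive normalization stands in for real similarity matching.
    const key = q.trim().toLowerCase();
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  for (const [query, count] of counts) {
    if (count >= threshold) {
      return `You've asked "${query}" ${count} times — create an assistant for it?`;
    }
  }
  return null;
}
```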

LEARNINGS & TAKEAWAYS

How is this project helping me grow as a designer?

01

Prompt Design is UX too

Designing structured, supportive prompts isn’t just about backend performance. It's about empowering users to communicate clearly and get reliable outcomes.

02

Users Need Trust Cues

AI outputs aren’t always predictable, so we designed for reassurance. Real-time previews helped users see what the assistant “understood,” and fallback options ensured users never felt stuck or misled by the system.

©2025 Sakshi Sonawani 🤍
