Moka Eva

Shipped a 0-to-1 AI recruiting platform in 4 months. Designed the calibration system and information architecture that enabled recruiter-AI collaboration, achieving 54% day-1 retention and an NPS of 29. Defined the agent's decision logic and edge cases, distilling foundational principles for AI agent design.

AI Hiring Agent

SaaS

Company

Moka HR

Role

Product Designer

Tools

Figma

Time

Sep 2025 - Jan 2026

PRODUCT

Moka Eva

Moka is a leading HR SaaS platform backed by $150M in funding, serving global enterprises like DJI and Tesla.

But the traditional ATS market is crowded, so Moka made a strategic move: build Eva AI, a lightweight, AI-first recruiting tool designed for independent recruiters and small agencies.

As the founding product designer, I delivered the end-to-end design of Eva from 0 to 1, turning a strategic vision into a shipped product in 4 months.

USER JOURNEY MAP

Recruiter is the bridge

Our core user is the recruiter, who sits between candidates and hiring managers throughout the hiring process. Their goal: find the best match.

But through mapping the full recruiting workflow and interviews, I uncovered a key insight: the problem isn't finding resumes, it's what comes after.

Recruiters struggle to translate vague hiring needs into clear criteria, lack confidence in identifying the right fit, and are drained by a slow, manual evaluation process.

Landscape Research

Understanding the AI Recruiting Landscape

To contextualize recruiter pain points, I also analyzed major AI recruiting tools. Most automate sourcing and screening, but lack calibration and meaningful feedback. Three key takeaways emerged:

  1. Automation dominates; none are designed for calibration.

  2. Feedback is binary or absent.

  3. AI acts for recruiters rather than with them.

These reveal opportunities for Moka Eva.

INSIGHT

Why it's hard to tell AI what a "good candidate" means

Our interviews revealed that hiring criteria aren't static inputs you can type into a search bar.

They're implicit: recruiters can't articulate what they want until they see it. They're evolving: standards shift as recruiters review real candidates. And they're multidimensional: a candidate who's 9/10 technically but 5/10 culturally can't be reduced to a simple yes or no.

Goal #1

How might we enable AI and Recruiter collaboration to find precise matches and improve matching efficiency?

Goal #1 Overview

3-Step AI Calibration System

Instead of asking recruiters to define their criteria upfront in a static input, I designed a 3-step AI calibration system where the recruiter and AI refine their understanding together.

Step 1

JD Analysis

The recruiter starts by entering a job description. The AI extracts requirements and generates a structured hiring doc.

But two questions emerged: how should the recruiter refine what AI generates, and what format should the document take?

For prompt refinement, a blank input was too vague: users didn't know what to write. I landed on a magic wand approach where users edit freely while AI assists.

For the document format, tabs scattered information and forms were too rigid, so I chose an editable doc that felt natural, scannable, and easy to edit inline.

Step 2

Interactive Calibration

Then, the AI searches and presents candidates one by one. The recruiter accepts or rejects, and with each decision the AI learns their implicit criteria, which the recruiter themselves is still discovering.

I explored two approaches: showing 10 candidates and letting the recruiter pick 3, or presenting candidates one by one for individual evaluation.

Showing all 10 overwhelmed users and led to shallow comparisons. One-by-one forced deeper evaluation, and each accept or reject became a learning signal for the AI. If the recruiter repeatedly says no, the agent prompts, 'Tell me why, what's missing?', turning rejection into richer training data.
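The accept/reject loop described above can be pictured as a simple preference-weight update. This is a minimal sketch, not Eva's actual model: the criteria names, scoring function, and learning rate are all illustrative assumptions.

```python
# Hypothetical sketch of the calibration loop: each accept/reject nudges
# per-criterion weights, so the agent's ranking drifts toward the
# recruiter's implicit preferences. All names here are illustrative.

CRITERIA = ["technical", "domain", "culture"]

def score(candidate, weights):
    """Weighted sum of per-criterion scores (each 0-1)."""
    return sum(weights[c] * candidate[c] for c in CRITERIA)

def calibrate(candidate, accepted, weights, lr=0.1):
    """Shift weight toward (or away from) what this candidate is strong in."""
    direction = 1 if accepted else -1
    for c in CRITERIA:
        weights[c] = max(0.0, weights[c] + direction * lr * candidate[c])
    total = sum(weights.values()) or 1.0
    return {c: w / total for c, w in weights.items()}  # renormalize to 1

weights = {c: 1 / len(CRITERIA) for c in CRITERIA}
# Recruiter keeps accepting technically strong candidates:
weights = calibrate({"technical": 0.9, "domain": 0.3, "culture": 0.4}, True, weights)
weights = calibrate({"technical": 0.8, "domain": 0.2, "culture": 0.5}, True, weights)
# "technical" now carries the largest weight, mirroring the implicit signal.
```

The point of the sketch is the design idea, not the math: every lightweight interaction doubles as training data, so the recruiter never has to articulate their criteria explicitly.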

Step 3

Multiple Feedback Mechanisms

After calibration, the recruiter moves to reviewing the full candidate list. But how do they continue giving feedback without it feeling tedious?

I explored four approaches. Binary yes/no feedback was too simplistic and lost nuance. A magic comment model confused users with an unclear interaction pattern. Drag-and-drop cards into chat felt clunky and tedious.

I landed on the fourth: AI-initiated suggestions paired with chat. The agent proactively asks targeted questions, guiding users while capturing rich feedback naturally.

Goal #1 Learning

Better matches through mutual learning

This 3-step calibration system works on two levels.

For the AI, it gradually learns user intent: match rate improved from 30% to 75% as each round of feedback trains the model.

For the user, it acts as scaffolding: instead of forcing recruiters to define perfect criteria upfront, they discover what they actually want by reviewing real candidates. The result is better AI candidate output: more accurate recommendations and stronger matches.

But finding the right candidates is only the first step. Recruiters still need to evaluate them quickly and act at scale: that's where Goal 2 begins.

Goal #2

How might we help recruiters make faster, more confident hiring decisions?

Goal #2 Overview

AI-driven Information Design

The calibration system helps the AI find better candidates, but recruiters still need to evaluate them quickly and with confidence.

Our research showed recruiters previously spent 10 minutes per candidate, manually cross-referencing resumes against requirements. The challenge: how do I surface enough information for confident decisions without overwhelming the user?

Exploration #1

Split screen over full-screen chat

I first explored a full-screen chat-based layout, but information felt fragmented and difficult to act on.

The split-screen design won: chat on the left, candidate details on the right. One user said, 'It feels like pinning important candidates on a wall. I can reference them while chatting with AI.'

Exploration #2

Candidate profile cards

With the layout decided, how should each candidate's information be presented?

I explored three approaches. A five-star rating system felt meaningless; users said, 'Star ratings don't help me understand why someone is a good fit.' An expanded view was overly dense. So I removed the star rating and focused on qualification match highlights and AI-generated summaries: concise, helpful, and actionable.

IMPACT

Big breakthrough

Finally, Eva shipped in 4 months. The calibration system closed the gap between what recruiters wanted and what the AI delivered: users went from rejecting most candidates to finding strong matches within 2-3 calibration rounds. The information design cut candidate evaluation time from 10 minutes to 2-3 minutes, freeing recruiters to focus on judgment instead of data processing and boosting decision confidence.

Learnings & Next step

Effective AI products aren't about automation. They are about designing the collaboration between human judgment and machine intelligence.

REFLECTIONS

4 principles for AI agent design

Through designing Eva, I synthesized four AI design principles:

NEXT STEPS

Bias handling & AI Outreach

AI in recruitment is highly sensitive, and clearer mechanisms for bias detection and mitigation are needed. For instance, if the AI's recommendations skew heavily toward male candidates, the system should proactively alert the user.
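A proactive skew alert like the one proposed above could be as simple as checking whether any single group dominates the recommendation list. This is an illustrative sketch only; the monitored attribute, the 70% threshold, and the message format are assumptions, not a specified Eva feature.

```python
from collections import Counter

def skew_alert(recommendations, attribute="gender", threshold=0.7):
    """Return a warning string when one group exceeds `threshold` of
    recommendations, else None. Attribute and threshold are illustrative
    assumptions, not Eva's actual implementation."""
    counts = Counter(r[attribute] for r in recommendations)
    total = sum(counts.values())
    for group, n in counts.items():
        share = n / total
        if share > threshold:
            return (f"{share:.0%} of recommendations are '{group}'; "
                    "review hiring criteria for possible bias.")
    return None

# 8 of 10 recommendations are male: 80% exceeds the 70% threshold.
recs = [{"gender": "male"}] * 8 + [{"gender": "female"}] * 2
alert = skew_alert(recs)
```

In practice the check would run continuously in the background and surface the alert in the agent's chat, turning bias monitoring into the same kind of proactive prompt the calibration system already uses.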

AI Outreach remains an open area for further exploration.

Logo
Stay connected and let's build something great together.