Gooey.AI Impact Dashboard

Gooey.AI Impact Dashboard is a web-based tool designed to help nonprofits and social enterprises measure and communicate their impact using the Theory of Change framework.

This project combined UX research and design to translate complex chatbot performance metrics into meaningful outcome measures. Through interviews, visualization testing, and usability studies, we created a clear, actionable dashboard for impact organizations.

My Role: UX Researcher & Designer

Timeline: 14 weeks (team project)

Deliverables: Interviews, visualization testing, heuristic evaluation, usability study

View in Figma

Problem & Audience

Problem

Gooey.AI partners with organizations that use conversational AI to support social impact work in fields like education, agriculture, and public health. These organizations collect valuable data from chatbots but often struggle to connect those interactions to real-world outcomes.

Existing analytics tools emphasize technical performance metrics but offer little insight into why outcomes happen or how AI-driven interactions contribute to long-term goals.

Audience

Program managers, evaluation specialists, and funders who rely on chatbot data to make decisions and demonstrate impact.

Goal & Research Questions

Can we turn messy chatbot signals into clear, trustworthy, and actionable impact insights for program teams, analysts, and funders?

Methods & Recruitment

Approach: a mixed-methods sequence (secondary research → stakeholder interviews → synthesis → concept exploration → prototype testing and heuristic evaluation) to validate design choices and measure usability, trust, and time-to-insight.

Participants:
8 practitioners recruited from Gooey.AI partner organizations and US government aid agencies: nonprofit program/product managers, M&E specialists, and data/visualization experts.

Recruitment prioritized people who regularly make program decisions from chatbot or program analytics.

What we did (activities)

  • Discovery interviews (45–60 min, n=8) — semi-structured: decision workflows, evidence needs, report use cases. Example: “Tell me about the last time you changed a program because of dashboard data.”

  • Synthesis workshops — affinity mapping and journey mapping to surface top pain points, terminology gaps, and opportunities for automation.

  • Concept testing (n=6 preference sessions) — three distinct layout patterns (linear, accordion, sidebar) presented as clickable mid-fi prototypes; participants ranked preferences and explained tradeoffs.

  • Moderated usability testing (n=6) — scenario-based tasks on the chosen prototype measuring task success, time-to-insight, and perceived ease. We used SUS and SEQ to capture perceived usability and per-task difficulty; a scoring sketch follows this list.

  • Heuristic review & prioritization — crosswalked usability findings with product constraints to create a prioritized backlog of fixes.
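To make the questionnaire scoring concrete, here is a minimal Python sketch of the standard SUS and SEQ scoring rules. The participant responses in it are hypothetical placeholders, not our study data.

```python
from statistics import mean, median

def sus_score(responses: list[int]) -> float:
    """Convert ten 1-5 SUS item responses into a 0-100 score.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical responses from six usability-test participants
participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [5, 1, 4, 2, 4, 1, 4, 2, 5, 2],
    [3, 2, 4, 2, 3, 3, 4, 2, 4, 2],
    [4, 1, 5, 1, 5, 2, 5, 1, 4, 1],
    [4, 2, 4, 2, 4, 2, 4, 3, 3, 2],
    [5, 2, 4, 1, 4, 2, 5, 2, 4, 2],
]
scores = [sus_score(p) for p in participants]
print(f"Mean SUS: {mean(scores):.1f}")  # >68 is conventionally above average

# SEQ: one 7-point ease rating per task; commonly reported as a median
seq_ratings_task1 = [6, 7, 5, 6, 6, 7]  # hypothetical
print(f"Task 1 median SEQ: {median(seq_ratings_task1)}")
```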

Analysis approach

  • Thematic coding of interview transcripts → affinity clusters → prioritized insights.

  • Quantitative triangulation: task success and time metrics were used to validate that design changes reduced cognitive load and sped up decision making (see the metrics sketch after this list).

  • Convergence rule: design changes were prioritized if they improved a quantitative measure or solved a top qualitative pain point repeatedly cited by participants.
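As a rough illustration of the triangulation step, the sketch below computes a task success rate and median time-to-insight from per-trial records. The trial data, task name, and record layout are hypothetical; only the metric definitions follow the approach described above.

```python
from statistics import median

# Each record: (participant, task, completed?, seconds to first correct insight)
trials = [
    ("P1", "find_dropoff", True, 95), ("P2", "find_dropoff", True, 80),
    ("P3", "find_dropoff", False, None), ("P4", "find_dropoff", True, 110),
    ("P5", "find_dropoff", True, 70), ("P6", "find_dropoff", True, 88),
]

def summarize(task: str) -> tuple[float, float]:
    """Return (success rate, median time-to-insight) for one task."""
    rows = [t for t in trials if t[1] == task]
    successes = [t for t in rows if t[2]]
    success_rate = len(successes) / len(rows)
    time_to_insight = median(t[3] for t in successes)  # successful trials only
    return success_rate, time_to_insight

rate, tti = summarize("find_dropoff")
print(f"success={rate:.0%}, median time-to-insight={tti}s")
```

Comparing these two numbers across prototype iterations was how the convergence rule above decided whether a design change had earned its place in the backlog.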
