
All designs on this website were created by Aris. Some projects are owned by the respective companies I have worked with, and some are conceptual pieces that may not represent real products.

© 2025 Aris. All rights reserved.

Symmetry RAG Chat
Product Designer & Developer · Zaigo Labs
2025

At a Glance

• Role: Product Designer (End-to-end UX/UI)
• Duration: 6 weeks
• Platform: Web (Desktop-first, Responsive)
• Tools: Figma, Next.js 16, Tailwind CSS v4, shadcn/ui, Recharts

About Symmetry

I designed the end-to-end UX and UI for Symmetry, a healthcare operations platform built for anesthesia departments. The platform helps department leads, medical directors, and operations managers see how clinician workload is distributed, where OR block utilization is falling short, and where proactive intervention can recover lost capacity. Before Symmetry, most groups relied on disconnected spreadsheets, tribal knowledge, and reactive firefighting to manage these decisions.

The Problem

Anesthesia departments operate in a high-stakes environment where workload imbalances, underutilized OR blocks, and staffing gaps directly impact clinician burnout, patient outcomes, and facility revenue. Most groups manage this with disconnected spreadsheets, monthly reports that arrive too late to act on, and informal conversations between schedulers and department leads.

The result is predictable. The same clinicians get overloaded while others are underutilized. OR blocks sit empty because no one flagged the risk early enough. Recruiting decisions happen reactively after someone leaves rather than proactively based on demand signals. Symmetry already had the data infrastructure to surface these patterns. The question became: how do we turn raw operational data into a platform that lets anesthesia leaders see problems before they happen, distribute work fairly, and make staffing decisions with confidence?

User Research

Our team worked directly with anesthesia department administrators and medical directors across multi-facility groups to understand the daily operational challenges. Those insights informed every design decision from information architecture through individual component design.

Research inputs included structured interviews with department leads about their scheduling and workload management workflows, a review of existing reporting tools and spreadsheets used by anesthesia groups, competitive analysis of healthcare workforce management platforms and OR scheduling tools, and a detailed breakdown of the decision-making process when OR blocks go underutilized or staffing gaps emerge.

Key findings:
• Department leads check workload distribution weekly but act on it monthly because the data is too hard to parse in real time
• OR block utilization below 70% represents significant lost revenue, but teams rarely know a block is at risk until the day of
• Fairness in shift distribution is the number one driver of clinician satisfaction and retention, yet most groups have no quantitative way to measure it
• Recruiting is entirely reactive. Groups start looking for new clinicians after someone leaves rather than anticipating demand based on utilization trends
• Pre-operative optimization flags exist in patient records but aren't surfaced proactively, leading to day-of cancellations and delays

Personas

Based on the research, I developed two primary personas representing the core user types.

Medical Director
• MD in their late 40s, oversees anesthesia operations across two facilities
• Manages a team of six clinicians (MDs and CRNAs), handles scheduling disputes, reports utilization metrics to hospital administration
• Main frustration: not knowing whether workload is distributed fairly until someone complains
• Wants a single dashboard showing team health, block risk, and staffing gaps at a glance
• Clinically focused with limited patience for complex interfaces

Operations Coordinator
• Department administrator in their early 30s, handles day-to-day scheduling, block management, and the recruiting pipeline
• Builds the weekly schedule, tracks OR utilization, manages the recruiting funnel when positions open
• Main frustration: spending hours in spreadsheets cross-referencing block schedules with clinician availability and utilization data
• Wants to flag at-risk blocks two weeks out and generate recruiting outreach without starting from scratch every time
• Comfortable with SaaS tools and adopts new platforms quickly

Design Process

From the research, I identified the operational loop that every anesthesia department follows: monitor workload distribution, identify utilization gaps, assess risk on upcoming blocks, optimize pre-op readiness, and staff appropriately. This five-part cycle became the product's backbone.

Feature prioritization:
• Must-haves: Workload fairness dashboard with real-time metrics, OR block utilization tracking with AI-powered optimization suggestions, block risk early-warning system, case management, and an AI assistant for natural language queries
• Should-haves: Pre-operative patient optimization, recruiting pipeline, trend analysis, clinician comparison views, and dark mode
• Could-haves: Team collaboration features, direct calendar integration, automated block release workflows, and a mobile experience

Information architecture:
• Workspace: the core operational tools including Cases, Pre-op Optimization, OR Utilization, Talent Attraction, Workload Overview, Trends, Comparison, and Notifications
• Proactive Intelligence: the forward-looking layer with Block Risk dashboard, Early Warnings, and Insights
• AI Chat: the conversational interface for querying any platform data in natural language

Key Screens

The following screens represent the core feature areas I designed for Symmetry.

Dashboard

The landing experience after login surfaces four KPI cards with sparkline trends showing team fairness index, active alerts, clinical hours, and utilization rate. Below that, a team overview table gives per-clinician workload snapshots with fairness scoring. The design principle was zero-click insight: every metric that matters should be visible without drilling down.

Workload Fairness

This is the screen that delivers on the product's name. Each clinician's workload is broken down across shift types (OR, call weekday, call weekend, admin), after-hours workload, high-acuity case distribution, and top-of-license ratio. A composite fairness index quantifies balance across the team. I chose a data-dense table layout with inline sparklines over a chart-first approach because department leads need to compare specific numbers across clinicians, not interpret visual patterns.

Block Risk Early-Warning

The newest and most technically complex feature. This system scores every upcoming OR block on a 0 to 100 risk scale using seven weighted drivers: no cases scheduled, low fill rate, repeated unused history, demand mismatch, historical patterns, scheduling patterns, and time proximity. I used non-judgmental language throughout, choosing "lower," "moderate," and "higher" instead of "low," "medium," "high" because labeling a surgeon's block as "high risk" creates political friction. The three-tab structure separates the assessment dashboard, active early warnings, and retrospective insights so users can monitor, act, and learn in distinct contexts.

OR Utilization

A filterable table of every OR block across facilities showing scheduled hours versus used hours, utilization percentage, and status. The target band of 70 to 80% is the key design anchor. Blocks below 70% surface AI suggestions for optimization: release hours back to the pool, schedule an add-on case, or swap with another block. Each suggestion includes an impact estimate so coordinators can prioritize. The detail panel slides in from the right with full action history, preserving the user's position in the main table.

AI Chat

A conversational interface where users query platform data in plain English. "How is Dr. Reyes's workload compared to Dr. Patel this month?" returns a structured comparison with citations linking back to the source screens. I designed quick-prompt suggestions for common queries and an auto-complete system that recognizes clinician names and metric types. Citations are first-class. Every AI response includes linked references so users can verify the data behind the answer.

Recruiting Pipeline

Tracks prospective clinicians (MDs and CRNAs) through a status pipeline: not contacted, contacted, in conversation, declined. Each target has interaction history, AI-generated outreach drafts with facility-specific selling points, and source tracking. The design mirrors CRM patterns that operations coordinators already understand: pipeline stages, interaction logging, and bulk import via CSV.

Pre-op Optimization

Surfaces patient readiness scores (0 to 100) with risk flags for anemia, frailty, diabetes, and smoking. Each flag maps to a specific action plan (IV iron, nutrition program, glucose optimization, pulmonary prehab). The design decision was to front-load risk flags as colored badges in the table view so coordinators can triage without opening individual records. Status tracking (pending, in progress, ready, needs review) lets teams manage interventions as a workflow rather than a checklist.

Visual Design

The aesthetic direction is clinical and utilitarian. This is a tool for healthcare professionals managing high-volume operations, so the interface prioritizes density, scannability, and trust over visual flair.

• Typography: FK Grotesk in two weights (regular and medium). Geometric, professional, and reads cleanly at the small sizes required for data-heavy tables
• Color: OKLCH color space for perceptual uniformity. Every shade maintains consistent visual weight across light and dark modes. The palette is deliberately neutral with selective color reserved for status indicators and data visualization
• Components: Built on shadcn/ui with the radix-mira style variant for accessible, themeable primitives from day one. Five chart colors tuned for distinguishability at small sizes in both themes. Block risk levels, alert severities, and utilization bands each have consistent color coding across every screen
• Layout: Resizable three-panel architecture with sidebar (200-400px), main content (fluid), and detail panels (480-620px) so users control their own information density. Transitions disabled during drag operations to maintain 60fps responsiveness

User Flow

Medical Director Flow
Login → AI Chat (default landing) → "Show me this week's fairness breakdown" → Structured response with drill-down links → Workload Fairness screen → Clinician-level detail with composite index → Dashboard for team-wide overview

Operations Coordinator Flow
Login → Sidebar → OR Utilization → Filter by facility → Spot underutilized block (below 70%) → Review AI optimization suggestion → Release hours back to pool

Block Risk Weekly Review
Early Warnings → Scan active alerts (sorted by days until block) → Acknowledge or resolve each alert → Monthly: check Insights → Review prediction accuracy → Calibrate trust in the system

Recruiting Flow
Workload Overview (identify staffing gap) → Talent Attraction → Review pipeline → AI-generated outreach draft → Customize per facility → Track status: Not Contacted → Contacted → In Conversation → Hired or Declined

Pre-op Optimization Flow
Cases list → Flag patients with readiness score below threshold → Review risk flags (anemia, frailty, diabetes, smoking) → Assign action plan → Track status: Pending → In Progress → Ready

Challenges and Solutions

Workload fairness is politically sensitive. Displaying raw numbers can create friction between clinicians. I solved this with a composite fairness index that normalizes across shift types, acuity, and hours. The index gives leadership a single defensible number for each clinician rather than raw metrics that invite cherry-picking.

Block risk language matters. Calling a surgeon's block "high risk" implies judgment about their practice. I adopted non-judgmental terminology throughout: "lower," "moderate," and "higher" risk levels paired with objective driver signals like "no cases scheduled within 14 days" rather than subjective assessments.

Information density across six distinct feature areas could easily overwhelm users. I used a consistent screen pattern throughout: KPI cards at the top for summary metrics, a filterable table as the primary workspace, and a slide-in detail panel for deep dives. This three-layer progressive disclosure keeps every feature learnable because the interaction model is identical.

AI responses in a clinical context need trust signals beyond what consumer AI products provide. Every AI response includes citation links to the source data, every block risk score shows its weighted drivers, and every optimization suggestion includes an impact estimate. The system shows its reasoning at every level.

Outcomes

• 9 major feature areas designed and built end-to-end
• 55+ UI primitives and 15+ custom composite components
• Resizable three-panel layout system supporting variable information density
• Full dark mode coverage with OKLCH perceptual uniformity
• Block risk early-warning surfaces at-risk blocks 2 to 4 weeks in advance
• AI chat answers operational questions with cited, verifiable responses
• Any feature reachable in 2 clicks or fewer from the sidebar
• Workload fairness quantified as a single composite index per clinician

Reflections

Fairness needs a number. Qualitative workload discussions turn political fast. Giving leadership a normalized, composite fairness index transformed workload conversations from "I feel like I'm working more" to "the data shows a 12-point gap between these two clinicians this month." Quantification depoliticizes the conversation.

Language is a design material. In healthcare operations, word choice carries institutional weight. The shift from "high risk" to "higher risk" and from subjective labels to objective driver signals wasn't cosmetic. It determined whether department leads would actually show the tool to surgeons. Terminology was reviewed as carefully as layout.

Consistent interaction patterns scale better than clever features. By using the same KPI-cards-plus-table-plus-detail-panel structure across all nine feature areas, users who learn one screen can navigate every screen. The cost of this consistency is that some features could theoretically benefit from bespoke layouts, but the reduction in cognitive load across the full product was worth the tradeoff.

Designing with real components changes the output. Building with shadcn/ui primitives, Tailwind CSS v4 tokens, and Recharts meant every design decision was immediately testable in code. The resizable panel system, streaming AI states, and sparkline KPI cards were all designed knowing exactly how they'd be implemented, which eliminated the handoff gap entirely.