Role

UX/UI designer

Target

B2C

Year

2019

Duration

6 months

Skills

Sketch

APEER - Machine learning

Overview

The target audience for APEER includes scientists, researchers, and students in the biomedical, pharmaceutical, and life sciences fields who rely on microscopy imaging but often lack coding or machine learning expertise. From cell biologists and pathologists to neuroscience students, these users need intuitive, no-code tools to analyze large image datasets efficiently and accurately. APEER meets their needs with AI-powered, cloud-based solutions that automate image segmentation and data extraction, enabling faster, more reproducible research and learning without technical barriers.

The Challenge

As a UX designer, I was tasked with introducing a complex machine learning workflow—annotate, train, and segment—to non-technical users. The topic itself was challenging, and I had to design the experience as an MVP using only pre-existing components and patterns from the UI library. My focus was on simplifying the process without losing depth, using clear guidance, visual feedback, and progressive disclosure to make each step intuitive. The goal was to make AI feel less like a black box and more like a powerful, accessible tool.
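To ground what these three steps actually involve, below is a minimal sketch of the kind of pixel-classification pipeline the interface abstracts away. It assumes scikit-image and scikit-learn and uses hypothetical file names; it illustrates the annotate, train, segment sequence in general, not APEER's actual implementation.

```python
import numpy as np
from skimage import filters, io
from sklearn.ensemble import RandomForestClassifier

# 1. Annotate: an image plus a user-drawn label mask
#    (0 = unlabeled, 1 = background, 2 = cell). File names are hypothetical.
image = io.imread("slide.png", as_gray=True)
labels = io.imread("annotations.png")

# Simple per-pixel features: raw intensity plus an edge response.
features = np.stack([image, filters.sobel(image)], axis=-1).reshape(-1, 2)

# 2. Train: fit a classifier on the annotated pixels only.
annotated = labels.ravel() > 0
clf = RandomForestClassifier(n_estimators=50)
clf.fit(features[annotated], labels.ravel()[annotated])

# 3. Segment: predict a class for every pixel of an unseen image.
new_image = io.imread("new_slide.png", as_gray=True)
new_features = np.stack([new_image, filters.sobel(new_image)], axis=-1).reshape(-1, 2)
segmentation = clf.predict(new_features).reshape(new_image.shape)
```

Every line of this sketch is something a non-technical user would otherwise have to write by hand, which is exactly the burden the guided experience was meant to remove.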

My Role & Responsibilities

As the sole UX/UI designer, I:

  • Conducted user interviews, built personas, and synthesized research to frame the user experience.

  • Translated complex ML concepts into a user-friendly experience.

  • Visualized task analysis and developed user flows to simplify the machine learning process (annotate, train, segment).

  • Improved the UX of the in-house annotation tool.

  • Designed the MVP experience using pre-existing UI components and patterns, while navigating strict limitations from business and development teams.

  • Collaborated with product managers and developers to ensure technical feasibility and maintain a balance between user needs and technical constraints.

  • Advocated for non-technical users by simplifying language, interactions, and visual feedback.

Research Process

To deeply understand the user experience around machine learning in microscopy, I conducted interviews with six participants across different levels of technical expertise. After each session, I mapped out the insights on a shared board using post-its, organizing observations around key moments in the workflow. This visual mapping allowed patterns and recurring pain points to emerge quickly.

Through this process, I identified a clear sequence of core steps users needed to go through: annotating datasets, training the model, receiving augmented data, processing 2D segmentations, and finally extracting insights. Each of these steps came with its own set of mental models, expectations, and areas of uncertainty.
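To make the last of these steps concrete, here is a small sketch of how measurements could be extracted from a 2D segmentation using skimage.measure. It builds on the hypothetical `segmentation` array from the earlier sketch and illustrates the concept only; it is not APEER's API.

```python
from skimage import measure

# Label connected components for one class of the segmentation
# (class 2 = cell in the earlier sketch), then measure each object.
cells = measure.label(segmentation == 2)
for region in measure.regionprops(cells):
    print(f"cell {region.label}: area={region.area}, "
          f"eccentricity={region.eccentricity:.2f}")
```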

To make sense of the complexity, I created a synthesis of the interviews that distilled user goals, frustrations, and informational gaps. This synthesis helped me uncover what users needed to know at each step and where they struggled most—especially with confidence, understanding outcomes, and knowing what would happen next.

I then built a detailed task analysis of the entire machine learning process, breaking it down step by step from the user's point of view. This task analysis became a foundational reference that guided design decisions throughout the project. It helped me align the interface with users’ mental models and ensure that each part of the experience felt purposeful, clear, and supportive—even within the tight constraints of our MVP.

Sacrificial Concepts

Based on the research insights, I created several sacrificial concepts to test with users. These included:

1. A mobile interface for pathologists to receive real-time notifications
2. A desktop interface for surgical teams to manage cases
3. A collaborative viewing platform for image analysis
4. A digital annotation system for highlighting areas of interest

These concepts were presented through interactive prototypes and tested with users at various hospitals, including follow-up sessions at Klinikum rechts der Isar.

Prototyping and Testing

The sacrificial concepts helped me and the team create the ideal user journey, which I then used to design high-fidelity prototypes on my own in Figma and Framer. I conducted multiple rounds of task-based usability testing. The prototypes were tested with:

- 11 neurosurgeons
- 14 pathologists
- 5 operating room nurses

The final design included three main components:

1. Surgical Team Interface
- Real-time image capture and sharing
- Case management dashboard
- Integration with existing hospital systems

2. Pathologist Interface
- High-resolution image viewing
- Digital annotation tools
- Synchronous communication features

3. Mobile Companion App (concept level only)
- Push notifications for urgent cases
- Quick case review capabilities
- Secure communication channel

Concepts translated into UI screens for testing

Design Solution for Showroom

For the AANS (American Association of Neurological Surgeons) fair, the prototypes were used to present and simulate the future evolution of Convivo, generating excitement within the community and gathering further valuable feedback from visitors.