Assessment Engine

Overview

Mind Measure uses a multimodal assessment engine that combines audio, visual, and text analysis to generate wellness scores. The system supports two distinct assessment types:

| Assessment Type | Purpose | Duration | Frequency |
|---|---|---|---|
| Baseline | Establish personal baseline with clinical questions | 5-10 minutes | Once (or periodically) |
| Daily Check-in | Quick wellness monitoring | 1-2 minutes | Daily or as needed |

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                      Mobile App (Capacitor)                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                  │
│   ┌─────────────────┐          ┌─────────────────┐              │
│   │   Baseline      │          │   Check-in      │              │
│   │   Assessment    │          │   Assessment    │              │
│   │   SDK           │          │   SDK           │              │
│   └────────┬────────┘          └────────┬────────┘              │
│            │                            │                        │
│            └────────────┬───────────────┘                        │
│                         │                                        │
│            ┌────────────▼────────────┐                          │
│            │    ElevenLabs SDK       │                          │
│            │  (Conversational AI)    │                          │
│            └────────────┬────────────┘                          │
│                         │                                        │
│            ┌────────────▼────────────┐                          │
│            │    MediaCapture         │                          │
│            │  (Audio + Video)        │                          │
│            └────────────┬────────────┘                          │
│                         │                                        │
└─────────────────────────┼────────────────────────────────────────┘

          ┌───────────────┼───────────────┐
          │               │               │
┌─────────▼─────┐ ┌───────▼───────┐ ┌─────▼─────────┐
│ Audio Feature │ │ Visual Feature│ │ Text Analysis │
│ Extraction    │ │ Extraction    │ │ (Bedrock)     │
│ (Client-side) │ │ (Rekognition) │ │ (Claude)      │
└───────┬───────┘ └───────┬───────┘ └───────┬───────┘
        │                 │                 │
        └─────────────────┼─────────────────┘

              ┌───────────▼───────────┐
              │   Fusion Algorithm    │
              │   (V2 Scoring)        │
              └───────────┬───────────┘

              ┌───────────▼───────────┐
              │   Aurora PostgreSQL   │
              │   (fusion_outputs)    │
              └───────────────────────┘

Key Components

ElevenLabs Integration

Mind Measure uses the ElevenLabs React SDK (not the HTML widget) for conversational AI:

import { useConversation } from '@elevenlabs/react';

// Register lifecycle callbacks for the conversational session
const conversation = useConversation({
  onMessage: (message) => handleMessage(message),
  onConnect: () => console.log('Connected'),
  onDisconnect: () => handleDisconnect()
});

// Start a session against the appropriate agent (requires an async context)
await conversation.startSession({
  agentId: 'agent_xxxx'
});

Agent IDs:

  • Baseline Assessment: agent_9301k22s8e94f7qs5e704ez02npe
  • Daily Check-in: agent_7501k3hpgd5gf8ssm3c3530jx8qx
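Since each assessment type maps to a dedicated agent, the session setup can resolve the agent ID from the assessment type. A minimal sketch (the `AssessmentType` alias and `getAgentId` helper are illustrative names, not part of the SDK; the IDs are the ones listed above):

```typescript
// Agent IDs as configured in the ElevenLabs dashboard (values from this doc)
type AssessmentType = 'baseline' | 'checkin';

const AGENT_IDS: Record<AssessmentType, string> = {
  baseline: 'agent_9301k22s8e94f7qs5e704ez02npe',
  checkin: 'agent_7501k3hpgd5gf8ssm3c3530jx8qx',
};

// Resolve which agent to start a session with
function getAgentId(type: AssessmentType): string {
  return AGENT_IDS[type];
}
```

The resolved ID is what gets passed to `conversation.startSession({ agentId })`.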

Multimodal Feature Extraction

The system extracts features from three modalities:

| Modality | Features | Extraction Method |
|---|---|---|
| Audio | 10 features | Client-side Web Audio API |
| Visual | 10 features | AWS Rekognition |
| Text | 16+ features | AWS Bedrock (Claude 3 Haiku) |
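To illustrate the client-side audio path: the doc does not enumerate the 10 audio features, but two common voice features (RMS energy and zero-crossing rate) can be computed directly from a raw PCM buffer such as `AnalyserNode.getFloatTimeDomainData()` returns. A sketch, with illustrative function names:

```typescript
// RMS energy: overall loudness of the frame
function rmsEnergy(samples: Float32Array): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}

// Zero-crossing rate: rough proxy for pitch/noisiness
function zeroCrossingRate(samples: Float32Array): number {
  let crossings = 0;
  for (let i = 1; i < samples.length; i++) {
    if ((samples[i - 1] >= 0) !== (samples[i] >= 0)) crossings++;
  }
  return crossings / samples.length;
}
```

Running these per frame, entirely in the browser, avoids uploading raw audio for feature extraction.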

Scoring

The final Mind Measure score (0-100) is computed using weighted fusion:

V2 Scoring Weights (December 2025):

  • 70% Text (Bedrock analysis)
  • 15% Audio (voice features)
  • 15% Visual (facial features)

See Scoring Algorithm for full details.
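The weighted fusion step can be sketched as a single weighted sum using the V2 weights above. This is a simplification: the full algorithm also handles missing modalities and confidence weighting (see Scoring Algorithm); `fuseScores` is an illustrative name.

```typescript
// V2 fusion weights (December 2025), from this doc
const WEIGHTS = { text: 0.7, audio: 0.15, visual: 0.15 };

// Combine per-modality scores (each 0-100) into the final Mind Measure score
function fuseScores(scores: { text: number; audio: number; visual: number }): number {
  const raw =
    WEIGHTS.text * scores.text +
    WEIGHTS.audio * scores.audio +
    WEIGHTS.visual * scores.visual;
  // Clamp and round to an integer in 0-100, matching the stored `score` column
  return Math.round(Math.min(100, Math.max(0, raw)));
}
```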

Data Flow

  1. User initiates assessment (baseline or check-in)
  2. ElevenLabs session starts with appropriate agent
  3. MediaCapture begins recording audio and capturing video frames
  4. Conversation proceeds with real-time transcript accumulation
  5. User finishes assessment
  6. MediaCapture stops and returns audio blob + video frames
  7. Parallel analysis runs:
    • Audio features extracted client-side
    • Visual features extracted via Rekognition API
    • Text analysed via Bedrock API
  8. Fusion algorithm combines modality scores
  9. Result saved to Aurora PostgreSQL
  10. Dashboard updated with new score and insights
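Steps 7-8 above (parallel analysis, then fusion) can be sketched with `Promise.all`. The three analyzer functions here are hypothetical stand-ins for the client-side audio extractor, the Rekognition call, and the Bedrock call:

```typescript
// Each analyzer resolves to a 0-100 modality score
type Analyzers = {
  extractAudio: () => Promise<number>;   // client-side Web Audio features
  extractVisual: () => Promise<number>;  // AWS Rekognition
  analyzeText: () => Promise<number>;    // AWS Bedrock (Claude 3 Haiku)
};

async function runAnalysis(a: Analyzers): Promise<number> {
  // Run all three modality analyses concurrently
  const [audio, visual, text] = await Promise.all([
    a.extractAudio(),
    a.extractVisual(),
    a.analyzeText(),
  ]);
  // Fuse with the V2 weights: 70% text, 15% audio, 15% visual
  return Math.round(0.7 * text + 0.15 * audio + 0.15 * visual);
}
```

The fused result is what gets written to `fusion_outputs` in step 9.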

Database Schema

Assessment results are stored in fusion_outputs:

CREATE TABLE fusion_outputs (
  id UUID PRIMARY KEY,
  user_id UUID REFERENCES auth.users,
  score INTEGER,           -- Mind Measure score (0-100)
  final_score INTEGER,     -- Same as score for now
  analysis JSONB,          -- Full analysis payload
  created_at TIMESTAMPTZ
);

The analysis JSONB contains:

  • assessment_type: 'baseline' or 'checkin'
  • mind_measure_score: Final score
  • mood_score: User's explicit mood rating (1-10)
  • themes: Detected conversation themes
  • keywords: Key terms extracted
  • driver_positive: Positive wellbeing factors
  • driver_negative: Negative wellbeing factors
  • conversation_summary: AI-generated summary
  • modalities: Per-modality scores and confidence
  • uncertainty: Confidence in the overall score (0-1)
  • risk_level: 'none', 'mild', 'moderate', 'high'

Last Updated: December 2025
Version: 2.0 (V2 Scoring)