The hypothesis is not that AI should intervene more, but that it can structure better.
AI Mental Health Project Updates
A solo, early-stage exploration into how AI might support therapeutic continuity without replacing care.
This project sits at the uncomfortable intersection of emotional vulnerability and machine intelligence. It is an active attempt to think carefully about what AI should and should not do in mental health contexts.
The Idea
I am exploring whether AI can scaffold therapeutic continuity between sessions, not by offering advice, but by structuring reflection, surfacing patterns over time, and supporting follow-through.
The focus is continuity: helping people keep track of emotional themes, behavioural shifts, and therapeutic goals between care touchpoints, without simulating expertise or replacing human support.
Why
Core Problem
- Therapy sessions are spaced apart.
- Between sessions, insight fades.
- Emotional intensity shifts.
- Homework gets postponed.
- The therapeutic thread weakens.
- By the next session, people often feel like they are starting again.
Unstructured journaling can amplify rumination rather than guide reflection.
Many digital tools flatten emotional complexity or feel detached from lived context.
Mental health unfolds over time. Most tools react to moments.
Hypothesis
AI can scaffold structured reflection in ways that:
- Reinforce therapeutic themes without generating new advice
- Surface patterns across time rather than reacting to single entries
- Support follow-through without simulating authority
- Maintain boundaries that prevent emotional over-reliance
This is not about replacing therapy.
It is about strengthening the thread between sessions.
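To make "surface patterns across time rather than reacting to single entries" a little more concrete, here is a minimal sketch of the kind of mechanism I have in mind. All names are hypothetical, and it assumes themes are tagged by the user rather than inferred by a model; the system only counts recurrence and reports it back, without interpretation or advice.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class ReflectionEntry:
    """One between-session reflection, with themes the user tagged themselves."""
    day: date
    themes: list[str]   # e.g. ["sleep", "avoidance"]
    note: str           # free-text reflection; never analysed or rewritten here

def recurring_themes(entries: list[ReflectionEntry], min_count: int = 3) -> list[str]:
    """Surface themes that recur across entries; no interpretation, no advice."""
    counts = Counter(theme for entry in entries for theme in entry.themes)
    return [theme for theme, count in counts.most_common() if count >= min_count]

# Usage: show the user what keeps coming up, so they can raise it in session.
entries = [
    ReflectionEntry(date(2024, 5, 1), ["sleep", "avoidance"], "..."),
    ReflectionEntry(date(2024, 5, 3), ["sleep"], "..."),
    ReflectionEntry(date(2024, 5, 6), ["sleep", "conflict"], "..."),
]
print(recurring_themes(entries, min_count=3))   # ["sleep"]
```

The constraint is structural: the output is a list of the user's own themes to bring into session, not a generated recommendation.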
Why I Am Working On It
I studied artificial intelligence and psychology. I have seen how powerful generative systems can be and how easily they can simulate understanding.
I am not interested in building something that mimics therapy. I am interested in testing whether AI can strengthen human care without quietly replacing it.
You can learn more about my experience and education on the CV page.
Mental health support should prioritise continuity, agency, and transparency. If AI enters this space, it must do so with restraint.
Research Grounding
Longitudinal Monitoring
Mental health is longitudinal. Yet many digital tools respond to isolated entries.
I am interested in systems that think in trajectories, not moments: tools that track themes across time rather than reacting to single prompts.
Sleep, Mood, Behaviour
My thesis work on sleep and adolescent depression informs how I think about signals, emotional variability, and pattern detection.
Emotional states are rarely static.
Interpretability
In sensitive emotional contexts, opaque systems erode trust.
Interpretability is not a technical luxury; it is a safety feature. People should understand why something is being surfaced or highlighted.
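One way to keep that transparent is to require every highlighted item to carry its own plain-language reason, derived from a simple rule rather than an opaque score. A minimal sketch, again with hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class SurfacedPattern:
    """A highlighted pattern, always paired with the reason it was highlighted."""
    theme: str
    reason: str   # plain-language explanation shown alongside the highlight

def explain_recurrence(theme: str, count: int, window_days: int) -> SurfacedPattern:
    """Attach a transparent, rule-based reason instead of an unexplained score."""
    return SurfacedPattern(
        theme=theme,
        reason=f"'{theme}' appeared in {count} reflections over the last {window_days} days.",
    )

print(explain_recurrence("sleep", count=4, window_days=14).reason)
```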
Cognitive Science of Reflection
Structured reflection can improve therapeutic engagement when designed carefully.
The question is whether AI can support that structure without amplifying rumination.
Human-AI Trust
Trust should emerge from boundaries, transparency, and user control, not from anthropomorphic language or simulated empathy.
Adherence and Follow-Through
A central question is whether AI can help people sustain healthy reflection habits over time without becoming something they feel dependent on.
Boundaries
This project is intentionally constrained.
It is not:
- A therapist
- A diagnostic tool
- Crisis support
- A replacement for human care
The aim is augmentation, not substitution.
Current Stage
- Refining problem framing and design principles
- Developing early concept sketches and guardrails
- Writing publicly to clarify hypotheses
- Exploring research, funding, and translational pathways
This is a thinking-in-public process.
The direction will evolve.
How You Can Contribute
I am particularly interested in conversations that stress-test this direction clinically, technically, or ethically.
Startup and Product
- Founders and product collaborators
- Clinicians and domain advisors
- Design and engineering partners
Funding Conversations
- Early-stage backers and grant pathways
- Accelerator and incubator networks
- Impact-focused funding opportunities
Research and PhD
- Potential supervisors and research groups
- Collaborative research on mental health AI
- Translational work from research to practice
If this resonates, I would value a conversation.
Reach out via the Contact page.