Clear, shared goals are one of the strongest predictors of therapeutic engagement and outcomes. Co-creating goals with a practitioner strengthens the “working alliance” and makes progress easier to track and recognise. Yet research also shows that clinicians’ and clients’ goals do not always align, and that this mismatch can slow or undermine care.

At the same time, AI is moving from novelty to infrastructure in mental health: mood-tracking companions, hybrid digital-human programmes, and triage chatbots are showing early promise, particularly for mild to moderate symptoms, and are rightly being scrutinised for the strict safeguards they need.
Put these two currents together and a practical future emerges: collaborative goal-setting co-authored by clients, clinicians, and AI, with each doing what they do best.
The client
Brings daily data (sleep, activity, triggers), values, living context, and preferences.
Owns the definition of “better” and sets limits on what may be shared and measured.
The clinician
Offers risk assessment, formulation, and evidence-based techniques (e.g., ACT, BA, CBT).
Safeguards safety, equity, and ethics while converting values into treatment targets.
Determines, in line with local guidelines, when digital tools are appropriate (and when they are not).
The AI copilot
Structures the discussion, drafts candidate goals, checks that they are realistic and measurable, and tracks progress between sessions.
Surfaces visible trends, such as “panic spikes on days with <6 hours sleep.”
Flags drift from goals, follows safety protocols in a crisis, and leaves clinical judgements to humans.
SMART-ER objectives
Turn outcomes into goals that are Specific, Measurable, Achievable, Relevant, Time-bound, Equity-aware, and Revisable.
For instance, “Do ten minutes of paced breathing after lunch, five days a week, for three weeks; evaluate utility on September 28.”
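To make this concrete for builders, here is a minimal sketch of how a copilot might represent such a goal internally. The field names and the example values are illustrative assumptions, not a standard schema (the year on the review date is assumed, since the article gives only the day and month).

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SmartERGoal:
    """Hypothetical SMART-ER goal record for an AI copilot (illustrative only)."""
    specific: str          # what exactly will be done
    measure: str           # how completion is counted
    target_per_week: int   # achievable dose agreed with the clinician
    relevant_to: str       # the client's own value or priority
    review_date: date      # time-bound checkpoint
    equity_notes: str = ""     # access or context constraints to respect
    revisable: bool = True     # goals are expected to change at review

goal = SmartERGoal(
    specific="10 minutes of paced breathing after lunch",
    measure="sessions logged per week",
    target_per_week=5,
    relevant_to="reduce somatic arousal before afternoon meetings",
    review_date=date(2025, 9, 28),   # year assumed for the example
    equity_notes="no wearable required; self-report accepted",
)
```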
Consent and a shared plan
The app’s purpose is stated plainly (“helps reduce somatic arousal before afternoon meetings”), along with its data sources (self-report + heart-rate trend). Consent is revocable, and clients decide what is collected and shared.
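One way to picture revocable, client-controlled consent is a simple per-source record that the copilot must check before collecting anything. This is a sketch under assumed names and structure, not a real consent standard.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ConsentRecord:
    """Hypothetical per-source consent, revocable at any time (illustrative)."""
    purpose: str
    sources: dict[str, bool]            # data source -> currently consented?
    granted_at: datetime | None = None
    revoked_at: datetime | None = None

    def revoke(self, source: str) -> None:
        """Client withdraws one data source; collection must stop immediately."""
        self.sources[source] = False
        self.revoked_at = datetime.now()

consent = ConsentRecord(
    purpose="Helps reduce somatic arousal before afternoon meetings",
    sources={"self_report": True, "heart_rate_trend": True},
    granted_at=datetime.now(),
)
consent.revoke("heart_rate_trend")   # the client can narrow sharing at any point
```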
Micro-support between sessions
The copilot sends low-friction prompts (“ready for 10-min breathing?”), records completion, and invites quick reflections (“what made it easier today?”). The clinician sees a succinct summary rather than a firehose of raw data.
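To illustrate the “succinct summary, not a firehose” idea, a copilot might collapse raw prompt-and-completion logs into a few numbers before a clinician ever sees them. The log format below is an assumption made for the sketch.

```python
# Assumed raw log: one entry per between-session prompt.
log = [
    {"goal": "paced breathing", "completed": True,  "note": "easier after a short walk"},
    {"goal": "paced breathing", "completed": False, "note": ""},
    {"goal": "paced breathing", "completed": True,  "note": "did it in the car"},
]

def clinician_digest(entries: list[dict]) -> dict:
    """Collapse between-session logs into a short summary for review."""
    done = sum(e["completed"] for e in entries)
    notes = [e["note"] for e in entries if e["note"]]
    return {
        "goal": entries[0]["goal"],
        "completion_rate": round(done / len(entries), 2),
        "client_observations": notes[:3],   # a few reflections, not the firehose
    }

print(clinician_digest(log))
# {'goal': 'paced breathing', 'completion_rate': 0.67, 'client_observations': [...]}
```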
Checkpoint reviews
Every two to four weeks, the team reviews goal attainment, symptom change, burden, and fit; goals are revised or retired. This rhythm mirrors how NHS Talking Therapies pairs digitally enabled therapies with human support.
What AI is particularly good at (and what it shouldn’t do)
Advantages
Scalable structure: turns hazy aspirations into measurable goals and keeps documentation organised.
Pattern spotting: surfaces subtle but significant patterns in mood and behaviour logs that could otherwise go unnoticed (see the sketch after this list).
Engagement nudges: timely, tailored reminders improve adherence to at-home routines. Early RCTs of guided chatbot use show short-term reductions in anxiety and depression symptoms; promising, but not yet conclusive.
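As a toy illustration of the pattern spotting described above (for example, “panic spikes on days with <6 hours sleep”), a copilot might compare symptom logs across a simple sleep threshold. The data here is invented and pandas is assumed to be available.

```python
import pandas as pd

# Invented week of daily logs: hours slept and panic episodes recorded by the client.
log = pd.DataFrame({
    "sleep_hours":    [7.5, 5.0, 8.0, 5.5, 6.5, 4.5, 7.0],
    "panic_episodes": [0,   3,   0,   2,   1,   4,   0],
})

# Compare average panic frequency on short-sleep (<6 h) days versus the rest.
log["short_sleep"] = log["sleep_hours"] < 6
summary = log.groupby("short_sleep")["panic_episodes"].mean()
print(summary)   # short-sleep days average 3.0 episodes vs 0.25 otherwise

# A gap like this is a prompt for a human conversation, not a clinical conclusion.
```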
Limits
Risk and diagnosis: AI cannot diagnose or triage risk on its own; escalation pathways to qualified professionals are non-negotiable.
The replacement myth: chatbots can support human therapy but not replace it; professional bodies and regulators increasingly insist on strong evidence and clinical oversight.
Guardrails: building this responsibly
Designing with humans in the loop
In the UK, digitally enabled therapies are recommended as supported interventions: clients use the technology while a qualified professional oversees their care and outcomes. The same principle should apply to AI copilots.
Safety protocols
Real-time crisis routes, with clinician-visible logs of triggers and escalations.
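A minimal sketch of the underlying principle: any high-risk signal routes straight to humans and is logged for review rather than being handled by the bot. The keyword list below is a placeholder for illustration, not a validated risk model.

```python
from datetime import datetime

# Placeholder crisis cues; a real service would rely on validated measures
# and clinician-defined criteria, not a keyword list.
CRISIS_CUES = ("hopeless", "self-harm", "suicide")

escalation_log: list[dict] = []

def route(message: str) -> str:
    """Send risky messages to a human immediately; never auto-handle a crisis."""
    if any(cue in message.lower() for cue in CRISIS_CUES):
        escalation_log.append({"at": datetime.now(), "trigger": message})
        return "ESCALATE: notify clinician and surface round-the-clock helplines now"
    return "CONTINUE: routine copilot support"

print(route("Feeling hopeless tonight"))   # escalates and records a reviewable log entry
```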
Transparency and data minimisation
Simple off-switches, plain-language model cards, clearly stated performance limitations, and explicit agreement about what is collected, why, and for how long.
Fairness and bias checks
Run routine audits for disparities in goal-attainment rates or recommendations across groups; publish disparity metrics alongside remediation plans, in line with the WHO’s AI-for-health guidance and its latest guidance on large multimodal models (LMMs).
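To make “publish disparity metrics” concrete, a routine audit might compare goal-attainment rates across client groups and flag gaps above a chosen threshold. The groups, outcomes, and threshold here are invented for illustration; services would define their own.

```python
# Invented audit data: goal-attainment outcomes grouped by an access-need category.
outcomes = {
    "no_access_needs": [1, 1, 0, 1, 1, 0, 1, 1],   # 1 = goal attained
    "screen_reader":   [1, 0, 0, 1, 0, 0, 1, 0],
}

DISPARITY_THRESHOLD = 0.15   # illustrative; each service should set and justify its own

rates = {group: sum(vals) / len(vals) for group, vals in outcomes.items()}
gap = max(rates.values()) - min(rates.values())

print(rates)   # {'no_access_needs': 0.75, 'screen_reader': 0.375}
if gap > DISPARITY_THRESHOLD:
    print(f"Disparity of {gap:.2f} exceeds threshold; review prompts and report a remediation plan")
```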
Levels of evidence
Choose tools with solid empirical support or peer-reviewed trials.
Track within-service outcomes (goal attainment, dropout, symptom change) and retire underperforming tools.
Important metrics (not only symptom scores)
Goal Attainment Scaling (GAS): weights progress against the client’s top priorities (see the scoring sketch after this list).
Alliance and SDM metrics: quick in-app pulse checks on the working alliance and shared decision-making, both of which predict outcomes.
Fit and burden: weekly minutes, perceived usefulness, and sources of friction.
Equity lens: break down attainment and dropout by demographics and access needs (with consent).
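For readers unfamiliar with GAS: attainment on each goal is scored from -2 (much less than expected) to +2 (much more than expected) and combined into a T-score using the Kiresuk-Sherman formula. The sketch below uses invented weights and scores and assumes the conventional inter-goal correlation constant of 0.3.

```python
from math import sqrt

def gas_t_score(scores: list[int], weights: list[float], rho: float = 0.3) -> float:
    """Kiresuk-Sherman GAS T-score: 50 means goals were met as expected on average.

    scores  -- attainment per goal, from -2 (much less) to +2 (much more than expected)
    weights -- client-assigned importance of each goal
    rho     -- assumed inter-goal correlation (0.3 by convention)
    """
    weighted_sum = sum(w * x for w, x in zip(weights, scores))
    denom = sqrt((1 - rho) * sum(w * w for w in weights) + rho * sum(weights) ** 2)
    return 50 + 10 * weighted_sum / denom

# Invented example: three goals, the client's top priority weighted highest.
print(round(gas_t_score(scores=[+1, 0, -1], weights=[3, 2, 1]), 1))   # ≈ 54.4, slightly above expected
```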
A first service checklist
□ Define which problems and severity levels are in scope for AI-supported goal work, and which are not.
□ Map escalation pathways and make them one-tap accessible.
□ Standardise goal templates linked to your modalities, such as BA activity goals and CBT exposure ladders.
□ Establish shared dashboards and review schedules (e.g., every two to four weeks).
□ Publish a model fact sheet covering capabilities, limitations, drift monitoring, and harm-reporting procedures.
□ Evaluate against local KPIs and NICE-aligned outcomes; iterate or decommission as needed.
What can go wrong (and how to avoid it)
Over-reliance on automation: the client follows the app rather than their values → design for reflection and require regular goal re-endorsement.
Narrowing of care: only what is quantifiable gets attention → include qualitative wins and narrative outcomes.
Privacy backfire: intimate logs feel intrusive → default to minimal data and give clients granular opt-ins.
Equity gaps: prompts that ignore cultural or socioeconomic context → co-design with diverse users and review recommendations regularly.
Crisis mishandling: delayed human response → link to round-the-clock helplines, alert clinicians promptly, and never put emergencies behind automated gates.
The future of therapy isn’t humans versus AI; it’s clients deciding where to go, clinicians guiding the way, and AI keeping a clear map, milestones, and mile markers. Done with consent, transparency, and oversight, AI-assisted collaborative goal-setting can make care more person-centred, measurable, and responsive, without sacrificing what matters most: a skilled, safe, human relationship.
Note: This content is for information only and is not a substitute for professional care. If you or someone you know is in immediate danger, seek emergency help right away.