Basic

A prompt-driven therapist with optional chain-of-thought reasoning, configurable for different therapeutic approaches.

Overview

| Property | Value |
| --- | --- |
| Key | `basic` |
| Type | LLM-based |
| Focus | Configurable via prompt template |

Key Features

  • Prompt-driven: Therapeutic approach is defined entirely by the prompt template
  • Chain-of-thought (CoT): Optional reasoning step before generating a response
  • Configurable: Swap prompt files to target different therapeutic modalities (CBT, MI, etc.)

How It Works

  1. Prompt Loading: Loads a system prompt from the configured prompt_path.
  2. Session Init: Builds the conversation with the system prompt as the first message.
  3. Response Generation: On each turn, appends the user message, calls the LLM, and appends the assistant reply to history.
  4. CoT Mode (optional): When use_cot=True, the model returns structured reasoning alongside the response content.
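
The turn loop described in steps 2-3 can be sketched as follows. This is a minimal illustration, not the project's actual implementation; `call_llm` is a hypothetical stand-in for the real provider call.

```python
# Minimal sketch of the session/turn loop described above.
# `call_llm` is a hypothetical placeholder for the configured model call.

def call_llm(messages):
    # Placeholder: a real implementation would send the message history
    # to the configured LLM provider and return its reply text.
    return f"(reply to: {messages[-1]['content']})"

class BasicTherapist:
    def __init__(self, system_prompt):
        # Session init: the system prompt is the first message in history.
        self.history = [{"role": "system", "content": system_prompt}]

    def generate_response(self, user_message):
        # Append the user turn, call the LLM, then record the assistant reply.
        self.history.append({"role": "user", "content": user_message})
        reply = call_llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

therapist = BasicTherapist("You are a CBT therapist.")
print(therapist.generate_response("I've been feeling anxious."))
```

Keeping the system prompt as the first history entry is what lets the same loop serve any therapeutic modality: only the prompt file changes.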

Usage

CLI

```shell
uv run python -m examples.simulate therapist=basic
```

Python

```python
from patienthub.therapists import get_therapist

therapist = get_therapist(agent_name="basic", lang="en")

response = therapist.generate_response("Hello, I've been feeling anxious lately.")
print(response.content)
```

Configuration

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `prompt_path` | string | `data/prompts/therapist/CBT.yaml` | Path to prompt file |
| `use_cot` | bool | `False` | Enable chain-of-thought structured output |
| `model_type` | string | `"OPENAI"` | Model provider key |
| `model_name` | string | `"gpt-4o"` | The LLM model to use |
| `temperature` | float | `0.7` | Controls response randomness |
| `max_tokens` | int | `8192` | Max response tokens |
| `max_retries` | int | `3` | API retry attempts |
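
For reference, the defaults from the table above as a plain Python dict; how these values are actually passed to the agent (config file vs. keyword arguments) is project-specific and not specified here.

```python
# Default configuration for the "basic" therapist, per the table above.
# How these are wired into the agent is project-specific.
basic_defaults = {
    "prompt_path": "data/prompts/therapist/CBT.yaml",
    "use_cot": False,
    "model_type": "OPENAI",
    "model_name": "gpt-4o",
    "temperature": 0.7,
    "max_tokens": 8192,
    "max_retries": 3,
}
```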

Response Format

Without CoT, returns a plain string response. With use_cot=True, returns a structured object:

```python
from pydantic import BaseModel

class ResponseWithCOT(BaseModel):
    reasoning: str  # Internal chain-of-thought (not shown to client)
    content: str    # Therapist's actual response
```
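
With CoT enabled, only `content` should be surfaced to the client while `reasoning` stays internal. A sketch of that split, using a dataclass as a stand-in for the real Pydantic model (the field values here are illustrative, not actual model output):

```python
from dataclasses import dataclass

# Stand-in for ResponseWithCOT (the real class is a Pydantic BaseModel).
@dataclass
class ResponseWithCOT:
    reasoning: str
    content: str

resp = ResponseWithCOT(
    reasoning="User reports anxiety; validate feelings, then explore triggers.",
    content="That sounds difficult. When did you first notice the anxiety?",
)

# Only `content` is shown to the client; `reasoning` is kept for logging/analysis.
print(resp.content)
```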

Use Cases

  • General-purpose therapist baseline
  • CBT training simulations (default prompt)
  • Research on different therapeutic prompt designs
  • Educational demonstrations