Dashboard

AI Status: Idle
Twitch: Disconnected
Discord: Disconnected
VTube Software: Disconnected
OBS: Disconnected
Performance: N/A

Real-time Activity

Chat Messages

  • Waiting for chat messages...

AI Responses / Thoughts

Waiting for AI activity...
Voice Activity: Idle

Screen Perception (Preview)

Screen preview updates periodically based on Vision Model settings.

Configuration

Personality Studio

Model Selection & Configuration

Large Language Model (LLM)

Specific model identifier for the selected provider/type.
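
For illustration only, a minimal sketch of passing a model identifier to an OpenAI-compatible provider; the provider, model name, and environment variable here are assumptions, not the application's actual wiring:

```python
import os
from openai import OpenAI  # assumption: an OpenAI-compatible provider is selected

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # key comes from the API Keys section

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """'model' is the specific model identifier entered in this field."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```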

Vision Model

Select the model for screen perception tasks.
How often to capture the screen for analysis.
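
For illustration, a sketch of periodic capture at a configurable interval, assuming the mss library and a placeholder describe_frame function standing in for the vision model call:

```python
import time
import mss
import mss.tools

CAPTURE_INTERVAL_SECONDS = 10  # hypothetical value of the capture interval setting

def describe_frame(png_bytes: bytes) -> str:
    """Placeholder for a call to the configured vision model."""
    return "a description of what is on screen"

def capture_loop() -> None:
    with mss.mss() as sct:
        while True:
            shot = sct.grab(sct.monitors[1])             # primary monitor
            png = mss.tools.to_png(shot.rgb, shot.size)  # encode for the model
            print(describe_frame(png))
            time.sleep(CAPTURE_INTERVAL_SECONDS)
```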

Voice Settings

Text-to-Speech (TTS)


Note: Emotion mapping from AI state to TTS prosody requires configuration (not yet implemented in the UI).
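
One possible shape such a mapping could take, sketched with pyttsx3 as a stand-in engine; the emotion names and prosody values are hypothetical:

```python
import pyttsx3  # assumption: a local TTS engine; the application's engine may differ

# Hypothetical emotion-to-prosody table; per the note above, this mapping
# is not yet configurable in the UI.
PROSODY = {
    "excited": {"rate": 210, "volume": 1.0},
    "calm":    {"rate": 160, "volume": 0.8},
    "neutral": {"rate": 180, "volume": 0.9},
}

def speak(text: str, emotion: str = "neutral") -> None:
    settings = PROSODY.get(emotion, PROSODY["neutral"])
    engine = pyttsx3.init()
    engine.setProperty("rate", settings["rate"])      # words per minute
    engine.setProperty("volume", settings["volume"])  # 0.0 to 1.0
    engine.say(text)
    engine.runAndWait()

speak("Thanks for the follow!", emotion="excited")
```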

Speech-to-Text (STT)

Specify a language code (e.g. 'en', 'ja'), or 'auto' if supported.
Helps detect when speech starts and stops, reducing unnecessary processing.
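
A minimal sketch of these two settings, assuming the faster-whisper library (not necessarily the engine the application uses):

```python
from faster_whisper import WhisperModel

def transcribe(path: str, language: str = "auto") -> str:
    """Transcribe an audio clip; 'auto' lets the model detect the language."""
    model = WhisperModel("base")  # hypothetical model size
    segments, info = model.transcribe(
        path,
        language=None if language == "auto" else language,
        vad_filter=True,  # skip non-speech segments to reduce processing
    )
    return " ".join(seg.text.strip() for seg in segments)
```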

Integration Settings

Twitch

Discord

VTubing Software

Default VTube Studio port is 8001. Warudo default is 19190.
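
A minimal sketch of a connection check against VTube Studio's public WebSocket API on that default port, assuming the websockets package:

```python
import asyncio
import json
import websockets  # assumption: the 'websockets' package handles the connection

VTS_URL = "ws://localhost:8001"  # default VTube Studio API port noted above

async def check_vts_state() -> dict:
    """Ask VTube Studio whether its API is active (APIStateRequest)."""
    request = {
        "apiName": "VTubeStudioPublicAPI",
        "apiVersion": "1.0",
        "requestID": "dashboard-ping",
        "messageType": "APIStateRequest",
    }
    async with websockets.connect(VTS_URL) as ws:
        await ws.send(json.dumps(request))
        return json.loads(await ws.recv())

if __name__ == "__main__":
    print(asyncio.run(check_vts_state()))
```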

OBS Studio

Alert Platforms (Webhooks)

Allows the AI to react to follows, subs, and other alerts. Requires webhook setup in Streamlabs.
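
A minimal sketch of an endpoint that could receive such webhooks, assuming Flask; the URL path and payload fields are hypothetical and depend on how the alert platform is configured:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhooks/alerts", methods=["POST"])  # hypothetical path configured in Streamlabs
def handle_alert():
    event = request.get_json(force=True) or {}
    # Field names are assumptions; check the platform's webhook documentation.
    kind = event.get("type", "unknown")    # e.g. "follow", "subscription"
    user = event.get("username", "someone")
    print(f"Alert received: {kind} from {user}")  # hand off to the AI here
    return "", 204

if __name__ == "__main__":
    app.run(port=5000)
```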

API Keys & Authentication

Enter API keys required by selected models or integrations. These should be stored securely by the application backend.

Required for sending messages. Generate an OAuth token via sites like twitchapps.com/tmi/ (use third-party generators with caution).
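
For illustration, a sketch of using the token to authenticate against Twitch IRC and send one message; the token, nick, and channel values are placeholders:

```python
import socket

# Placeholder values; the OAuth token is the one entered in this field.
TOKEN = "oauth:your_token_here"
NICK = "your_bot_account"
CHANNEL = "#your_channel"

def send_chat_message(text: str) -> None:
    """Authenticate against Twitch IRC with the OAuth token and send one message."""
    with socket.create_connection(("irc.chat.twitch.tv", 6667)) as sock:
        sock.sendall(f"PASS {TOKEN}\r\n".encode())
        sock.sendall(f"NICK {NICK}\r\n".encode())
        sock.recv(4096)  # consume the server welcome (or an auth failure notice)
        sock.sendall(f"PRIVMSG {CHANNEL} :{text}\r\n".encode())

send_chat_message("Hello chat!")
```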

Memory System (Long-Term)

Determines how the AI remembers past interactions long-term.

Vector Database Settings (if enabled)

Path for a local database, or URL for a remote one.
Model used to create embeddings for memory storage/retrieval.
Max number of past interactions/facts to fetch for context.
Helps condense information and manage database size.
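
A minimal sketch of these settings, assuming a local Chroma store (the actual backend may differ); the path, collection name, and example facts are hypothetical:

```python
import chromadb  # assumption: a local Chroma vector store

client = chromadb.PersistentClient(path="./memory_db")       # the local DB path setting
memory = client.get_or_create_collection("long_term_memory")

def remember(fact: str, fact_id: str) -> None:
    # Embeddings are created by the collection's embedding model
    # (the default one here, selectable via the embedding model setting).
    memory.add(documents=[fact], ids=[fact_id])

def recall(query: str, limit: int = 5) -> list[str]:
    # 'limit' corresponds to the max number of past interactions/facts to fetch.
    results = memory.query(query_texts=[query], n_results=limit)
    return results["documents"][0]

remember("Viewer Alice loves rhythm games.", "fact-0001")
print(recall("What games does Alice like?"))
```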

Action & Tool Configuration

Enable and configure actions the AI can perform beyond speaking.

Available Actions:

(Requires OBS Integration)

More actions will appear here as they are developed or added via plugins.
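
A minimal sketch of how an action registry could be wired, with a hypothetical switch_scene action gated on the OBS integration being connected:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    handler: Callable[..., None]
    requires: str | None = None   # e.g. "obs" when the OBS integration must be connected

REGISTRY: dict[str, Action] = {}

def register(action: Action) -> None:
    REGISTRY[action.name] = action

def run(name: str, connected_integrations: set[str], **kwargs) -> None:
    action = REGISTRY[name]
    if action.requires and action.requires not in connected_integrations:
        print(f"Skipping '{name}': {action.requires} integration is not connected.")
        return
    action.handler(**kwargs)

# Hypothetical action that would call into the OBS integration.
register(Action("switch_scene", lambda scene: print(f"(OBS) switching to {scene}"), requires="obs"))

run("switch_scene", connected_integrations=set(), scene="BRB")    # skipped: OBS disconnected
run("switch_scene", connected_integrations={"obs"}, scene="BRB")  # runs
```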

Advanced Settings

Influences internal trade-offs between speed, resource usage, and response quality/complexity.
Controls the verbosity of logs shown in the 'Application Logs' section.
Displays intermediate steps, such as retrieved memories or action plans, in the AI Responses view (if supported by the pipeline).
-1 for auto/max, 0 for CPU only. Requires GPU support.
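
If the local backend is llama.cpp-based (an assumption), the -1/0 convention maps directly onto its n_gpu_layers parameter, as in this sketch using llama-cpp-python:

```python
from llama_cpp import Llama  # assumption: a llama.cpp-based local backend

# n_gpu_layers follows the same convention as this setting:
# -1 offloads as many layers as possible, 0 keeps everything on the CPU.
llm = Llama(model_path="models/your-model.gguf", n_gpu_layers=-1)

output = llm("Say hello to the stream in one sentence.", max_tokens=48)
print(output["choices"][0]["text"])
```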

Application Logs

Application logs will appear here...