VoiceTray
Offline-first · System tray · Hotkeys · Snippets · Notes · Optional local LLM

Offline dictation that behaves like a trustworthy typing assistant.

VoiceTray lives in your tray. Press a hotkey, speak, and it types into whatever app you’re using. It cleans transcripts conservatively with deterministic rules, then optionally applies an on-device LLM cleanup step with strict safety checks and fallback.

What you get

Tray-first workflow

Runs in the system tray. One hotkey to dictate into the active app, another to save notes to files.

Offline speech-to-text

Uses Vosk locally for fast dictation without sending audio anywhere. Online STT remains an optional fallback when configured.

Snippet expansion

Expand short triggers into full text using snippets.txt after cleanup.
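A minimal sketch of how trigger expansion can work. The `trigger = expansion` line format shown here is an assumption for illustration; the real snippets.txt format may differ.

```python
import re

def load_snippets(text):
    """Parse `trigger = expansion` lines (assumed format) into a dict."""
    snippets = {}
    for line in text.splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            trigger, expansion = line.split("=", 1)
            snippets[trigger.strip()] = expansion.strip()
    return snippets

def expand(text, snippets):
    """Replace whole-word triggers with their expansions."""
    for trigger, expansion in snippets.items():
        text = re.sub(rf"\b{re.escape(trigger)}\b", expansion, text)
    return text

snippets = load_snippets("sig = Best regards, Alex\nbrb = be right back")
print(expand("ok brb, sig", snippets))
# → "ok be right back, Best regards, Alex"
```

Running expansion after cleanup (as the app does) means triggers are matched against already-normalized text, so casing and spacing quirks in the raw transcript don't block a match.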

Notes with timestamps

Save clean transcripts to saved_texts.txt for quick capture and later review.
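The note-saving step can be sketched as a timestamped append. The `[YYYY-MM-DD HH:MM:SS]` header format is illustrative, not necessarily what VoiceTray writes.

```python
from datetime import datetime
from pathlib import Path

def save_note(text, path="saved_texts.txt"):
    """Append a note with a timestamp header (illustrative format)."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"[{stamp}] {text}\n")

save_note("call the dentist tomorrow", "demo_notes.txt")
print(Path("demo_notes.txt").read_text(encoding="utf-8"))
```

Appending (rather than overwriting) keeps saved_texts.txt as a running capture log you can review later.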

Personal glossary

Protect names and technical terms, plus apply custom replacements using glossary.json.
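A sketch of glossary handling, assuming a glossary.json shape with a `replace` map for corrections and a `protected` list of terms whose casing must survive cleanup (the real schema may differ):

```python
import json
import re

def apply_glossary(text, glossary):
    """Apply custom replacements, then restore the canonical casing of
    protected terms. Assumed shape: {"replace": {...}, "protected": [...]}."""
    for wrong, right in glossary.get("replace", {}).items():
        text = re.sub(re.escape(wrong), right, text, flags=re.IGNORECASE)
    for term in glossary.get("protected", []):
        text = re.sub(rf"\b{re.escape(term)}\b", term, text, flags=re.IGNORECASE)
    return text

glossary = json.loads('{"replace": {"cube flow": "Kubeflow"}, "protected": ["PyTorch"]}')
print(apply_glossary("we deploy cube flow with pytorch", glossary))
# → "we deploy Kubeflow with PyTorch"
```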

Profiles & modes

Choose cleanup mode (raw/balanced/aggressive), formatting profile (email/chat/notes/code), and per-app overrides.
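Per-app overrides can be resolved as a simple merge onto the defaults; the dict structure and app identifiers below are hypothetical, for illustration only.

```python
def resolve_settings(active_app, defaults, overrides):
    """Merge per-app overrides (hypothetical structure) onto global defaults."""
    merged = dict(defaults)
    merged.update(overrides.get(active_app, {}))
    return merged

defaults = {"mode": "balanced", "profile": "notes"}
overrides = {
    "outlook.exe": {"profile": "email"},          # email formatting in the mail client
    "code.exe": {"mode": "raw", "profile": "code"},  # no rewriting inside the editor
}
print(resolve_settings("outlook.exe", defaults, overrides))  # profile becomes "email"
print(resolve_settings("unknown.exe", defaults, overrides))  # falls back to defaults
```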

Hybrid dictation intelligence
Stage A (rules): whitespace, punctuation spacing, conservative capitalization, repetition cleanup, optional list formatting, self-correction handling.
Stage B (local LLM, optional): tiny grammar/readability improvements under a strict JSON-only prompt.
Stage C (validation): reject risky edits (changed numbers, altered protected tokens, output too far from the input) and fall back to the rule output.
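The Stage C gate above can be sketched as a validator that returns either the LLM rewrite or the rule-cleaned fallback. The specific checks and the similarity threshold here are illustrative assumptions, not VoiceTray's exact rules.

```python
import re
from difflib import SequenceMatcher

def validate_llm_edit(rule_text, llm_text, protected=(), min_similarity=0.7):
    """Accept the LLM rewrite only if it keeps every number and protected
    token and stays close to the rule-cleaned text; otherwise fall back."""
    if re.findall(r"\d+", llm_text) != re.findall(r"\d+", rule_text):
        return rule_text  # numbers changed: reject
    for token in protected:
        if (token in rule_text) != (token in llm_text):
            return rule_text  # protected token added or dropped: reject
    if SequenceMatcher(None, rule_text, llm_text).ratio() < min_similarity:
        return rule_text  # rewrite drifted too far from the input: reject
    return llm_text

rule = "meet at 10 am to review the Vosk setup"
good = "Meet at 10 am to review the Vosk setup."
bad = "Let's meet at 11 to go over the speech setup."
print(validate_llm_edit(rule, good, protected=("Vosk",)))  # accepted
print(validate_llm_edit(rule, bad, protected=("Vosk",)))   # falls back to rule text
```

Because every rejection path returns the Stage A output, a misbehaving model can only ever degrade to rule-based cleanup, never corrupt the transcript.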

Trust over rewriting

Accuracy is prioritized over aggressive paraphrasing. If the model output looks unsafe, it is rejected automatically.

Setup
Install dependencies:
pip install -r requirements.txt
Run:
run_app.bat (or: pythonw speech_to_text_app.py)
Use: F9 to dictate into the active app, F10 to save a note.
Configure: open the tray menu → Settings (minimal GUI) or edit settings.txt.
Optional local LLM cleanup: tray menu → LLM Setup to install runtime and choose a GGUF model path.
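For hand-editing, settings.txt can be read as simple `key = value` lines; this parsing sketch and the keys in the example are assumptions for illustration, since the real file format and key names are not documented here.

```python
def parse_settings(text):
    """Parse `key = value` lines (assumed settings.txt format),
    skipping blank lines and # comments."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        settings[key.strip()] = value.strip()
    return settings

example = """
# hypothetical keys for illustration
cleanup_mode = balanced
profile = notes
"""
print(parse_settings(example))  # cleanup_mode and profile as a dict
```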

Local-first by default

VoiceTray is usable offline. Optional local LLM cleanup runs on-device and remains guarded by validation + fallback.