CLI Reference

Every flag, environment variable, and workflow for running Chalkboard from the terminal. For concepts like templates, themes, and effort levels, see the Guide.

Basic usage

The CLI entry point is main.py. At minimum, pass a topic:

python main.py --topic "explain how B-trees work"

This runs the full pipeline: script generation, fact-checking, Manim code generation, layout validation, TTS voiceover, Docker render, and ffmpeg merge. Output lands in output/<run-id>/final.mp4.

Common patterns

# Quick low-effort run
python main.py --topic "what is a hash table" --effort low

# High-effort with web research, algorithm template
python main.py --topic "explain merge sort" --effort high --template algorithm

# Expert audience, formal tone, light theme
python main.py --topic "CAP theorem" --audience expert --tone formal --theme light

# Pipeline only (no Docker render)
python main.py --topic "recursion basics" --no-render

CLI flags

Core

| Flag | Default | Description |
| --- | --- | --- |
| --topic | (required) | Topic to explain, e.g. "how B-trees work" |
| --effort | medium | Validation thoroughness: low, medium, high. See effort levels. |
| --audience | intermediate | Target audience: beginner, intermediate, expert |
| --tone | casual | Narration tone: casual, formal, socratic |
| --theme | chalkboard | Visual color theme: chalkboard, light, colorful. See themes. |
| --template | (none) | Animation template: algorithm, code, compare, howto, timeline. See templates. |
| --speed | 1.0 | Narration speed multiplier. OpenAI: native 0.25–4.0. Kokoro/ElevenLabs: ffmpeg atempo. |
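For backends without native speed control, the multiplier is applied through ffmpeg's atempo filter. A single atempo instance classically accepts 0.5–2.0 (newer ffmpeg builds allow more), so values outside that range need a chain of in-range factors. The sketch below illustrates that mapping; it is an assumption for clarity, not Chalkboard's actual code:

```python
def atempo_chain(speed: float) -> str:
    """Build an ffmpeg -filter:a value for an arbitrary speed multiplier.

    Each atempo instance is assumed to accept 0.5-2.0, so out-of-range
    speeds are split into a chain of in-range factors.
    """
    if speed <= 0:
        raise ValueError("speed must be positive")
    factors = []
    while speed > 2.0:          # too fast for one filter: peel off 2.0x steps
        factors.append(2.0)
        speed /= 2.0
    while speed < 0.5:          # too slow for one filter: peel off 0.5x steps
        factors.append(0.5)
        speed /= 0.5
    factors.append(round(speed, 6))
    return ",".join(f"atempo={f}" for f in factors)
```

The result would be passed to something like `ffmpeg -i voice.wav -filter:a "atempo=0.5,atempo=0.5" out.wav` (filenames hypothetical); `atempo_chain(0.25)` produces exactly that filter string.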

Context

| Flag | Default | Description |
| --- | --- | --- |
| --context | (none) | File or directory to use as source material. Repeatable. |
| --context-ignore | (none) | Glob pattern to exclude from context directories. Repeatable. |
| --url | (none) | URL to fetch as source material (HTML stripped to text). Repeatable. |
| --github | (none) | GitHub repo (owner/repo or full URL); fetches its README as context. Repeatable. |
| --yes | off | Skip the large-context confirmation prompt. Useful for scripted runs. |
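How repeatable --context and --context-ignore values combine can be pictured as a directory walk with glob exclusion. The sketch below is an illustration under assumed semantics (a pattern matches the relative path or any path component), not Chalkboard's actual collector:

```python
import fnmatch
import os

def collect_context(paths, ignore_globs=()):
    """Expand --context paths into a file list, honoring --context-ignore globs."""
    def ignored(rel):
        parts = rel.split(os.sep)
        return any(
            fnmatch.fnmatch(rel, pat.rstrip("/")) or
            any(fnmatch.fnmatch(part, pat.rstrip("/")) for part in parts)
            for pat in ignore_globs
        )
    files = []
    for path in paths:
        if os.path.isfile(path):        # a --context file is taken as-is
            files.append(path)
            continue
        for root, _dirs, names in os.walk(path):
            for name in names:
                full = os.path.join(root, name)
                rel = os.path.relpath(full, path)
                if not ignored(rel):    # "*.lock" or "dist/" style patterns
                    files.append(full)
    return sorted(files)
```

With ignore patterns `("*.lock", "dist/")`, lock files anywhere and everything under a dist/ directory are excluded, matching the example later in this section.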

Output

| Flag | Default | Description |
| --- | --- | --- |
| --quiz | off | Generate comprehension questions as quiz.json after the pipeline. |
| --burn-captions | off | Burn subtitles into the video (re-encodes). captions.srt is always written. |
| --run-id | auto | Resume a previous run using its ID. See resuming runs. |

Render

| Flag | Default | Description |
| --- | --- | --- |
| --preview | off | Render a fast low-quality preview (480p15) to preview.mp4. |
| --no-render | off | Run the AI pipeline only, skipping Docker render and ffmpeg merge. |
| --verbose | off | Stream raw Docker/Manim output to the terminal while rendering. Cannot be combined with --preview. |
| --qa-density | normal | Visual QA frame sampling: zero (skip), normal (1 frame/30s, up to 10), high (1 frame/15s, up to 20). |
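The --qa-density rule (one frame per interval, up to a cap) reduces to a small timestamp picker. The function below is a sketch; sampling at interval midpoints is an assumption made for illustration:

```python
def qa_sample_times(duration_s: float, density: str = "normal") -> list[float]:
    """Pick timestamps to screenshot for visual QA.

    zero: skip QA; normal: 1 frame per 30s, up to 10;
    high: 1 frame per 15s, up to 20.
    """
    settings = {"zero": None, "normal": (30.0, 10), "high": (15.0, 20)}
    if settings[density] is None:
        return []
    interval, cap = settings[density]
    # At least one frame, at most the cap, otherwise one per interval.
    n = min(cap, max(1, int(duration_s // interval)))
    # Spread frames evenly, sampling the midpoint of each slice (assumed).
    return [round((i + 0.5) * duration_s / n, 2) for i in range(n)]
```

A 90-second video at normal density yields three frames; a 10-minute video hits the cap of ten.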

Context injection

Pass local files, URLs, or GitHub repos as source material. The pipeline builds the animation from your content rather than Claude's training data alone.

Local files and directories

# Explain a codebase
python main.py --topic "explain this codebase" --context ./src --context ./docs

# Turn a paper into an animation
python main.py --topic "summarize this paper" --context paper.pdf

# Exclude lock files and build output
python main.py --topic "visualize this" --context ./repo --context-ignore "*.lock" --context-ignore "dist/"

# Obsidian vault page
python main.py --topic "visualize my notes" --context ~/Documents/vault/page.md

URLs and GitHub repos

# Web article
python main.py --topic "explain this concept" --url https://en.wikipedia.org/wiki/Binary_search_tree

# GitHub repo README
python main.py --topic "explain this project" --github nicglazkov/Chalkboard

# Combine files and URLs
python main.py --topic "explain my project" --context ./README.md --url https://example.com/blog-post

Supported file types

Text and code files (.py, .js, .md, .yaml, .ps1, .bat, and many more), images (.png, .jpg, .webp, .gif), PDFs, and Word docs (.docx). URLs are fetched with HTML stripped to plain text, truncated at 100k characters.
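Stripping a fetched page down to plain text with a character cap can be done with the standard library alone. The sketch below (the fetch itself is omitted) illustrates the idea; it is not Chalkboard's actual fetcher:

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style bodies."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.chunks = []
        self.skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.chunks.append(data.strip())

def html_to_context(html: str, limit: int = 100_000) -> str:
    """Strip HTML to plain text and truncate to the character limit."""
    parser = _TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)[:limit]
```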

File uploads via the web UI

The web UI also supports file uploads with drag-and-drop folder support. See file uploads in the Guide for details and size limits.

Token reporting

Before the pipeline starts, Chalkboard reports how much of the context window your source material uses. If context exceeds 10k tokens, you'll be prompted to confirm; pass --yes to skip this prompt in scripted runs. If context exceeds 90% of the model window, Chalkboard aborts.
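The two thresholds amount to a small decision function. The numbers come from this section; the function itself is illustrative, not Chalkboard's code:

```python
def context_gate(context_tokens: int, model_window: int) -> str:
    """Decide how to proceed based on context size.

    Returns "abort" above 90% of the model window, "confirm" above
    10k tokens (the prompt --yes skips), otherwise "ok".
    """
    if context_tokens > 0.9 * model_window:
        return "abort"
    if context_tokens > 10_000:
        return "confirm"
    return "ok"
```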

Resuming with context

--context, --url, and --github are not stored in the checkpoint. Pass them again on resume:

python main.py --topic "..." --run-id <id> --context ./src

Environment variables

All settings can be overridden via .env or environment variables. Copy .env.example to get started.

| Variable | Default | Options |
| --- | --- | --- |
| TTS_BACKEND | kokoro | kokoro, openai, elevenlabs. See TTS backends. |
| ANTHROPIC_API_KEY | (none) | Required. Get yours at console.anthropic.com |
| MANIM_QUALITY | medium | low, medium, high (1080p60) |
| DEFAULT_EFFORT | medium | low, medium, high |
| DEFAULT_AUDIENCE | intermediate | beginner, intermediate, expert |
| DEFAULT_TONE | casual | casual, formal, socratic |
| DEFAULT_THEME | chalkboard | chalkboard, light, colorful |
| OUTPUT_DIR | ./output | Any path |
| CHECKPOINT_DB | pipeline_state.db | Any path |
| SERVER_PORT | 8000 | Overridden by --port at runtime |
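Settings follow the usual precedence of CLI flag over environment variable over built-in default (SERVER_PORT vs --port is one documented case). A minimal sketch of that resolution order, assumed rather than taken from Chalkboard's config loader:

```python
import os

# Two defaults from the table above, for illustration.
DEFAULTS = {"DEFAULT_EFFORT": "medium", "DEFAULT_THEME": "chalkboard"}

def resolve(name: str, cli_value=None) -> str:
    """CLI flag beats the environment, which beats the built-in default."""
    if cli_value is not None:
        return cli_value
    return os.environ.get(name, DEFAULTS[name])
```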

Resuming runs

Every run is checkpointed after each pipeline stage. If it crashes or you abort, resume with the same run ID:

python main.py --topic "..." --run-id <previous-run-id>

Execution picks up from the last successful checkpoint. The pipeline does not re-run stages that already completed.
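Checkpoint-based resumption can be pictured as running an ordered stage list and skipping anything already recorded. Stage names and the storage shape below are assumptions for illustration:

```python
def run_pipeline(stages, completed, log):
    """Run stages in order, skipping any already checkpointed.

    stages: ordered (name, fn) pairs; completed: set of stage names
    from a previous run; log: list recording what actually happened.
    """
    for name, fn in stages:
        if name in completed:
            log.append(f"skip {name}")
            continue
        fn()
        completed.add(name)  # checkpoint after each successful stage
        log.append(f"run {name}")
    return completed
```

On resume, a crash after the script stage means only the later stages execute.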

Preview to full render workflow

Run with --preview first to check the visuals quickly at low quality, then do the full HD render. The pipeline result is already checkpointed, so the AI stages won't re-run:

# Step 1: generate script + animation, render preview
python main.py --topic "how B-trees work" --preview
# → output/<run-id>/preview.mp4 (480p, fast)

# Step 2: full HD render (pipeline skipped — uses checkpoint)
python main.py --topic "how B-trees work" --run-id <run-id>
# → output/<run-id>/final.mp4 (full quality + visual QA)