Brady-Voice Skills — Architecture & Operations
For agents: Operational reference for the 12 production Brady-voice skills — file layout, Claude Code discovery, invocation flow, new-machine setup. Higher-level framing: README. Voice content the skills load: voice-profile and brand-voice.
The 12 skills
| Skill ID | Target platform | What it does |
|---|---|---|
newsletter-intro | newsletter | 1-2 paragraph opener for The 167 |
newsletter-outro | newsletter | Closing paragraph + soft CTA for The 167 |
youtube-title | youtube_longform | 4-12 word title for @prochurchtools videos |
youtube-script | youtube_longform | Full 8-12 minute video script (hook → 3-5 points → CTA → outro) |
podcast-show-notes | podcast | Show notes for Pro Church Tools podcast episodes |
| ig-caption | | IG caption for @bradyshearer |
| ig-carousel | | Full carousel — slide-by-slide poster text + caption + hashtags |
| ig-carousel-opener | | Just slide 1 — the hook frame that earns the swipe |
| ig-reel-script | | 30-60 second reel script for Brady to perform on camera |
blog-intro | blog | Opening paragraph(s) of a prochurchtools.com blog post |
| x-single | | Single standalone tweet from @BradyShearer |
| facebook-post | | Post for the Pro Church Tools FB page |
Each skill loads its own platform-specific voice subprofile — tweet voice ≠ podcast voice ≠ blog voice. The merged-corpus profile exists but isn’t used by any current skill; subprofiles produce sharper drafts.
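The subprofile a skill loads is keyed by its target platform. A minimal sketch of the path resolution, using the Firestore path shape shown later in this doc (the helper name is hypothetical):

```python
def voice_profile_path(target_platform: str) -> str:
    """Firestore document path for a platform voice subprofile."""
    return f"/voice_profile/platforms/by_platform/{target_platform}"

# Each skill passes its own target_platform, e.g.:
print(voice_profile_path("newsletter"))
# /voice_profile/platforms/by_platform/newsletter
```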
Where the files live (three layers)
Layer 1 — Skill code (Python)
Location: ~/Documents/content-engine/skills/{skill_name}/
Each skill is a Python package with three files:
skills/
├── _shared/
│ ├── runner.py # generic orchestrator — voice load, retrieval, Claude call
│ ├── retrieval.py # hybrid ranking + GCS pool sync
│ ├── voice_loader.py # Firestore voice profile loader
│ └── rag.py # optional RAG over the full archive
├── newsletter_intro/
│ ├── __init__.py
│ ├── config.py # SkillConfig (target_platform, pool_name, word_min/max, top_k)
│ └── prompt.py # SYSTEM_PROMPT template with {tonal_rules}, {signature_phrases}, etc.
├── ig_caption/
│ └── (same shape)
└── ... 10 more directories
Naming convention: Python module names use underscores (newsletter_intro, ig_caption).
On GitHub: bradyshearer/content-engine (private), tracked in git, pushed on every change.
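A `config.py` might look like the following minimal sketch. The field names come from the `SkillConfig` description above; the concrete values are illustrative, not the real configuration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SkillConfig:
    target_platform: str   # which voice subprofile to load
    pool_name: str         # few-shot pool file under data/skill_pools/
    word_min: int          # lower bound on draft length
    word_max: int          # upper bound on draft length
    top_k: int = 5         # how many few-shot examples to retrieve

# Illustrative instance for newsletter_intro (bounds are assumptions):
CONFIG = SkillConfig(
    target_platform="newsletter",
    pool_name="newsletter-intro",
    word_min=80,
    word_max=200,
)
```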
Layer 2 — SKILL.md manifests (Anthropic format)
Location: ~/Documents/content-engine/anthropic_skills/brady-{skill_id}/SKILL.md
Each skill has a corresponding SKILL.md file with YAML frontmatter that Claude Code reads to decide when to trigger:
anthropic_skills/
├── brady-newsletter-intro/SKILL.md
├── brady-ig-caption/SKILL.md
└── ... 10 more
Naming convention: Manifest folders use the brady- prefix and hyphens (brady-newsletter-intro). This matches the slash-command convention.
On GitHub: Same repo, tracked in git. Generated by scripts/install_skill_md.py.
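A generated manifest might look like this minimal sketch. The frontmatter fields follow the Anthropic skill format (`name`, `description`); the description text and body here are illustrative, not the real generated output:

```markdown
---
name: brady-newsletter-intro
description: Draft a 1-2 paragraph opener for The 167 newsletter in Brady's voice. Use when the user asks for a newsletter intro.
---

Run from the repo root:

    .venv/bin/python scripts/run_skill.py newsletter-intro "<TOPIC>"
```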
Layer 3 — Discoverable installation (symlinks)
Location: ~/.claude/skills/brady-{skill_id}/
These are symlinks pointing back to ~/Documents/content-engine/anthropic_skills/brady-{skill_id}/. Claude Code scans ~/.claude/skills/ on startup to find available skills.
~/.claude/skills/
├── brady-newsletter-intro -> ~/Documents/content-engine/anthropic_skills/brady-newsletter-intro
├── brady-ig-caption -> ~/Documents/content-engine/anthropic_skills/brady-ig-caption
└── ... 10 more
Symlinks instead of copies so a git pull auto-updates installed skills — edit prompt.py, push, next invocation on any machine picks it up.
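The symlink step can be sketched as follows. The real internals of `install_skill_md.py` are assumptions; only the source and target paths come from this doc:

```python
from pathlib import Path

def install_symlinks(manifests: Path, install_dir: Path) -> list[str]:
    """Symlink every brady-* manifest directory into the Claude Code skills dir."""
    install_dir.mkdir(parents=True, exist_ok=True)
    linked = []
    for manifest_dir in sorted(manifests.glob("brady-*")):
        link = install_dir / manifest_dir.name
        if link.is_symlink():
            link.unlink()          # re-link idempotently on re-runs
        link.symlink_to(manifest_dir)
        linked.append(manifest_dir.name)
    return linked

# Real invocation (paths per this doc):
#   install_symlinks(Path.home() / "Documents/content-engine/anthropic_skills",
#                    Path.home() / ".claude/skills")
```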
How the layers connect (anatomy of an invocation)
When you say “Brady newsletter intro about visitor follow-up” in Claude Code:
1. Claude Code reads frontmatter from ~/.claude/skills/brady-newsletter-intro/SKILL.md
↓
2. SKILL.md tells Claude: "run scripts/run_skill.py newsletter-intro '<TOPIC>'"
↓
3. run_skill.py imports skills/newsletter_intro/config.py + prompt.py
↓
4. skills/_shared/runner.py orchestrates:
├── load_voice() — pulls /voice_profile/platforms/by_platform/newsletter from Firestore
├── load_pool() — reads data/skill_pools/newsletter-intro.json (auto-syncs from GCS if stale)
├── embed_query() — Voyage embeds the topic
├── hybrid_rank() — selects top 5 examples by 0.55 × semantic + 0.25 × engagement + 0.15 × recency + 0.05 × diversity
├── retrieve_chunks() — optional RAG over /chunks (~30 sec slower; off by default)
├── build system prompt: tonal_rules + forbidden_moves + rhetorical_moves + signature_phrases + few_shot_examples + topic
└── client.messages.create() — Claude Sonnet 4.5
↓
5. Returns: draft text + provenance (which examples were used, cost, tokens)
Per-call cost: typically $0.01-0.05 depending on skill (carousel is the most expensive at ~$0.05; tweets are pennies).
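The hybrid ranking in step 4 can be sketched as a weighted sum. The weights are from this doc; the code assumes each component score is pre-normalized to [0, 1] and does not show how the components themselves are computed:

```python
WEIGHTS = {"semantic": 0.55, "engagement": 0.25, "recency": 0.15, "diversity": 0.05}

def hybrid_score(components: dict[str, float]) -> float:
    """Weighted blend of per-example component scores; missing components count as 0."""
    return sum(WEIGHTS[k] * components.get(k, 0.0) for k in WEIGHTS)

def top_k(candidates: list[dict], k: int = 5) -> list[dict]:
    """Select the k highest-scoring few-shot examples."""
    return sorted(candidates, key=lambda c: hybrid_score(c["scores"]), reverse=True)[:k]
```

Semantic similarity dominates at 0.55, so an on-topic archive example generally beats a high-engagement off-topic one.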
What’s NOT in the repo (but skills depend on)
| Asset | Where it lives | Why not in git |
|---|---|---|
| Voice profile (canonical runtime data) | Firestore /voice_profile/current + 8 platform subprofiles | Live read; updates without redeploy |
| Skill few-shot pools (~324 MB JSON) | gs://ezra-processed/skill_pools/ + local cache data/skill_pools/ | Too heavy for git; mtime-synced on demand |
| Curated examples (accepted drafts) | data/skill_pools/{skill_id}.curated.json + GCS | Same |
| Feedback events (accepts/edits/rejects) | Firestore /skill_feedback/{skill_id}/events | Live event log |
| API keys | .env (gitignored) | Never commit secrets |
| GCP service account JSON | service-account-*.json (gitignored) | Same |
The repo carries everything needed to compile a fresh skills environment on a new machine; heavy/sensitive assets live in managed cloud storage.
Setup on a new machine
# 1. Clone repo
git clone https://github.com/bradyshearer/content-engine
cd content-engine
# 2. Set up Python venv + install deps
python3.14 -m venv .venv
.venv/bin/pip install -e .
# 3. Drop secrets into .env (copy from password manager)
# Required keys: ANTHROPIC_API_KEY, VOYAGE_API_KEY, DEEPGRAM_API_KEY,
# GCP_PROJECT_ID=ezra-488314, GOOGLE_APPLICATION_CREDENTIALS=<path-to-sa.json>
# 4. Generate SKILL.md manifests + symlink to ~/.claude/skills/
.venv/bin/python scripts/install_skill_md.py
# 5. Pull skill pools from GCS (~324 MB, takes 1-2 min)
.venv/bin/python scripts/sync_skill_pools.py --pull

After step 4, Claude Code shows the 12 brady-* skills. After step 5, they can actually run. Voice profile, feedback events, and curated examples sync live from Firestore + GCS — no extra setup.
Updating a skill
Two paths, both conversational from Claude Code:
Permanent prompt change
“Always avoid em dashes in newsletter intros.”
Claude edits skills/newsletter_intro/prompt.py, runs the test suite, commits, and pushes. Next invocation on any machine picks up the change after git pull.
Curated pool rebuild (after accumulating feedback)
“Absorb my recent feedback.”
Triggers scripts/rebuild_skills.py. The script:
- Reads accept/edit events from Firestore /skill_feedback/{skill_id}/events
- Wraps each accepted draft as a new few-shot example (max engagement weight so it ranks first)
- Re-embeds via Voyage-3-large
- Writes data/skill_pools/{skill_id}.curated.json
- Pushes to gs://ezra-processed/skill_pools/
The runner loads .curated.json before the auto-built pool, so accepted drafts outrank archive examples. After ~10 accepts per skill the pool sharpens noticeably. No cron yet — runs on demand.
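The wrapping step above can be sketched like this. The field names are assumptions consistent with this doc; the max-engagement trick (so curated examples outrank archive examples) is the documented behavior:

```python
MAX_ENGAGEMENT = 1.0   # top of the normalized engagement scale (assumption)

def to_curated_example(event: dict) -> dict:
    """Convert a Firestore accept event into a curated few-shot pool entry."""
    return {
        "text": event["output"],
        "topic": event.get("topic", ""),
        "engagement": MAX_ENGAGEMENT,   # forces it above auto-built pool entries
        "source": "curated",
    }
```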
Feedback loop (what each verdict does)
| You say | What runs | Where it goes |
|---|---|---|
| ”Save it” / “Perfect” / “Use this” | record_feedback.py {skill} accept | Firestore event with verdict=accept, output text, topic |
| ”Reject — too generic” | record_feedback.py {skill} reject --reason ... | Firestore event with verdict=reject, reason |
| ”Shorter, punchier” (this draft only) | Re-runs the skill with constraint appended to topic | No feedback recorded — draft-specific edit |
| ”Always avoid em dashes” (permanent) | Edits skills/{skill}/prompt.py, commits, pushes | Code change |
| ”Absorb my recent feedback” | rebuild_skills.py --skill {skill} | Curated pool rebuild |
Feedback is async + cross-machine — accepts on the MBP show up on the Mac Mini’s next invocation via the GCS curated pool.
Cross-machine sync architecture
| Layer | Storage | Sync mechanism |
|---|---|---|
| Skill code (Python) | git | git pull on each skill run |
| SKILL.md manifests | git | Same |
| ~/.claude/skills/ symlinks | Created by install_skill_md.py | One-time per machine |
| Voice profile | Firestore | Live read, no sync |
| Skill few-shot pools | GCS | mtime-compare on each load, auto-pull if remote newer |
| Curated examples | GCS | Same |
| Feedback events | Firestore | Live read |
| API keys / SA credentials | Local .env + JSON | Manual per machine |
Changes propagate machine-to-machine via the storage layer — no sync UI, no manual export.
Naming convention reference
| Layer | Style | Example |
|---|---|---|
| Python module | snake_case | skills/newsletter_intro/ |
| SKILL.md directory | kebab-case with brady- prefix | anthropic_skills/brady-newsletter-intro/ |
| ~/.claude/skills/ symlink | Same as manifest | brady-newsletter-intro |
| Skill ID (used in CLI args, Firestore docs) | kebab-case, no prefix | newsletter-intro |
| Pool name (data/skill_pools/) | kebab-case, no prefix | newsletter-intro.json |
| Conversational invocation | Plain English | ”Brady newsletter intro about X” |
The SKILL_DIR_MAP dict in scripts/install_skill_md.py maps between kebab-case skill IDs and snake_case directory names.
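For these 12 skills the mapping is a mechanical dash/underscore swap, so it could be derived rather than hand-written. The skill IDs below are from the table at the top of this doc; whether the real `SKILL_DIR_MAP` is derived or literal is an assumption:

```python
SKILL_IDS = [
    "newsletter-intro", "newsletter-outro", "youtube-title", "youtube-script",
    "podcast-show-notes", "ig-caption", "ig-carousel", "ig-carousel-opener",
    "ig-reel-script", "blog-intro", "x-single", "facebook-post",
]

# kebab-case skill ID -> snake_case Python module directory
SKILL_DIR_MAP = {sid: sid.replace("-", "_") for sid in SKILL_IDS}
```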
Adding a new skill
If the data exists for it (target platform has Brady chunks in Firestore):
- Create skills/{new_skill_id}/ directory with __init__.py, config.py (SkillConfig), prompt.py (SYSTEM_PROMPT)
- Add the skill ID to scripts/build_skill_pools.py config — generates the few-shot pool from the archive
- Run .venv/bin/python scripts/build_skill_pools.py --skill {new_skill_id} to build the pool
- Run .venv/bin/python scripts/sync_skill_pools.py --push to upload to GCS
- Add the skill description to SKILL_DESCRIPTIONS and SKILL_DIR_MAP in scripts/install_skill_md.py
- Run .venv/bin/python scripts/install_skill_md.py to generate SKILL.md + symlink
- Commit everything to git, push
If the platform doesn’t have a voice subprofile yet, run python -m processors.voice --platform {new_platform} first.
Known limitations
- Voice profile recency. v0.1.3 (2026-05-01) is current. Re-extract quarterly or after major content shifts. Use python -m processors.voice for the merged profile, --platform {p} for subprofiles. Cost ≈ $1 for full re-extract.
- Skill pool size. data/skill_pools/ is ~324 MB. Excluded from git via .gitignore. Synced via GCS instead. New machines need scripts/sync_skill_pools.py --pull before first skill run.
- Newsletter / Blog / Facebook signature-phrase contamination. Auto-injected templates (beehiiv legal footer, Squarespace post-listing, FB ad CTA boilerplate) bleed into top n-grams and signature phrases for those platforms. Fixable at the ingester layer (strip platform templates before chunking) — tracked as follow-up. Skill prompts get filtered data via runtime cleanup in processors/voice/clean.py, so output quality isn't affected.
- No automated rebuild cron. rebuild_skills.py runs on demand only. Set a weekly cron once daily use is consistent.
Related
- README — Higher-level Content Brain overview (philosophy, ingestion, skills, feedback as one system)
- voice-profile — Canonical voice profile (n-grams, rhetorical moves, frameworks, tonal rules, forbidden moves)
- brand-voice — Brand voice doc with skill-platform mapping and “what to do without repo access” fallback
- README — Generic explainer for the company-os /skills/ folder (different concept — workflow skills in markdown, not code-driven)
- bradyshearer/content-engine/SETUP.md — Setup instructions for new machines (more detail than the summary above)
- bradyshearer/content-engine/README.md — Repo overview