# knobs

🎛️ A modular control system for AI-generated nonfiction books

Created by Travis Eric

`knobs` is a strategic control layer for nonfiction book generation. It injects intelligence, depth, and variation into AI-generated content — all while staying deterministic, parallel-safe, and fully phase-aware.

Rather than hardcoding one prompt format, `knobs` dynamically adjusts each subtopic's depth, specificity, structure, and novelty using modular control switches.
These switches — or knobs — are controlled from a single YAML config file and determine:
- Whether a subtopic should be contrarian or aligned
- Whether it needs a proof point or expert quote
- Whether it should include troubleshooting or failure-mode coverage
- Whether it should be scaled deeper based on phase
- Whether to inject unexpected cross-domain examples
- `build_knobs(...)`: decides which knobs to activate for each subtopic
- `get_prompt_snippet(...)`: generates prompt fragments based on those knobs
- `depth_multiplier`: scales token count and complexity by chapter phase
- `wildcard_domain`: surprises the model with analogies from wild domains
- All knobs are stateless, reproducible, and independent per subtopic
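As a rough sketch of how these pieces could fit together: the following is a hypothetical reconstruction (the real logic lives in `knobs.py`), assuming `build_knobs` seeds a PRNG from the subtopic ID so the same subtopic always gets the same knobs, regardless of generation order. The config keys mirror the `knobs.yml` example in this README.

```python
import hashlib
import random

# Hypothetical sketch of build_knobs; field names follow the example
# knobset shown later in this README, but the implementation is illustrative.
def build_knobs(subtopic_id: str, phase: str, config: dict) -> dict:
    # Deterministic seed: the same subtopic always yields the same knobs,
    # independent of when or where it is generated (stateless, parallel-safe).
    seed = int(hashlib.sha256(subtopic_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)

    return {
        "angle": "contrarian" if rng.random() < config["angle_prob"] else "aligned",
        "need_proof": rng.random() < config["need_proof_prob"],
        "wildcard_domain": rng.choice(config["wildcard_domains"]),
        "depth_multiplier": config["depth_by_phase"][phase],
    }

config = {
    "angle_prob": 0.05,
    "need_proof_prob": 0.10,
    "wildcard_domains": ["negotiation", "parenting", "jazz improvisation"],
    "depth_by_phase": {"foundation": 1.0, "development": 1.18, "mastery": 1.77},
}

knobs = build_knobs("ch3-subtopic-7", "mastery", config)
assert knobs == build_knobs("ch3-subtopic-7", "mastery", config)  # reproducible
```

The seeded-hash trick is what makes "deterministic randomness" possible: variation across subtopics, zero variation across runs.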
Traditional AI generators work linearly:
“Write an outline. Then write a chapter. Then revise.”
`knobs` is different.
I built it like a system — not a sequence.
- Subtopics don’t need memory of what came before.
- Direction emerges from structure, not serialization.
- Surprise, depth, and precision are shaped per subtopic using deterministic randomness.
- It’s designed to run in parallel across 1,000 pieces — and still sound like a coherent book.
Every subtopic is independently intelligent, but globally aligned.
- AI nonfiction book generators (e.g. Teneo)
- Phase-aware educational content
- Longform idea distillation
- Multi-threaded prompt workflows
It helps inject:
- Strategy
- Variety
- Realism
- Surprise
- Abstraction layering
All with one config file and a prompt adapter.
```json
{
  "angle": "contrarian",
  "need_proof": true,
  "troubleshooting_on": false,
  "strategic_on": true,
  "obstacle_on": false,
  "wildcard_domain": "negotiation",
  "depth_multiplier": 1.77
}
```
This knobset would trigger:
- A strategic lens
- A counter-argument framing
- An example from negotiation
- Longer, deeper content output
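A minimal sketch of how `get_prompt_snippet(...)` could translate that knobset into prompt fragments. The snippet wording here is illustrative; the real guidance blocks live in `prompt_snippets.md`.

```python
# Hypothetical sketch of get_prompt_snippet: each active knob contributes
# one guidance fragment to the subtopic's prompt.
def get_prompt_snippet(knobs: dict) -> str:
    parts = []
    if knobs.get("angle") == "contrarian":
        parts.append("Frame this subtopic as a counter-argument to conventional advice.")
    if knobs.get("need_proof"):
        parts.append("Support the main claim with a proof point or expert quote.")
    if knobs.get("strategic_on"):
        parts.append("Apply a strategic lens: trade-offs, leverage, second-order effects.")
    if knobs.get("troubleshooting_on"):
        parts.append("Include a troubleshooting section covering common failure modes.")
    if domain := knobs.get("wildcard_domain"):
        parts.append(f"Work in an unexpected analogy from {domain}.")
    return "\n".join(parts)

knobset = {
    "angle": "contrarian",
    "need_proof": True,
    "troubleshooting_on": False,
    "strategic_on": True,
    "obstacle_on": False,
    "wildcard_domain": "negotiation",
    "depth_multiplier": 1.77,
}
print(get_prompt_snippet(knobset))
```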
## ⚙️ Config via `knobs.yml`

```yaml
angle_prob: 0.05
need_proof_prob: 0.10
wildcard_domains:
  - negotiation
  - parenting
  - jazz improvisation
  - museum curation
  - esports coaching
depth_by_phase:
  foundation: 1.0
  development: 1.18
  mastery: 1.77
```
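As a sketch of what `depth_by_phase` does in practice, assuming the multiplier scales a base per-subtopic token budget (the 600-token base is a made-up number for illustration; loading the real file would use `yaml.safe_load` from PyYAML, so the parsed config is inlined here to keep the example self-contained):

```python
# Parsed equivalent of the depth_by_phase section of knobs.yml.
config = {
    "depth_by_phase": {"foundation": 1.0, "development": 1.18, "mastery": 1.77},
}

BASE_TOKENS = 600  # illustrative per-subtopic budget, not a real default

def token_budget(phase: str, config: dict) -> int:
    # Later phases get proportionally longer, deeper output.
    return round(BASE_TOKENS * config["depth_by_phase"][phase])

for phase in ("foundation", "development", "mastery"):
    print(phase, token_budget(phase, config))
# foundation 600, development 708, mastery 1062
```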
Change the knobs → change how your content feels.
## 📁 Files

| File | Purpose |
| --- | --- |
| `knobs.py` | Core logic (`build_knobs`, `get_prompt_snippet`) |
| `knobs.yml` | Config file for probabilities and domains |
| `prompt_snippets.md` | All guidance blocks knobs can inject |
| `example_knob_output.json` | Sample knobsets for analysis |
## 🧠 System Thinking vs. Linear Generation
Most AI writing systems are linear. They move from start to finish like an author: outline → intro → body → polish.
I didn’t build `knobs` that way.
This framework is parallel-safe, prompt-independent, and depth-scalable.
Every subtopic is treated like a node in a distributed system — not a line in a narrative.
That’s why this works at scale. That’s why it’s flexible. That’s why it feels intentional, not templated.
You can’t build emergent strategy with linear prompting.
You need knobs.
## 🛠 Integration

Use `knobs` with any generator that:

- Builds structured prompts for subtopics
- Accepts per-subtopic guidance
- Supports token scaling or modular prompt logic

Ideal for:

- Longform AI generation
- Instructional books
- Modular course builders
- Content automation systems
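A hypothetical end-to-end integration, with simplified stand-ins for `build_knobs`, the snippet adapter, and the model call (none of this is the real `knobs.py` API). The point it illustrates: because knobs are stateless and seeded, subtopics can render in parallel, in any order, and still come out identical run-to-run.

```python
import hashlib
import random
from concurrent.futures import ThreadPoolExecutor

# Simplified stand-in for build_knobs: deterministic per-subtopic choices.
def build_knobs(subtopic_id: str, config: dict) -> dict:
    rng = random.Random(int(hashlib.sha256(subtopic_id.encode()).hexdigest(), 16))
    return {
        "wildcard_domain": rng.choice(config["wildcard_domains"]),
        "depth_multiplier": config["depth"],
    }

def generate_section(prompt: str, max_tokens: int) -> str:
    # Stand-in for your actual model call.
    return f"<{max_tokens} tokens> {prompt}"

def render(subtopic_id: str, config: dict) -> str:
    knobs = build_knobs(subtopic_id, config)
    prompt = (f"Write about {subtopic_id}. "
              f"Use an analogy from {knobs['wildcard_domain']}.")
    budget = round(600 * knobs["depth_multiplier"])  # illustrative base budget
    return generate_section(prompt, budget)

config = {"wildcard_domains": ["negotiation", "parenting"], "depth": 1.18}
subtopics = [f"subtopic-{i}" for i in range(4)]

# Stateless knobs mean no shared state between workers:
with ThreadPoolExecutor() as pool:
    sections = list(pool.map(lambda s: render(s, config), subtopics))
```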
## 📘 License
MIT — fork it, remix it, build with it.
If you make something cool with it, link back or say hi.
Built by Travis Eric
traviseric.com · @traviseric_