Replace 2-4 hours of manual weekly assembly with a 15-minute automated review
Most weekly reviews are assembled from memory, not data. The tooling gap leads to drift, missed signals, and repeated mistakes.
A single CLI entrypoint routes to over a dozen capabilities. No GUI. No SaaS dependency. Shell scripts, Python, and markdown.
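A single-entrypoint design typically means one command routing to subcommands. Here is a minimal sketch in Python's argparse; the `wbr` program name, subcommand names, and flags are hypothetical illustrations, not the tool's actual interface:

```python
#!/usr/bin/env python3
"""Sketch of a single CLI entrypoint routing to subcommands (names are illustrative)."""
import argparse
import sys

def cmd_daily(args):
    # Placeholder for capturing today's structured note.
    print(f"capturing daily note for {args.date}")

def cmd_generate(args):
    # Placeholder for assembling the weekly WBR from notes and metrics.
    print("generating WBR from this week's notes and metrics")

def main():
    parser = argparse.ArgumentParser(prog="wbr")
    sub = parser.add_subparsers(dest="command", required=True)

    daily = sub.add_parser("daily", help="capture a daily note")
    daily.add_argument("--date", default="today")
    daily.set_defaults(func=cmd_daily)

    gen = sub.add_parser("generate", help="assemble the weekly WBR")
    gen.set_defaults(func=cmd_generate)

    args = parser.parse_args()
    args.func(args)

if __name__ == "__main__":
    sys.exit(main())
```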
Structured templates with 13 sections: highlights, lowlights, risks, issues, challenges, observations, metrics, and more. 2-3 minutes per day. Friday aggregation pulls the whole week together automatically.
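One way the Friday aggregation could work, assuming daily notes live as dated markdown files in a `notes/` directory (the paths and header format are assumptions for illustration):

```python
from pathlib import Path
from datetime import date, timedelta

def aggregate_week(notes_dir: Path, week_ending: date) -> str:
    """Concatenate Monday-Friday daily notes into one weekly draft."""
    monday = week_ending - timedelta(days=week_ending.weekday())
    parts = [f"# Week ending {week_ending.isoformat()}"]
    for offset in range(5):
        day = monday + timedelta(days=offset)
        note = notes_dir / f"{day.isoformat()}.md"
        if note.exists():
            parts.append(f"## {day.strftime('%A')}\n{note.read_text().strip()}")
    return "\n\n".join(parts)

print(aggregate_week(Path("notes"), date(2025, 1, 10)))
```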
Automated collection across your platforms: analytics, social, content, Git repos, and more. Configurable per customer. JSON schema validation, markdown rendering, and formatted handoff for WBR generation.
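Validating collected metrics against a JSON schema could look like the sketch below, using the `jsonschema` library; the schema and field names are invented examples, not the tool's actual schema:

```python
from jsonschema import validate, ValidationError  # pip install jsonschema

# Hypothetical schema for one collected metric (field names are illustrative).
METRIC_SCHEMA = {
    "type": "object",
    "required": ["name", "value", "source", "week"],
    "properties": {
        "name":   {"type": "string"},
        "value":  {"type": "number"},
        "source": {"type": "string"},
        "week":   {"type": "string", "pattern": r"^\d{4}-W\d{2}$"},
    },
}

record = {"name": "newsletter_subscribers", "value": 1204,
          "source": "analytics", "week": "2025-W02"}
try:
    validate(instance=record, schema=METRIC_SCHEMA)
    print("record is valid")
except ValidationError as err:
    print(f"rejected: {err.message}")
```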
Converts WBR markdown to PDF and DOCX. 2x2 executive summary on page 1 with status indicators for every metric. Skips regeneration when output is current.
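Skipping regeneration when output is current is essentially a make-style timestamp check. A minimal version, with placeholder file names:

```python
from pathlib import Path

def needs_regeneration(source: Path, output: Path) -> bool:
    """Rebuild only if the output is missing or older than the source."""
    if not output.exists():
        return True
    return source.stat().st_mtime > output.stat().st_mtime

if needs_regeneration(Path("wbr.md"), Path("wbr.pdf")):
    print("regenerating PDF")
else:
    print("output is current, skipping")
```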
One flag redacts person names, company names, and sensitive references for sharing. Allowlists, CamelCase detection, and markdown-aware parsing preserve formatting while masking content.
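A rough illustration of allowlist-plus-CamelCase masking. The real engine is also markdown-aware, which this sketch is not, and the allowlist entries are invented:

```python
import re

ALLOWLIST = {"GitHub", "JavaScript", "DevOps"}  # terms never masked (examples)

# Two or more capitalized runs joined together, e.g. "AcmeCorp".
CAMEL_CASE = re.compile(r"\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\b")

def mask(text: str) -> str:
    """Replace CamelCase tokens (likely names/companies) unless allowlisted."""
    def repl(match: re.Match) -> str:
        token = match.group(0)
        return token if token in ALLOWLIST else "[REDACTED]"
    return CAMEL_CASE.sub(repl, text)

print(mask("Met with AcmeCorp about the GitHub integration."))
# -> Met with [REDACTED] about the GitHub integration.
```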
Every weekly WBR gets frozen with SHA256 checksums, read-only permissions, and revision tagging. The complete record: WBR markdown, metrics handoff, and validated JSON.
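Freezing a weekly record could look like this: hash each artifact, write a checksum manifest, then drop write permission. The file layout is an assumption for illustration:

```python
import hashlib
import os
from pathlib import Path

def freeze(artifacts: list[Path], manifest: Path) -> None:
    """Checksum each artifact, record the digests, then make everything read-only."""
    lines = []
    for path in artifacts:
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        lines.append(f"{digest}  {path.name}")
    manifest.write_text("\n".join(lines) + "\n")
    for path in [*artifacts, manifest]:
        os.chmod(path, 0o444)  # r--r--r--

freeze([Path("wbr.md"), Path("handoff.md"), Path("metrics.json")],
       Path("SHA256SUMS"))
```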
Side-by-side delta across four dimensions: metadata, metrics (with percent change), content (lines added/removed), and sources. See what changed in the full picture between any two weeks.
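The metrics dimension of the delta reduces to percent change per metric. A sketch, where the dictionaries stand in for two weeks' validated JSON:

```python
def metric_deltas(prev: dict[str, float], curr: dict[str, float]) -> dict[str, str]:
    """Week-over-week percent change for every metric present in both weeks."""
    deltas = {}
    for name in sorted(prev.keys() & curr.keys()):
        if prev[name] == 0:
            deltas[name] = "n/a (prior week was 0)"
        else:
            pct = (curr[name] - prev[name]) / prev[name] * 100
            deltas[name] = f"{pct:+.1f}%"
    return deltas

print(metric_deltas({"subscribers": 1150, "posts": 4},
                    {"subscribers": 1204, "posts": 3}))
# {'posts': '-25.0%', 'subscribers': '+4.7%'}
```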
A WBR is not just a tool. It is the complete system: challenge, outcomes, tools, adoption, inspection, inputs, and iteration.
Running a business without systematic weekly reflection leads to drift: missed signals, repeated mistakes, decisions based on memory instead of data.
WBR generation in under 15 minutes. Elimination of manual transcription errors. Week-over-week trend visibility without manual lookback. Context captured daily, not reconstructed Friday night.
A single CLI entrypoint with sensible defaults, or a pure-prompt workflow for non-terminal users. Daily notes take 2 minutes. The Friday workflow is one command to start, two prompts to complete, one command to finish.
Snapshot comparison surfaces what changed week-over-week: metrics, content, RICO items (risks, issues, challenges, observations), and data quality. The system computes the deltas; you interpret them and decide what to act on.
2 minutes per day capturing daily notes. Platform credentials for automated metrics collection. A WBR template defining metric targets and monthly reach ramp.
Metric catalog, pluggable ingestion, time comparisons, anomaly detection. Each phase tightens the feedback loop. The system keeps getting better because you use it every week.
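Pluggable ingestion usually means a registry of source adapters behind one interface. A minimal hypothetical version; the source names and stand-in fetchers are assumptions:

```python
from typing import Callable

# Registry mapping source names to fetch functions (all names illustrative).
SOURCES: dict[str, Callable[[], dict[str, float]]] = {}

def source(name: str):
    """Decorator registering a metrics fetcher under a source name."""
    def register(fn: Callable[[], dict[str, float]]):
        SOURCES[name] = fn
        return fn
    return register

@source("analytics")
def fetch_analytics() -> dict[str, float]:
    return {"page_views": 5200}  # stand-in for a real API call

@source("git")
def fetch_git() -> dict[str, float]:
    return {"commits": 31}       # stand-in for parsing repo history

def collect_all() -> dict[str, float]:
    merged = {}
    for fetch in SOURCES.values():
        merged.update(fetch())
    return merged

print(collect_all())  # {'page_views': 5200, 'commits': 31}
```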
Results from running this system every week for months
A 10-page executive report with a 2x2 summary on page 1 and nine detailed appendices
Four quadrants covering everything that matters:
Nine sections of operational detail:
22 requirements in EARS format, each with acceptance criteria, traceability, and a dependency graph. Built with Claude Code, Codex, and Kiro.
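EARS (Easy Approach to Requirements Syntax) constrains every requirement to a small set of templates, such as "When &lt;trigger&gt;, the &lt;system&gt; shall &lt;response&gt;". An invented example in that shape, not one of the actual 22 requirements:

```
REQ-07 (event-driven): When the user passes the masking flag, the system
shall redact all person and company names not on the allowlist.
Acceptance: given a note containing a non-allowlisted company name, the
rendered output contains "[REDACTED]" and no occurrence of that name.
```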
EARS-format specs with unambiguous acceptance criteria produced a masking-engine implementation that passed 11 of 12 criteria on the first generation.
Paul Duvall has built software solutions and systems for 30+ years, with deep expertise in cloud and DevSecOps for more than 15 years. He led large-scale engineering and security programs at AWS and co-founded Stelligent, the first company focused exclusively on Continuous Delivery/DevOps on AWS. He has been building applications with AI-assisted development tools since early 2023.
The WBR system described on this page is informed by his experience as a Director at AWS, where Weekly Business Reviews are a core operational mechanism. He built this toolchain for his own business and uses it every Friday. The architecture, the metric design, and the adoption plan are the same things he helps customers implement.
Whether you want help designing the architecture, defining the metrics, or building the adoption plan for your business, get in touch.