Show HN: Promptproof – GitHub Action to test LLM prompts, catch bad JSON schemas

We kept breaking production with small prompt edits: suddenly outputs weren't valid JSON, fields disappeared, or formats changed silently.

So we built Promptproof, a GitHub Action that runs in CI and blocks PRs when prompts produce invalid outputs.
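
For anyone wondering what the wiring looks like, a minimal workflow sketch is below. The version tag and the "config" input are assumptions for illustration; check the repo README for the action's real inputs.

    # .github/workflows/promptproof.yml -- illustrative sketch only
    name: promptproof
    on: pull_request
    jobs:
      check:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: geminimir/promptproof-action@v1  # version tag is an assumption
            with:
              config: promptproof.yml              # hypothetical input name

Any step that exits nonzero fails the job, which is what blocks the PR.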

Features:

- Validates JSON output

- Enforces required keys & schemas (sketched after this list)

- Runs fast in CI (no external infra)

- Works with OpenAI, Anthropic, and local models

- Adds PR comments so reviewers see failures immediately
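
To make the schema checks concrete, here is a minimal sketch of the kind of gate a CI step like this can run, written with Python's jsonschema library. It illustrates the general technique, not Promptproof's actual implementation; the schema and field names are made up.

    # check_output.py -- illustrative sketch, not Promptproof's code
    import json
    import sys

    from jsonschema import ValidationError, validate

    # Example schema: the keys a prompt is expected to always produce.
    SCHEMA = {
        "type": "object",
        "required": ["summary", "sentiment"],
        "properties": {
            "summary": {"type": "string"},
            "sentiment": {"enum": ["positive", "neutral", "negative"]},
        },
    }

    def check(raw: str) -> None:
        try:
            data = json.loads(raw)  # catches "output isn't valid JSON"
        except json.JSONDecodeError as err:
            sys.exit(f"invalid JSON: {err}")
        try:
            validate(instance=data, schema=SCHEMA)  # catches missing or retyped fields
        except ValidationError as err:
            sys.exit(f"schema violation: {err.message}")

    if __name__ == "__main__":
        check(sys.stdin.read())

sys.exit with a string prints the message to stderr and exits with status 1, so a failed validation fails the CI step.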

We’d love feedback: which rules or integrations would make this most useful for you?

Repo: https://github.com/geminimir/promptproof-action