  • hyvarjus 9 hours ago

    Hi HN,

    I built this initially for my personal use because I found most AI-generated content to be untrustworthy. LLMs are great at sounding confident but not so good at being factual.

    Instead of just wrapping LLMs for speed, I focused on accuracy. So I built ProofWrite, which uses a multi-step agentic pipeline:

    1. Deep research: It crawls live data first to gather information like specs, pricing, and "trust signals" (official data, reviews, citations).
    2. Drafting: It writes content heavily constrained by that evidence.
    3. The "Audit" layer: It runs a self-verification pass on its own output.
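
    Roughly, the orchestration looks like this (a simplified TypeScript sketch; the names and stub bodies are mine for illustration, not ProofWrite's actual internals):

      // Hypothetical sketch of the three-step pipeline.
      interface Evidence {
        claim: string;     // the fact a source supports
        sourceUrl: string; // where it was found
      }

      // Step 1: deep research -- gather live evidence (specs, pricing, trust signals).
      async function research(topic: string): Promise<Evidence[]> {
        // The real tool crawls live data; this stub just returns a placeholder.
        return [{ claim: `${topic} has a free tier`, sourceUrl: "https://example.com" }];
      }

      // Step 2: drafting -- write content constrained to the gathered evidence.
      async function draftArticle(evidence: Evidence[]): Promise<string> {
        return evidence.map((e) => `${e.claim} [${e.sourceUrl}]`).join("\n");
      }

      // Step 3: audit -- a self-verification pass that flags unsupported lines.
      async function audit(article: string, evidence: Evidence[]): Promise<string[]> {
        const claims = evidence.map((e) => e.claim);
        return article.split("\n").filter((line) => !claims.some((c) => line.includes(c)));
      }

      async function run(topic: string) {
        const evidence = await research(topic);
        const article = await draftArticle(evidence);
        const flagged = await audit(article, evidence);
        return { article, flagged }; // flagged lines feed the Fact Check UI
      }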

    The editor includes a built-in Fact Check that assigns a verdict to every claim:

    * Cleared: supported by evidence gathered by the research pipeline, or verified by the user.
    * Needs attention: claims that still need to be resolved.
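
    In code terms, a verdict is basically a small discriminated union (a hypothetical data model, just to illustrate):

      // Hypothetical per-claim verdict model.
      type Verdict =
        | { status: "cleared"; reason: "evidence" | "user-verified"; sourceUrl?: string }
        | { status: "needs-attention" };

      interface Claim {
        text: string;
        verdict: Verdict;
      }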

    You can then quickly fix unverified claims with one-click actions (verify, add source URL, or rewrite).
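
    Those actions map onto simple state transitions, something like this (again illustrative, reusing the Claim/Verdict types from the sketch above):

      // Hypothetical reducer for the three one-click actions.
      type FixAction =
        | { kind: "verify" }                    // user attests the claim is true
        | { kind: "addSource"; url: string }    // attach a supporting URL
        | { kind: "rewrite"; newText: string }; // replace with rewritten phrasing

      function applyFix(claim: Claim, action: FixAction): Claim {
        switch (action.kind) {
          case "verify":
            return { ...claim, verdict: { status: "cleared", reason: "user-verified" } };
          case "addSource":
            return { ...claim, verdict: { status: "cleared", reason: "evidence", sourceUrl: action.url } };
          case "rewrite":
            // A rewritten claim goes back through verification.
            return { text: action.newText, verdict: { status: "needs-attention" } };
        }
      }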

    The goal is to enforce quality and accuracy in AI-written content and significantly reduce hallucinations. You can use it to write how-to articles, reviews, comparisons, listicles, etc.

    Stack is Next.js + a mix of models (Haiku 4.5/Sonnet 4.5/Opus 4.5/Gemini 3/GPT 5.x).

    Happy to answer questions about the tool!