40 comments

  • dang 3 days ago

    > happy to run additional documents if people want to share examples

    I've got one! The pdf of this out-of-print book is terrible: https://archive.org/details/oneononeconversa0000simo. The text is unreadably faint, and the underlying text layer is full of errors, so copy-paste is almost useless. Can your software extract usable text?

    (I'll email you a copy of the pdf for convenience since the internet archive's copy is behind their notorious lending wall)

    • ritvikpandey21 3 days ago

      Results look pretty good (with the exception of one very faint page) - check it out here! https://platform.runpulse.com/dashboard/extractions/public/f...

      • dang 3 days ago

        Thanks!

        If anyone is interested in the history of the family therapy movement—that is, the movement that started in the 1950s where psychotherapists started working with entire families rather than individual clients—this is a great book of interviews and incredibly readable.

        From the chapter above, Jay Haley on Milton Erickson:

        But, you know, the real tragedy with Erickson was he spent so much time over the years teaching hypnosis when he had a whole new school of thera- py to offer. People did not recognize the significance of his work until he was too old to really demon- Strate it

        (I left in a couple of text glitches there...at least it's readable now!)

  • bambax 2 days ago

    OCR is fascinating; I did some experiments with OCR on an ancient French book, which made it to HN last year:

    https://news.ycombinator.com/item?id=42443022

    I found that at the time no LLM was able to properly organize the text and understand the footnote structure, but non-AI OCR works very well, and restructuring (with some manual input) is largely feasible. Would be interested in what you can do with those footnotes (including, for good measure, footnotes-within-footnotes).

    Regarding feeding text to LLMs: they often seem able to make sense of a document when the layout follows the original, which means the OCR phase doesn't necessarily need to understand the structure of the source; rendering the text in a layout that mirrors the page can be sufficient.
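
    To make it concrete, here's a minimal sketch of that rendering step built on pytesseract's word boxes (the function name and grid-cell size are illustrative, not what my service used):

        import pytesseract
        from PIL import Image
        from pytesseract import Output

        def render_layout(path, cell=8):
            # Re-draw OCR'd words on a character grid so columns and footnotes
            # keep roughly their positions on the page. Crude assumption:
            # one grid cell is about one character wide.
            data = pytesseract.image_to_data(Image.open(path), output_type=Output.DICT)
            rows = {}
            for word, left, top, conf in zip(data["text"], data["left"],
                                             data["top"], data["conf"]):
                if not word.strip() or float(conf) < 0:
                    continue  # skip empty cells and non-word rows
                rows.setdefault(top // cell, {})[left // cell] = word
            lines = []
            for y in sorted(rows):
                line = ""
                for x in sorted(rows[y]):
                    line += " " * max(1, x - len(line)) + rows[y][x]
                lines.append(line)
            return "\n".join(lines)

    The point is only that the LLM sees whitespace that mirrors the page; the exact quantization doesn't matter much.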

    I worked on setting up a service that would do just that but in the end didn't go live with it; here's the examples page to show what I mean:

    https://preview.adgent.com/#examples

    This approach is very straightforward and fails rarely.

  • think4coffee 3 days ago

    Congrats on the launch! You mention that you're SOTA on benchmarks. Can you share your research, or share which benchmark you used?

    • ritvikpandey21 3 days ago

      thanks! we benchmark against all the major players (azure doc intelligence, aws textract, google doc ai, frontier llms, etc). we have some public news coming out soon on this front. in the meantime: we use a very rigorous dataset built from both public and synthetic data, focusing on the hardest problems in the space (handwriting, tables, etc).
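
      (for the curious: the core metric behind most ocr benchmarks is character error rate. a minimal sketch below - this is just the textbook formula, not our actual harness:)

          def cer(ref: str, hyp: str) -> float:
              # character error rate = levenshtein edit distance / reference length
              prev = list(range(len(hyp) + 1))
              for i, r in enumerate(ref, 1):
                  cur = [i]
                  for j, h in enumerate(hyp, 1):
                      cur.append(min(prev[j] + 1,              # deletion
                                     cur[j - 1] + 1,           # insertion
                                     prev[j - 1] + (r != h)))  # substitution
                  prev = cur
              return prev[-1] / max(1, len(ref))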

  • lajr 3 days ago

    Hey, congratulations on the launch. Just noticed a discrepancy in the financial 10K example:

    There is a section near the start where there are 4 options: Large accelerated filer, Non-accelerated filer, Accelerated filer, or Smaller reporting company.

    Of these options, "Large accelerated filer" is checked in the PDF, but "Non-accelerated filer" is checked in the Markdown.

    • ritvikpandey21 3 days ago

      thanks for the flag! have pointed this out; will be pushing an update here shortly

  • Ishirv 3 days ago

    Super interesting stuff. I’m a fan - been a Pulse customer for a while. However, I’ve found it has trouble with things that need intelligence, like quotation marks that mean "repeat the previous line." Is that something you’re working on, or is that not the right use case for Pulse?

  • scottydelta 3 days ago

    AI models will eventually do this natively. This is one of the ways models will continue to get better: better OCR and better context extraction.

    I am already seeing this trend in the recent releases of the native models (such as Opus 4.5, Gemini 3, and especially Gemini 3 flash).

    It's only going to get better from here.

    Another thing to note: if I remember correctly, there are over five startups in the YC portfolio right now doing the same thing and going after a similar/overlapping target market.

    • ritvikpandey21 3 days ago

      yeah models are definitely improving, but we've found even the latest ones still hallucinate and infer text rather than doing pure transcription. we carry out very rigorous benchmarks against all of the frontier models. we think the differentiation is in accuracy on truly messy docs (nested tables, degraded scans, handwriting) and being able to deploy on-prem/vpc for regulated industries.

      • scottydelta 3 days ago

        I agree with the second part of the differentiation you mentioned.

        That plus the ability to provide customized solutions that stitch together data extraction and business logic, such as reconciliations for vendor payments or sales.

        I think these two things are what's keeping all the OCR-based companies going.

        My only advice would be to figure out more USPs before native models eat your lunch. Nanonets, for example, has its own native OCR model.

        Congrats on the launch.

  • aryan1silver 3 days ago

    looks really cool, congrats on the launch! are you guys using something similar to docling (https://github.com/docling-project/docling)?

    • rtaylorgarlock 3 days ago

      Has docling improved? I had a bit of a nightmare integrating a docling pipeline earlier this year. The docs said it was VLM-ready, which I spent lots of hours finding out was not true, only to then find a relevant GitHub issue that would've saved me a ton of hours :/ It's allegedly fixed now, but wow, that burned me big time.

      • ritvikpandey21 3 days ago

        our team has tested docling pretty extensively, works well for simpler text-heavy docs without complex layouts, but the moment you introduce tables or multi-column stuff it doesn't maintain layout well.

  • throw03172019 3 days ago

    Congrats on the launch! We have been using this for a new feature we are building in our SaaS app. Its results were better than Datalab's in our tests, especially in the handwriting category.

    • sidmanchkanti21 3 days ago

      Thanks for testing! Glad the results work well for you

    • ritvikpandey21 3 days ago

      thanks! appreciate the kind words

    • vikp 3 days ago

      Hi, I'm a founder of Datalab. I'm not trying to take away from the launch (congrats), just wanted to respond to the specific feedback.

      I'm glad you found a solution that worked for you, but this is pretty surprising to hear - our new model, chandra, saturates handwriting-heavy benchmarks like this one - https://www.datalab.to/blog/saturating-the-olmocr-benchmark - and our production models are more performant than the OSS ones.

      Did you test some time ago? We've made a bunch of updates in the last couple of months. Happy to issue some credits if you ever want to try again - vik@datalab.to.

      • throw03172019 3 days ago

        Thanks, Vik. Happy to try the model again. Is a BAA available?

        • vikp 3 days ago

          Yes, we can sign a BAA!

  • DIVx0 3 days ago

    Can't sign up with Gmail or "personal" email addresses? What if I want to evaluate but am not ready to be inundated with sales calls? My 'work' email domain is one that many vendors would love to see in their CRM. I always sign up with disposables first.

    I guess I should thank you for saving me the time? Plenty of others in this space.

  • sidcool 3 days ago

    Congrats on launching. Seems very interesting.

  • canadiantim 3 days ago

    Can you increase correctness by giving the model examples? Or key terms and nouns it should expect?

  • mikert89 3 days ago

    AI models will do all this natively

    • ritvikpandey21 3 days ago

      we disagree! we've found llms by themselves aren't enough and suffer from pretty big failure modes like hallucination and inferring text rather than pure transcription. we wrote a blog about this [1]. the right approach so far seems to be a hybrid workflow that uses very specific parts of the language model architecture.

      [1] https://www.runpulse.com/blog/why-llms-suck-at-ocr
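
      one cheap way to see the failure mode yourself: run a deterministic ocr pass and an llm pass over the same page, then diff the vocabularies. a toy check (not our pipeline, and flag_hallucinations is just an illustrative name):

          def flag_hallucinations(ocr_text: str, llm_text: str) -> set[str]:
              # grounding check: any word the llm "transcribed" that never
              # appears in the deterministic ocr pass is a hallucination suspect
              ocr_vocab = set(ocr_text.lower().split())
              return {w for w in llm_text.lower().split() if w not in ocr_vocab}

      on clean docs the set comes back empty; on degraded scans you'll often see the llm quietly substituting plausible words.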

      • mritchie712 3 days ago

        > Why LLMs Suck at OCR

        I paste screenshots into claude code every day and it's incredible. As in, I can't believe how good it is. I send a screenshot of console logs, a UI, and some HTML elements, and it just "gets it".

        So saying they "Suck" makes me not take your opinion seriously.

        • ritvikpandey21 3 days ago

          yeah models are definitely improving, but we've found even the latest ones still hallucinate and infer text rather than doing pure transcription. we carry out very rigorous benchmarks against all of the frontier models. we think the differentiation is in accuracy on truly messy docs (nested tables, degraded scans, handwriting) and being able to deploy on-prem/vpc for regulated industries.

        • mikert89 3 days ago

          they need to convince customers it's what they need

      • serjester 3 days ago

        This is a hand-wavy article that dismisses VLMs without acknowledging the real-world performance everyone is seeing. I think it’d be far more useful if you published an eval.

      • mikert89 3 days ago

        one or two more model releases, and raw documents passed to claude will beat whatever prompt voodoo you guys are cooking

        • holler 3 days ago

          Having worked in the space, I have real doubts about that. Right now Claude and other top models already do a decent job at, e.g., "generate OCR from this document". But as mentioned, there are serious failure modes: it's non-deterministic and especially cost-prohibitive at scale.
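
          Back-of-envelope on the cost point (every number here is an assumption, just to show the shape of the math):

              # toy cost model for vlm-based ocr at scale;
              # all figures are assumptions, not quoted prices
              pages = 10_000_000              # a mid-size enterprise backlog
              tokens_per_page = 1_500         # assumed image + output tokens
              usd_per_million_tokens = 3.00   # assumed frontier-model rate
              cost = pages * tokens_per_page / 1e6 * usd_per_million_tokens
              print(f"${cost:,.0f}")          # -> $45,000 per full pass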

    • throw03172019 3 days ago

      This is like saying AI models can generate images. But a model or platform hyper-focused on image generation will do better (for now).

  • asdev 3 days ago

    How is this different from Extend (also YC)?

    • ritvikpandey21 3 days ago

      we're more focused on the core extraction layer itself rather than workflow tooling. we train our own vision models for layout detection, ocr, and table parsing from scratch. the key thing for us is determinism and auditability, so outputs are reproducible run over run, which matters a lot for regulated enterprises.
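
      (by deterministic we mean bit-identical output for the same input, which is easy to verify. a toy illustration of the kind of check an auditor can run, not our actual tooling:)

          import hashlib

          def extraction_fingerprint(pdf_bytes: bytes, markdown_out: str) -> str:
              # hash input and output together; re-running the same doc
              # through a deterministic pipeline must reproduce this value
              h = hashlib.sha256()
              h.update(pdf_bytes)
              h.update(markdown_out.encode("utf-8"))
              return h.hexdigest()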

  • TZubiri a day ago

    How does it handle tables with invisible lines and inconsistent justification? (For example, one centered column and one right-justified column.)