24 comments

  • cybice a day ago

    Hi! In your benchmark, do you use a fixed number of iterations to stop the test, or do you apply a statistical criterion, such as the Student's t-test, to determine when to stop?

    • evnwashere a day ago

      I didn't want it to be complex, so it uses a simple time budget plus a minimum number of samples; both (and more) can be configured with the lower-level API.

      In practice I haven't found any JS function that gets faster after mitata's time budget runs out (excluding CPU clock speed increasing because of continuous workload).

      Another problem is that garbage collection can cause long pauses, which produce big jumps in some runs and make the loop keep searching for the best result longer than necessary.
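      The stopping rule described above can be sketched roughly like this (a toy illustration with made-up option names, not mitata's actual internals): keep collecting samples until the time budget is spent and a minimum sample count has been reached.

      ```javascript
      // Toy sketch of a "time budget + minimum samples" stopping criterion.
      // Option names (budgetMs, minSamples) are invented for illustration.
      function sample(fn, { budgetMs = 500, minSamples = 128 } = {}) {
        const samples = [];
        const start = performance.now();
        // Continue until BOTH the budget is exhausted AND we have enough samples.
        while (
          samples.length < minSamples ||
          performance.now() - start < budgetMs
        ) {
          const t0 = performance.now();
          fn();
          samples.push(performance.now() - t0); // one duration per run
        }
        return samples;
      }
      ```

      A long GC pause in one iteration then simply shows up as an outlier sample, which is what can make a "keep the best run" search drag on longer than necessary.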

  • FractalHQ 19 hours ago

    Definitely going to try this out!!

    I’ve been using the `vitest bench` command; being able to slap a `.bench.ts` file next to a module and go to town is convenient: https://vitest.dev/guide/features.html#benchmarking

    • evnwashere 14 hours ago

      vitest is nice, but it's completely unsuited for micro-benchmarks, as it ends up OOM-crashing after just two optimized-out benchmarks.
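      An "optimized-out benchmark" here means the JIT deleted the measured work because its result was provably unused. A common countermeasure is to sink the result into a global "black hole"; the names below (sink, doNotOptimize) are assumed for illustration and are not vitest's or mitata's API.

      ```javascript
      let sink; // global the optimizer can't prove is never read
      function doNotOptimize(v) {
        sink = v;
      }

      // Without the sink call, a JIT is free to remove the loop entirely,
      // and the benchmark ends up timing an empty function.
      function benchBody() {
        let acc = 0;
        for (let i = 0; i < 1000; i++) acc += Math.sqrt(i);
        doNotOptimize(acc);
        return acc;
      }
      ```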

        • utkut 7 hours ago

        Yeah, I’ve hit those OOM issues with vitest before too. Mitata’s time budget + sample approach sounds like a solid way to keep things simple while avoiding those long GC pauses. Excited to give it a try on my own benchmarks!

  • izakfr a day ago

    This is awesome! I've been working on optimizing a JavaScript library recently and am feeling the pain of performance testing. I'll check this out.

  • kookamamie a day ago

    Is it by accident that "mitata", the project name, means "to measure" in Finnish?

    • evnwashere a day ago

      it was hand picked for exactly that :)

  • moltar a day ago

    Any plans for web-compatible output?

    I maintain this repo, and we hand-roll the stats page, but if we could get that for free it'd be so great!

    https://github.com/moltar/typescript-runtime-type-benchmarks

    • evnwashere a day ago

      I have been thinking of reusing/creating something like https://perf.rust-lang.org/ that lets you pick and compare specific hashes/commits, with all the data coming from the JSON output.

  • enahs-sf 17 hours ago

    The lack of "Miata is always the answer" comments in this thread and the readme is troubling.

  • steve_adams_86 a day ago

    Hey, I wrote about this once! I use it a ton. Thanks for your work. I can’t wait to dig into 1.0.

  • wonger_ a day ago

    This is for "headless" JavaScript outside the browser, right?

    • tbeseda a day ago

      Never heard JS called "headless". Not sure I like it.

      edit: all JS is "headless". Almost all languages are headless. _Software_ can be headless or have a GUI, but languages are naturally headless.

      • Waterluvian a day ago

        Headless browsers. I guess this is a very closely related concept.

      • blovescoffee a day ago

        There's a lot of server-side JS. Mostly plumbing code, but there's certainly "headless" JS.

        • tbeseda a day ago

          I'm very aware of JS run on servers, and I knew that's what OP meant. I'm saying I'm not sure I like the usage. Maybe it's a generational dev-vocabulary thing... I prefer "browser" or "client" JS vs. "server" or "backend" JS.

    • evnwashere a day ago

      It works anywhere JavaScript works, so you can easily run it in the browser too. Though the idea of making a jsbench-like website with mitata's accuracy (+ dedicated runners) keeps bugging me.

  • pavi2410 a day ago

    Wow! What timing! I started building Speedrun yesterday to accommodate my daily needs:

    https://toolkit.pavi2410.me/tools/speedrun

    https://github.com/pavi2410/toolkit/issues/8

  • golergka a day ago

    Wow, I was just looking into how to benchmark a streaming JSON parser that I'm working on! I'm creating it specifically for performance-intensive situations with JSON strings up to gigabytes in size, and I thought I'd have to implement about half of the features you mention there, like parametrisation and automatic GC after every test.

    • dumbo-octopus a day ago

      When you say streaming JSON parser, do you mean that it outputs a live "observable" object as it is streaming, or that it just doesn't keep the entire source data in memory? I've done some work on the former for displaying rich LLM outputs as they are delivered - it's a surprisingly underexplored area from what I've seen.

      • golergka 19 hours ago

        It means that, prior to parsing, the parser is given the exact path (or paths, or wildcards) it must retrieve, and then it scans the string in one forward pass with the minimum possible allocations. It's for cases where you, for some reason, have to process an enormous amount of serialised objects as strings, need to get just a few small things out of them occasionally, and have to do it in JS.

        Since it processes input in batches, you can also use it in cases where you don't even need to load the whole input into memory, if you choose to.
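        A toy version of that idea (hypothetical code, not golergka's actual parser): scan forward through the raw string for one key and decode only its string value, never materializing the full object.

        ```javascript
        // Hypothetical sketch: pull one string value out of a JSON document
        // in a single forward pass, without JSON.parse-ing the whole thing.
        // Toy limitations: ignores nesting, so the key may also be matched
        // inside a nested object or inside a string value.
        function extractString(json, key) {
          const needle = JSON.stringify(key); // e.g. '"id"'
          let i = json.indexOf(needle);
          while (i !== -1) {
            let j = i + needle.length;
            while (j < json.length && /\s/.test(json[j])) j++; // skip whitespace
            if (json[j] === ':') {
              j++;
              while (j < json.length && /\s/.test(json[j])) j++;
              if (json[j] !== '"') return undefined; // value is not a string
              let k = j + 1;
              while (k < json.length) {
                if (json[k] === '\\') k += 2; // skip escape pair
                else if (json[k] === '"') {
                  // Allocate and decode only this one small slice.
                  return JSON.parse(json.slice(j, k + 1));
                } else k++;
              }
              return undefined; // unterminated string
            }
            i = json.indexOf(needle, i + 1); // matched elsewhere; keep looking
          }
          return undefined;
        }
        ```

        A real implementation would track nesting depth to honor an exact path, and would accept input in chunks so a multi-gigabyte document never has to sit in memory at once.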