Show HN: FastLanes-based integer compression in Zig

(github.com)

12 points | by ozgrakkurt 4 days ago

7 comments

  • o11c 20 hours ago

    This is missing a lot of context.

    What integer patterns does it do well on, and what patterns does it do poorly on?

    How many strategies does it support? It only mentions delta, which is not compression on its own. Huffman, RLE, variable-length encoding ...

    Does it really just "give up" at C/1024 compression if your input is a gigabyte of zeros?

    • ozgrakkurt 19 hours ago

      Working on improving and clarifying this!

      It only does delta and bitpacking now.

      It should do fairly well for a bunch of zeroes because it does bitpacking: an all-zero block packs at a bit width of 0, leaving only the per-block header (see the sketch at the end of this comment).

      I’m working on adding RLE/FFOR, and also on clarifying the strategy and making the format flexible to change internally without breaking the API.
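
      A minimal sketch of why an all-zero block is nearly free under bit-packing (illustrative Zig only; packedBitWidth is a made-up name, not the library's API):

        const std = @import("std");

        // The width needed to bit-pack a block is the position of the
        // highest set bit across all of its values. For an all-zero block
        // that width is 0, so the packed payload is empty and only the
        // per-block header is left.
        fn packedBitWidth(block: []const u64) u7 {
            var acc: u64 = 0;
            for (block) |v| acc |= v;
            return 64 - @clz(acc);
        }

        test "an all-zero block needs zero payload bits" {
            const zeros = [_]u64{0} ** 1024;
            try std.testing.expectEqual(@as(u7, 0), packedBitWidth(&zeros));
        }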

      • o11c 19 hours ago

        For the "all zeros" case, my concern is that you said you're forcing a reset every 1024 words. This implies that if you have N kilowords of zero data, then it takes N times as much space as a single kiloword of data.

        Good compression algorithms effectively use the same storage for highly-redundant data (not limited to all zeros or even all the same single word, though all zeros can sometimes be a bit smaller), whether it's 1 kiloword or 1 gigaword (there might be a couple bytes difference since they need to specify a longer variable-size integer).

        And this does not require giving up on random access, if you care about that - you can separately include an "extent table", which works for large regular repeats (you'll have to detect those anyway for other compression strategies, which normally give up on random access), or use strides (for small repeats only), or ... A rough sketch of the extent-table idea follows this comment.

        For reference, BTRFS uses 128KiB chunks for its compression to support mmap and seeking. Of course, the caller should make sure to keep decompressed chunks in cache.
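
        A rough sketch of the extent-table idea in Zig (hypothetical layout, nothing from the library): record where each compressed extent starts, both as an element index and as a byte offset, then binary-search it on read.

          const Extent = struct {
              first_index: u64, // index of the first element in this extent
              byte_offset: u64, // where the extent's compressed bytes start
          };

          // Find the extent covering element `i`: the last entry whose
          // first_index is <= i. Assumes extents are sorted by first_index
          // and that extents[0].first_index == 0.
          fn findExtent(extents: []const Extent, i: u64) usize {
              var lo: usize = 0;
              var hi: usize = extents.len;
              while (hi - lo > 1) {
                  const mid = lo + (hi - lo) / 2;
                  if (extents[mid].first_index <= i) {
                      lo = mid;
                  } else {
                      hi = mid;
                  }
              }
              return lo;
          }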

        • ozgrakkurt 15 hours ago

          Makes sense. For RLE and dictionary encodings I probably won’t use the 1024 block size to split the input.

          The 1024 block size is just there so that delta encoding and bit packing can be vectorized (see the sketch at the end of this comment).

          I am using this library for compressing individual pages of columns in a file format, so the page size will be determined there.

          I’m not using FastLanes for in-memory compressed arrays, which is what it was originally intended for. But I’ll export the FastLanes API in the next version too, so someone can implement that themselves if needed.
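
          For reference, a toy version of the vectorized delta step (this only shows the idea that a fixed 1024-element block lets the compiler emit SIMD; the real FastLanes layout interleaves lanes differently):

            const Lane = @Vector(16, u64);

            // deltas[i] = block[i] -% block[i - 16], 16 lanes at a time.
            // Subtracting the element one full vector behind (rather than
            // the immediate predecessor) keeps every step independent,
            // which is what makes the loop vectorizable. Wrapping
            // subtraction so decreasing sequences don't overflow.
            fn deltaEncode(block: *const [1024]u64, out: *[1024]u64) void {
                var prev: Lane = @splat(0);
                var i: usize = 0;
                while (i < 1024) : (i += 16) {
                    const cur: Lane = block[i..][0..16].*;
                    out[i..][0..16].* = cur -% prev;
                    prev = cur;
                }
            }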

  • gus_massa 3 days ago

    Sorry for asking, but there are too many weird projects in this field. Can compressed_size be bigger than input.len?

    • ozgrakkurt 3 days ago

      Yes, because you have to include some metadata that describes how to decompress the compressed data. This is the case in all compression algorithms I know of.

      As an example, LZ4 and zstd also have a compressBound() function that calculates this worst case (illustration below).
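
      For a concrete number: LZ4_compressBound(n) works out to n + n/255 + 16 bytes. A hypothetical bound for a delta + bit-packed format like this one (a sketch, not the library's actual function) could be the raw input plus one small header per 1024-element block:

        // Assumed format: even a fully incompressible block is stored at
        // full 64-bit width, plus a small per-block header (the 2-byte
        // header size here is made up for illustration).
        const header_bytes: usize = 2;

        fn compressBound(input_words: usize) usize {
            const blocks = (input_words + 1023) / 1024; // ceil division
            return input_words * @sizeOf(u64) + blocks * header_bytes;
        }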

      • wiml 18 hours ago

        I'm pretty sure it's the case in all lossless compression algorithms, period: if you make some inputs smaller, you have to make other inputs larger. There's a pigeonhole-style argument for it (counted out below). The trick, of course, is to make the inputs you expect to actually encounter smaller, ideally while enlarging other inputs as little as possible.
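
        The counting version of that argument, for inputs of exactly n bits:

          possible inputs:               2^n
          outputs shorter than n bits:   2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1

        So at least one n-bit input has to map to an output of n bits or more.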