How we made Python's packaging library 3x faster

(iscinumpy.dev)

86 points | by rbanffy 5 days ago

15 comments

  • djoldman 2 days ago

    > _canonicalize_table = str.maketrans(
    >     "ABCDEFGHIJKLMNOPQRSTUVWXYZ_.",
    >     "abcdefghijklmnopqrstuvwxyz--",
    > )
    >
    > ...
    >
    > value = name.translate(_canonicalize_table)
    >
    > while "--" in value:
    >     value = value.replace("--", "-")

    translate can be wildly fast compared to some commonly used regexes or replacements.
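For a rough sense of the technique being quoted — a sketch, not packaging's exact code; the regex variant is my own baseline for comparison:

```python
import re

# Same idea as the quoted snippet: one table maps uppercase to
# lowercase and "_"/"." to "-", so normalization is a single
# C-level translate() pass plus a cleanup loop.
_canonicalize_table = str.maketrans(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ_.",
    "abcdefghijklmnopqrstuvwxyz--",
)

def canon_translate(name):
    value = name.translate(_canonicalize_table)
    while "--" in value:
        value = value.replace("--", "-")
    return value

# A regex-based equivalent for comparison (my strawman, not packaging's code).
_sep_re = re.compile(r"[-_.]+")

def canon_regex(name):
    return _sep_re.sub("-", name).lower()

print(canon_translate("Some_Package..Name"))  # some-package-name
print(canon_regex("Some_Package..Name"))      # some-package-name
```

Timing with `timeit` on typical package names should show the translate version ahead, since it avoids the regex engine entirely for the common case.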

    • est 2 days ago

      I am curious, why not .lower().translate(str.maketrans('_.', '--'))?

      • fwip 2 days ago

        .lower() has to handle Unicode, right? I imagine the giant tables slow it down a bit.

        • mort96 a day ago

          It's so annoying how many languages lack basic "ASCII lowercase" and "ASCII uppercase" functions. All the Unicode logic is not only unnecessary but actively unwanted when you e.g. want to change the case of a hex-encoded string or normalize some machine-generated ASCII-only output.

          • tracker1 a day ago

            I'll say, C#'s .ToLowerInvariant, etc. are pretty nice when you need them.

          • est 19 hours ago

            > It's so annoying how so many languages lack a basic "ASCII lowercase" and "ASCII uppercase" function

            How about b''.lower() ?

            • mort96 4 hours ago

              What if I have a string and not a byte string?
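One way to get ASCII-only lowercasing from a str is to round-trip through bytes — a sketch, assuming the input is known to be ASCII:

```python
# ASCII-only lowercasing via bytes: encode, lower, decode.
# Raises UnicodeEncodeError on non-ASCII input instead of silently
# applying Unicode case rules.
def ascii_lower(s):
    return s.encode("ascii").lower().decode("ascii")

print(ascii_lower("DEADBEEF"))  # deadbeef

# By contrast, full Unicode lower() can even change string length,
# e.g. Turkish dotted capital I maps to two code points:
print(len("İ".lower()))  # 2
```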

    • teaearlgraycold 2 days ago

      I would expect, however, that a regex replacement would be much faster than your N² while loop.

      • dgrunwald a day ago

        That loop isn't N²: if there are long sequences of dashes, every iteration cuts the lengths of those sequences in half. So the loop has at most lg(N) iterations, for an O(N·lg(N)) total runtime.
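The halving behaviour is easy to check empirically — a small sketch of the same loop, counting passes:

```python
def replace_passes(value):
    # Same loop as the quoted snippet, but counting iterations.
    # str.replace("--", "-") rewrites non-overlapping pairs left to
    # right, so each pass roughly halves every run of dashes.
    n = 0
    while "--" in value:
        value = value.replace("--", "-")
        n += 1
    return n

# A run of 2**k dashes collapses in k passes, not 2**k:
print(replace_passes("-" * 16))    # 4
print(replace_passes("-" * 1024))  # 10
```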

      • notpushkin 2 days ago

        It would be, if it was a common situation.

        This loop handles cases like `eggtools._spam` → `eggtools-spam`, which is probably rare (I guess it’s for packages that export namespaced modules, and you probably don’t want to export _private modules; sorry in advance for non-pythonic terminology). Having more than two separator characters in a row is even more unusual.

  • zahlman 5 days ago

  • ltbarcly3 a day ago

    Misleading title: they didn't make the packaging library 3x faster, they made reading one attribute of a package 3x faster. The whole library is still very, very slow compared to alternatives.

  • imtringued a day ago

    Unrelated, but I personally am not satisfied with the performance of pandas' XLSX export. As you can see here [0], the code does really strange things: it takes cell.style and throws it into json.dumps() to generate a dictionary key so that it can cache the XlsxStyler.convert(cell.style) result. Except the vast majority of cells have no styling whatsoever, so json.dumps is producing the string "null", which is then used to look up None. The low-hanging fruit is jaw-dropping: you can easily speed up the code 10%+ by adding a simple check "if cell.style is not None or fmt is not None:" and switching from json.dumps(cell.style) to str(cell.style). If I wanted an easy weekend project that positively impacts many people, this is what I'd work on.

    [0] https://github.com/pandas-dev/pandas/blob/main/pandas/io/exc...
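A minimal sketch of the caching pattern being described, with the suggested fast path — all names here (convert_style, style_cache, cached_convert) are illustrative stand-ins, not pandas' actual internals:

```python
style_cache = {}
convert_calls = 0

def convert_style(style):
    # Stand-in for the expensive conversion step being cached.
    global convert_calls
    convert_calls += 1
    return {"converted": style}

def cached_convert(style, fmt=None):
    # Suggested fast path: unstyled cells (the common case) skip
    # key generation and the cache entirely.
    if style is None and fmt is None:
        return None
    key = str(style)  # cheaper cache key than json.dumps(style)
    if key not in style_cache:
        style_cache[key] = convert_style(style)
    return style_cache[key]

cached_convert(None)    # fast path: no key built, no conversion
cached_convert("bold")  # converted once
cached_convert("bold")  # served from cache
print(convert_calls)    # 1
```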

    • rthz a day ago

      Have you tried opening an issue about it? Maybe someone would be happy to work on it. I concur that Excel parsing is rather slow.

  • YouAreWRONGtoo 2 days ago

    [dead]