Python 3.14.0 is now available

(blog.python.org)

20 points | by runningmike 12 hours ago

10 comments

  • runningmike 12 hours ago

    Python 3.14 has great new features like:

    - Free-threaded Python is now officially supported

    - Template string literals (t-strings) for custom string processing

    - Syntax highlighting in PyREPL, and more!

    Great work, congratulations on this release!

    • dlojudice 12 hours ago

      + experimental JIT compiler

      this could be the beginning of something very promising

      • Neywiny 12 hours ago

        Problem is, it's too late. Most performant code I've seen and written isn't using numba; it's using numpy to vectorize. And sadly, there's a lot of wasted iteration when doing that just to be faster than scalar code. My point being, that code won't speed up at all without a rewrite.
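To illustrate the scalar-vs-vectorized gap the comment is getting at, here is a minimal sketch (the helper names are made up for illustration, not from any real codebase):

```python
import numpy as np

def scalar_sum_of_squares(xs):
    # Plain Python loop: one interpreted iteration per element
    total = 0.0
    for x in xs:
        total += x * x
    return total

def vectorized_sum_of_squares(arr):
    # numpy pushes the loop into compiled C; no per-element Python overhead
    return float(np.dot(arr, arr))

data = np.arange(1_000, dtype=np.float64)
# Both give the same answer; the vectorized version is what "fast" Python
# code already looks like, so a JIT has little left to speed up here.
assert scalar_sum_of_squares(data) == vectorized_sum_of_squares(data)
```

The vectorized form spends almost no time in the interpreter, which is why a bytecode JIT barely moves the needle for code written this way.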

        • tkcranny 7 hours ago

          Introducing JIT features has a lot of opportunity beyond numerical numpy/numba vectorisation. There are endless hot loops, data shuffling, garbage collection, and monomorphisation in real-world Python code that would benefit a lot, much like V8 has done for JS.

          • Neywiny 7 hours ago

            I guess my point is that truly performant Python code, at least for number crunching, uses vectorized numpy functions instead of loops, and the overhead of type checking for those is fairly minimal. I have a PR in on a compute-heavy Python program where I tried using numba to JIT. Timings were within margin of error between numpy and numba (even though the numba code could exit the loop early, which was why I was trying it), except with numba I'd be adding a dependency, and it's more work to maintain the algorithms myself instead of relying on numpy.
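The early-exit trade-off can be sketched like this (hypothetical `first_over_*` helpers for illustration, not the actual PR code):

```python
import numpy as np

def first_over_loop(xs, threshold):
    # Python loop: returns as soon as a match is found (early exit)
    for i, x in enumerate(xs):
        if x > threshold:
            return i
    return -1

def first_over_numpy(arr, threshold):
    # numpy evaluates the comparison over the whole array first;
    # argmax then locates the first True. No early exit is possible.
    mask = arr > threshold
    return int(mask.argmax()) if mask.any() else -1

data = np.arange(1_000_000, dtype=np.float64)
assert first_over_loop(data, 10.0) == first_over_numpy(data, 10.0) == 11
```

Even when the match is near the front, the numpy version still scans all million elements; that is the "wasted iteration" a JIT-compiled loop can avoid.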

            Of the JS code I've seen, most of it is written in JS itself, so making JS faster makes that code faster. With Python, the fast code is written outside Python. It's too late by like 20 years; the world won't rewrite itself into native Python modules.

            • zahlman 4 hours ago

              > and the overhead on type checking for those is fairly minimal

              Well, yeah; the underlying C code assumes the type that was described to it by the wrapper (via, generally, the .dtype of an array), so it's O(1).
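A quick sketch of why that check is O(1): the dtype descriptor lives once on the array object, not on each element.

```python
import numpy as np

arr = np.arange(5, dtype=np.float64)
# One dtype descriptor for the whole array; numpy's compiled routines
# dispatch on it once, rather than inspecting each element's Python type.
print(arr.dtype)           # float64
print(arr.dtype.itemsize)  # 8 (bytes per element)
```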

              But I do wonder what the experience of Numpy has been like for the PyPy users.

  • zahlman 10 hours ago
  • ivanche 10 hours ago

    Shouldn't this version be called Pi-thon?

    I'll walk myself out.