Python 3.14 has great new features like:
- Free-threaded Python is now officially supported
- Template string literals (t-strings) for custom string processing (sketched below)
- Syntax highlighting in PyREPL and more!
Great work, congratulations on this release!
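To give a feel for the t-strings item above, here's a minimal sketch, assuming the PEP 750 API as described (Template and Interpolation in string.templatelib, with iteration yielding the literal string parts and interpolations in order); the render_html helper and the escaping policy are illustration only, not part of the feature itself:

```python
from string.templatelib import Interpolation, Template
import html

def render_html(template: Template) -> str:
    # Unlike an f-string, a t"" literal hands over the parts before they
    # are joined, so the consumer decides how interpolated values render.
    parts = []
    for item in template:                    # literal strings and
        if isinstance(item, Interpolation):  # interpolations, interleaved
            parts.append(html.escape(str(item.value)))
        else:
            parts.append(item)
    return "".join(parts)

user_input = "<script>alert('pwned')</script>"
print(render_html(t"Hello, {user_input}!"))  # the tag comes out escaped
```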
+ experimental JIT compiler
This could be the beginning of something very promising.
Problem is, it's too late. Most performant code I've seen and written isn't using numba; it's using numpy to vectorize. And sadly, there's a lot of wasted iteration when doing that just to be faster than scalar code. My point being, that code won't speed up at all without a rewrite.
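A minimal sketch of the kind of trade-off being described (hypothetical names, not from any particular codebase): the vectorized version is much faster per element, but it always scans the whole array, even when the answer sits near the front.

```python
import numpy as np

def first_over_loop(xs, threshold):
    # Scalar loop over a 1-D numpy array: returns as soon as a match is
    # found, but pays Python-level interpreter overhead on every element.
    for i, x in enumerate(xs):
        if x > threshold:
            return i
    return -1

def first_over_vectorized(xs, threshold):
    # Vectorized: fast per element, but always evaluates the comparison
    # for the whole array and allocates a temporary boolean mask, even if
    # the answer is at index 0.
    hits = np.nonzero(xs > threshold)[0]
    return int(hits[0]) if hits.size else -1
```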
Introducing a JIT opens up a lot of opportunities beyond numerical numpy/numba vectorisation. There are endless hot loops, data shuffling, garbage-collection pressure, and monomorphisation opportunities in real-world Python that would benefit a lot, much like V8 has done for JS.
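A hypothetical example of the kind of non-numerical hot path meant here; nothing in it is vectorizable, but a specializing JIT can still win once the types at each site stabilize:

```python
def normalize_records(records):
    # Hot loop over plain dicts: attribute/dict lookups, string methods,
    # and float arithmetic repeated on every iteration. Because every call
    # site sees the same types each time around (monomorphic), a
    # specializing JIT can cut the per-iteration interpreter overhead,
    # much like V8 does for the equivalent JS.
    out = []
    for rec in records:
        name = rec.get("name", "").strip().lower()
        score = float(rec.get("score", 0)) * 1.5
        out.append({"name": name, "score": score})
    return out
```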
I guess my point is that truly performant Python code, at least for number crunching, uses vectorized numpy functions instead of loops, and the overhead on type checking for those is fairly minimal. I have a PR open on a compute-heavy Python program where I tried using numba to JIT it. Timing was within margins between numpy and numba (even though the numba code could exit the loop early, which was why I was trying it), and with numba I'd be adding a dependency and taking on more work to maintain the algorithms myself instead of relying on numpy.
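A hedged sketch of the shape of comparison being described; this is not the PR's code, just a stand-in where the JIT-compiled loop can bail out early while the numpy version always builds the full pairwise result:

```python
import numpy as np
from numba import njit

@njit
def any_pair_close_numba(xs, tol):
    # JIT-compiled nested loop: returns on the first hit it finds.
    n = xs.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(xs[i] - xs[j]) < tol:
                return True
    return False

def any_pair_close_numpy(xs, tol):
    # Vectorized equivalent: materializes the full n x n difference
    # matrix, so there's no early exit, but each step runs in C and
    # there's no extra dependency to maintain.
    xs = np.asarray(xs, dtype=np.float64)
    diffs = np.abs(xs[:, None] - xs[None, :])
    np.fill_diagonal(diffs, np.inf)
    return bool((diffs < tol).any())
```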
Thinking of the JS code I've seen, it's mostly written in JS, so making JS faster makes JS faster. With Python, the fast code is written outside Python. It's too late by like 20 years; the world won't rewrite itself into native Python modules.
> and the overhead on type checking for those is fairly minimal
Well, yeah; the underlying C code assumes the type that was described to it by the wrapper (via, generally, the .dtype of an array), so it's O(1).
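A small illustration of that point, assuming ordinary NumPy usage: the dtype is attached to the array as a whole, so the typed C inner loop is selected once per call rather than once per element.

```python
import numpy as np

a = np.arange(1_000, dtype=np.float64)
b = np.ones_like(a)

# One dtype lookup describes every element, so np.add resolves its typed
# C inner loop a single time for the whole call...
c = np.add(a, b)
print(a.dtype, c.dtype)   # float64 float64

# ...whereas a pure-Python loop re-checks the operand types on every
# single addition it performs.
c_slow = [x + y for x, y in zip(a, b)]
```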
But I do wonder what the NumPy experience has been like for PyPy users.
https://www.python.org/downloads/release/python-3140/
Shouldn't this version be called Pi-thon?
I'll walk myself out.
The release has a pretty cute logo of the snakes eating a pie: https://www.python.org/downloads/release/python-3140/
I've been hearing this off and on for pretty much the entire development cycle and I'm kinda sick of it now, frankly.