Python 3.13.0 Is Released

(docs.python.org)

261 points | by Siecje 2 days ago

124 comments

  • pansa2 2 days ago

    Python versions 3.11, 3.12, and now 3.13 have contained far fewer additions to the language than earlier 3.x versions. Instead, the newest releases have focused on implementation improvements - and in 3.13, the new REPL, experimental JIT, and GIL-free options all sound great!

    The language itself is (more than) complex enough already - I hope this focus on implementation quality continues.

    • underdeserver 2 days ago

      For those interested in the REPL improvements:

      " Python now uses a new interactive shell by default, based on code from the PyPy project. When the user starts the REPL from an interactive terminal, the following new features are now supported:

      Multiline editing with history preservation.

      Direct support for REPL-specific commands like help, exit, and quit, without the need to call them as functions.

      Prompts and tracebacks with color enabled by default.

      Interactive help browsing using F1 with a separate command history.

      History browsing using F2 that skips output as well as the >>> and … prompts.

      “Paste mode” with F3 that makes pasting larger blocks of code easier (press F3 again to return to the regular prompt). "

      Sounds cool. Definitely need the history feature, for the few times I can't run IPython.

      • user070223 2 days ago

        Thanks, I just found out about .pythonstartup and set up writing history to a file and pretty-printing with pprint / rich.

        https://www.bitecode.dev/p/happiness-is-a-good-pythonstartup or search for a gist
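
        For reference, a minimal sketch of such a startup file, based on the history recipe in the stdlib readline docs (the ~/.python_history path is an arbitrary choice):

            # ~/.pythonstartup - point the PYTHONSTARTUP env var at this file
            import atexit
            import os
            import readline
            from pprint import pprint  # preload a pretty-printer for interactive use

            histfile = os.path.expanduser("~/.python_history")
            try:
                readline.read_history_file(histfile)
            except FileNotFoundError:
                pass

            # persist this session's history when the interpreter exits
            atexit.register(readline.write_history_file, histfile)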

      • cmdlineluser 2 days ago

        No vi editing mode :-(

        > The new REPL will not be implementing inputrc support, and consequently there won't be a vi editing mode.

        https://github.com/python/cpython/issues/118840#issuecomment...

        • EasyMark a day ago

          IPython is a much better REPL anyway; use that, it has a vi mode.

      • formerly_proven 2 days ago

        Presumably this also means readline (GPL) is no longer required to have any line editing beyond what a canonical-mode terminal does by itself. It seems like there is code to support libedit (BSD), but I've never managed to make Python's build system detect it.

        • zvr 2 days ago

          I have managed to build Python with libedit instead of readline, but it was a custom build.

          If your assumption is correct, then I'm eagerly waiting for the default Python executable in, for example, Ubuntu to be licensed under a non-copyleft license. Then one would be able to build proprietary-licensed executables via PyInstaller much more easily.

      • mixmastamyk 2 days ago

        I hope it doesn't break ptpython, and isn't worse than it. I've been using it for quite a while.

        • faizshah a day ago

          I don't think they added autocomplete support, so ptpython is still better.

    • the__alchemist 2 days ago

      I'd love to see a revamp of the import system. It is a continuous source of pain points when I write Python. Circular imports all over unless I structure my program explicitly with this in mind. Using path hacks with `sys` etc. to go up a directory, too.

      • int_19h 2 days ago

        The biggest problem with Python imports is that the resolution of non-relative module names always prioritizes local files, even when the import happens in stdlib. This means that, for any `foo` that is a module name in stdlib, having foo.py in your code can break arbitrary modules in stdlib. For example, this breaks:

           # bisect.py
           ...
        
           # main.py
           import random
        
        with:

           Traceback (most recent call last):
             File ".../main.py", line 1, in <module>
               import random
             File "/usr/lib/python3.12/random.py", line 62, in <module>
               from bisect import bisect as _bisect
           ImportError: cannot import name 'bisect' from 'bisect'
        
        This is very frustrating because Python stdlib is still very large and so many meaningful names are effectively reserved. People are aware of things like "sys" or "json", but e.g. did you know that "wave", "cmd", and "grp" are also standard modules?

        Worse yet, these errors are not consistent. You might be inadvertently reusing a stdlib module name without even realizing it, just because none of the stdlib (or third-party) modules that you import have it in their import graphs. Then you move to a new version of Python or of some of your dependencies, and suddenly it breaks because they have added an import somewhere.

        But even if you are careful about checking every single module name against the list of standard modules, a new Python version can still break you by introducing a new stdlib module that happens to clash with one of yours. For example, Python 3.9 added "graphlib", which is a fairly generic name.
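
        Since 3.10 there is at least a cheap way to audit for this: sys.stdlib_module_names. A rough sketch, assuming your sources live under src/:

            import sys
            from pathlib import Path

            # flag project files whose name shadows a stdlib module
            for path in Path("src").rglob("*.py"):
                if path.stem in sys.stdlib_module_names:
                    print(f"{path} shadows the stdlib module {path.stem!r}")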

        • riezebos a day ago

          I agree, it's unreasonable to expect devs to know the whole standard library. The VSCode extension Pylance does give a warning when this happens. I thought linters might also check this; the one I use doesn't, but maybe the issue[0] I just created will lead to it being implemented.

          [0]: https://github.com/astral-sh/ruff/issues/13676

      • baq 2 days ago

        For me 'from __future__ import no_cyclic_imports' would be good enough, if it's even possible

      • notatallshaw 2 days ago

        There was an attempt to make imports lazy: https://peps.python.org/pep-0690/

        It was ultimately rejected due to issues with how it would need to change the dict object.

        IMO all the rejection reasons could be overcome with a more focused approach and implementation, but I don't know if there is anyone wishing to give it another go.
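
        For what it's worth, an opt-in flavor of this already exists in the stdlib; a minimal sketch using the importlib.util.LazyLoader recipe from the docs:

            import importlib.util
            import sys

            def lazy_import(name):
                # resolve the module spec, but defer executing the module body
                spec = importlib.util.find_spec(name)
                loader = importlib.util.LazyLoader(spec.loader)
                spec.loader = loader
                module = importlib.util.module_from_spec(spec)
                sys.modules[name] = module
                loader.exec_module(module)
                return module

            json = lazy_import("json")   # nothing has been executed yet
            print(json.dumps({"a": 1}))  # the real import happens on first attribute access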

      • derr1 2 days ago

        Pathlib is your friend

    • pjmlp 2 days ago

      Having used Python on and off since version 1.6, I always like to point out that the language + standard library is quite complex, even more so when taking into account all the variations across versions.

      Looking forward to JIT maturing from now onwards.

    • csdreamer7 2 days ago

      > The language itself is (more than) complex enough already - I hope this focus on implementation quality continues.

      As do I.

      • selimnairb 2 days ago

        Agreed. I still haven't really started using the 'match' statement and structural pattern matching (which I would love to use) since I still have to support Python 3.8 and 3.9. I was getting tired of thinking, "gee, this new feature will be nice to use in 4 years, if I remember to…"
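
        For anyone who hasn't tried it yet, a minimal structural pattern matching sketch (3.10+; the function is illustrative):

            def describe(point):
                match point:
                    case (0, 0):
                        return "origin"
                    case (x, 0):
                        return f"on the x-axis at {x}"
                    case (x, y):
                        return f"at ({x}, {y})"
                    case _:
                        return "not a 2-tuple"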

        • wdroz 2 days ago

          After this month, Python 3.8 will be end-of-life, so maybe you can push internally to upgrade Python.

    • formerly_proven 2 days ago

      The last couple of years have also seen a stringent approach to deprecations: if something is marked as deprecated, it WILL be removed in a minor release sooner rather than later.

      • kstrauser 2 days ago

        Yep. They’ve primarily (entirely?) involved removing ancient libraries from stdlib, usually with links to maintained 3rd party libraries. People who can’t/won’t upgrade to newer Pythons, perhaps because their old system that uses those old modules can’t run a newer one, aren’t affected. People using newer Pythons can replace those modules.

        There may be a person in the world panicking that they need to be on Python 3.13 and also need to parse Amiga IFF files, but it seems unlikely.

        • Doxin a day ago

          I mean the stdlib is open source too, so you could always vendor deprecated stdlib modules. Most of them haven't changed in eons either so the lack of official support probably doesn't change much.

    • tightbookkeeper 2 days ago

      Removing the GIL only increases complexity.

      • bunderbunder 2 days ago

        But they've worked very hard at shielding most users from that complexity. And the end result - making multithreading a truly viable alternative to multiprocessing for typical use cases - will open up many opportunities for Python users to simplify their software designs.

        I suppose only time will tell if that effort succeeds. But the intent is promising.

        • dangom 2 days ago

          Do you have any references or examples that describe how this simplification would come about? Would love to learn more about it.

      • dagmx 2 days ago

        For the runtime, but not the language

      • pkkm 2 days ago

        It definitely does, but don't you think that it could be worth it if it makes multithreading usable for CPU-heavy tasks?

        • tightbookkeeper 2 days ago

          No. Python is orders of magnitude slower than even C# or Java. It’s doing hash table lookups per variable access. I would write a separate program to do the number crunching.

          Everyone must now pay the mental cost of multithreading for the chance that you might want to optimize something.

          • zbentley 2 days ago

            > It’s doing hash table lookups per variable access.

            That hasn't been true for many variable accesses for a very long time. LOAD_FAST, LOAD_CONST, and (sometimes) LOAD_DEREF provide references to variables via pointer offset + chasing, often with caches in front to reduce struct instantiations as well. No hashing is performed. Those access mechanisms account for the vast majority (in my experience; feel free to check by "dis"ing code yourself) of Python code that isn't using locals()/globals()/eval()/exec() tricks. The remaining small minority I've seen is doing weird rebinding/shadowing stuff with e.g. closures and prebound exception captures.

            https://github.com/python/cpython/blob/10094a533a947b72d01ed...

            https://github.com/python/cpython/blob/10094a533a947b72d01ed...

            So too for object field accesses; slotted classes significantly improve field lookup cost, though unlike LOAD_FAST users have to explicitly opt into slotting.

            Don't get me wrong, there are some pretty regrettably ordinary behaviors that Python makes much slower than they need to be (per-binding method refcounting comes to mind, though I hear that's going to be improved). But the old saw of "everything is a dict in python, even variable lookups use hashing!" has been incorrect for years.
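
            A quick way to check this yourself with the stdlib dis module (a minimal sketch):

                import dis

                def f(x):
                    y = x + 1
                    return y

                # prints LOAD_FAST/STORE_FAST opcodes, which index a fixed-size
                # array of locals by position - no name hashing involved
                dis.dis(f)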

            • tightbookkeeper 2 days ago

              Thanks for the correction and technical detail. I’m not saying this is bad, it’s just the nature of this kind of dynamic language. Productivity over performance.

          • pkkm 2 days ago

            > Everyone must now pay the mental cost of multithreading for the chance that you might want to optimize something.

            I'm assuming that by "everyone" you mean everyone who works on the Python implementation's C code? Because I don't see how that makes sense if you mean Python programmers in general. As far as I know, things will stay the same if your program is single-threaded or uses multiprocessing/asyncio. The changes only affect programs that start threads, in which case you need to take care of synchronization anyway.

          • int_19h 2 days ago

            Python doesn't do hash table lookups for local variable access. This only applies to globals and attributes of Python classes that don't use __slots__.

            The mental cost of multithreading is there regardless, because the GIL is usually at the wrong granularity for data consistency. That is, it ensures that e.g. adding or deleting a single element of a dict happens atomically, but more often than not you have a sequence of such operations that needs to be locked. In practice, in any scenario where your data is shared across threads, the only sane thing is to use explicit locks already.
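
            A sketch of the granularity point: each individual dict operation below is atomic under the GIL, but the read-modify-write sequence is not, so an explicit lock is needed either way:

                import threading

                counts = {}
                lock = threading.Lock()

                def add_word(word):
                    # without the lock, two threads could read the same old
                    # count and lose an increment - GIL or no GIL
                    with lock:
                        counts[word] = counts.get(word, 0) + 1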

          • EasyMark a day ago

            If you're in a primarily Python coding house, your argument won't mean anything when you bring up that you'll have to rewrite millions of lines of code in C# or Java; you might as well ask them to liquidate the company and start fresh.

          • kstrauser 2 days ago

            > No. Python is orders of magnitude slower than even C# or Java.

            That sounds like a fantastic reason to make it run faster on the multi-core CPUs we're commonly running it on today.

            • tightbookkeeper 2 days ago

              The cost to write and debug multithreaded code is high and not limited to the area where you use it. And for all that, you get a 2-8x speed-up.

              So if you care about performance, why are you writing that part in Python?

              > multi-core CPUs we're commonly running it on today.

              If you spawn processes to do work you get multi core for free. Think of the whole system, not just your program.

              • kstrauser 2 days ago

                Pretend for a second that I'm in a setting where:

                1. The whole system is dedicated to running my one program,

                2. I want to use multithreading to share large amounts of state between workers because that's appropriate to my specific use case, and

                3. A 2-8x speedup without having to rewrite parts of the code in another language would be fan-freaking-tastic.

                In other words, I know what I'm doing, I've been doing this since the 90s, and I can imagine this improvement unlocking a whole lot of use cases that've been previously unviable.

                • tightbookkeeper 2 days ago

                  You can imagine that situation. But all Python code is now impacted to support that case.

                  Having a “python only” ecosystem makes about as much sense as a “bash only” ecosystem. Your tech stack includes much more.

                  > In other words, I know what I'm doing, I've been doing this since the 90s

                  ditto. So that’s not relevant.

                  • kstrauser 2 days ago

                    Sounds like a lot of speculation on your end; we don't have much evidence about how this will affect anything, because until just now it hasn't been possible to get that information.

                    > ditto. So that’s not relevant.

                    Then I'm genuinely surprised you've never once stumbled across one of the many, many use cases where multithreaded CPU-intensive code would be a nice, obvious solution to a problem. You seem to think these are hypothetical and my experience has been that these are very real.

                    • tightbookkeeper 2 days ago

                      > Sounds like a lot of speculation on your end

                      This issue is discussed extensively in "The Art of Unix Programming", if we want to play the authority-and-experience game.

                      > multithreaded CPU-intensive code would be a nice, obvious solution to a problem

                      Processes are well supported in Python. But if you're maxing out your CPU cores with the right algorithm, then Python was probably the wrong tool.

                      > my experience has been that these are very real.

                      When you're used to working one way, it may seem impossible to frame the problem differently. Just to remind you, this is a NEW feature in Python. JavaScript, Perl, and Bash also do not support multithreading, for similar reasons.

                      One school of design says if you can think of a use case, add that feature. Another tries to maintain invariants of a system.

      • EasyMark a day ago

        "Make things as simple as possible, but no simpler." I, for one, am glad they'll be letting us use modern CPUs much more easily, instead of the language being designed around 1998 CPUs.

  • wdroz 2 days ago

    With the 3.13 TypeIs[0] and the 3.10 TypeGuard[1], we can achieve some of Rust's power (such as the 'if let' pattern) without runtime guarantees.

    This is a win for the DX, but this is not yet widely used. For example, "TypeGuard[" appears in only 8k Python files on GitHub.[2] (A minimal sketch follows the links below.)

    [0] -- https://docs.python.org/3.13/library/typing.html#typing.Type...

    [1] -- https://docs.python.org/3.13/library/typing.html#typing.Type...

    [2] -- https://github.com/search?q=%22TypeGuard%5B%22+path%3A*.py&t...
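
    A minimal sketch of the pattern (names are illustrative): a TypeIs function lets type checkers narrow the argument's type in both branches, loosely like Rust's `if let`:

        from typing import TypeIs

        def is_str_list(val: list[object]) -> TypeIs[list[str]]:
            return all(isinstance(x, str) for x in val)

        def handle(val: list[object]) -> None:
            if is_str_list(val):
                # type checkers narrow val to list[str] in this branch
                print(", ".join(val))
            else:
                # ...and exclude list[str] in this one
                print(len(val))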

    • silviogutierrez a day ago

      Big fan of typing improvements in Python. Any chance you can elaborate on the "if let" pattern in Rust and how it would look in Python now? Not sure I follow how it translates.

    • Buttons840 2 days ago

      What type checker do you recommend?

      • Ey7NFZ3P0nzAe 2 days ago

        Beartype is incredible. It is so fast that I put it as a decorator on all functions in all my projects.

        It's night and day compared to typeguard.

        Also, the dev is... completely out of this world.
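
        For anyone unfamiliar, usage is just a decorator; a minimal sketch (the function is illustrative):

            from beartype import beartype

            @beartype
            def greet(name: str) -> str:
                return f"Hello, {name}"

            greet("world")  # fine
            greet(42)       # raises a beartype type-checking exception at call time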

        • HenriTEL 19 hours ago

          I don't get the point of a runtime type checker. It adds a lot of noise with those decorators everywhere, and you need to call each section of the code to get full coverage, meaning 100% test coverage. At that point, why not just use Rust - or am I missing something?

          • Buttons840 17 hours ago

            It looks like you call a function near the beginning of your Python program / application that does all the type checking at startup time. IDK for sure, I haven't used the library.

            Someone using Python doesn't "just use Rust", there are very clear pros and cons and people already using Python are doing so for a reason. It is sometimes helpful to have type checks in Python though.

      • wdroz 2 days ago

        disclaimer: I don't work on big codebases.

        Pylance with pyright[0] while developing (with strict mode) and mypy[1] with pre-commit and CI.

        Previously, I had to rely on pyright in pre-commit and CI for a while because mypy didn’t support PEP 695 until its 1.11 release in July.

        [0] -- https://github.com/microsoft/pyright

        [1] -- https://github.com/python/mypy

  • boarush 2 days ago

    Python versions from 3.10 on have had a very annoying bug with SSLContext (something related only to glibc) where memory leaks when opening new connections to new hosts eventually cause any service (dockerized in my case) to crash due to OOM. I can still see that the issue has not been resolved in this release, which basically makes deploying any production-grade service very difficult.

  • rkwz 2 days ago

    > Free-threaded execution allows for full utilization of the available processing power by running threads in parallel on available CPU cores. While not all software will benefit from this automatically, programs designed with threading in mind will run faster on multi-core hardware.

    Would be nice to see performance improvements for libraries like FastAPI, NetworkX etc in future.

    • v3ss0n 2 days ago

      They are not threaded at all.

      • cr125rider 2 days ago

        Correct. Async stuff in Python is based on libuv-like event loops, similar to how Node.js and others operate - not full threads.
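
        A minimal illustration of that cooperative, single-threaded model:

            import asyncio

            async def fetch(i):
                await asyncio.sleep(1)  # yields control to the event loop
                return i

            async def main():
                # all three tasks interleave on one thread; total time is ~1s
                print(await asyncio.gather(*(fetch(i) for i in range(3))))

            asyncio.run(main())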

        • v3ss0n a day ago

          Yeah, they have their own asyncio Python runner.

  • CJefferson 2 days ago

    Good to get advance notice, if I read all the way down, that they will silently and completely change the behavior of multiprocessing in 3.14 (only on Unix/Linux, in case other people wonder what's going on), which is going to break a bunch of programs I work with.

    I really like using Python, but I can't keep using it when they just keep breaking things like this. Most people don't read all the release notes.

    • icegreentea2 2 days ago

      Not defending their specific course of action here, but you should probably try to wade into the linked discussion (https://github.com/python/cpython/issues/84559). Looks like the push to disable warnings (in 3.13) is mostly coming from one guy.

      • CJefferson 2 days ago

        I think I should have a dig.

        While it's not perfect, I know a few other people who "set up lots of data structures, including in libraries, then make use of the fact multiprocessing uses fork to duplicate them". While fork always has sharp edges, it's also long been clearly documented that that's the behavior on Linux.

        • int_19h 2 days ago

          I'm pretty sure that significantly more people were burned by fork being the default with no actual benefit to their code, whether because of the deadlocks etc. that it triggers in multithreaded non-fork-aware code, or because their code wouldn't work correctly on other platforms. Keeping it as an option that one can explicitly enable, for those few cases where it's actually useful and with full understanding of the consequences, is surely the better choice for something as high-level as Python.

          • CJefferson a day ago

            I agree that fork was an awful default.

            However, changing the default silently just means people's code is going to change behaviour between versions, or silently break if someone with an older version runs it. At this point, it's probably better to just require people to give an explicit choice (they can even name one of the choices 'default' or something, to make life easy for people who don't really care).

        • nyrikki 2 days ago

          > posix_spawn() now accepts None for the env argument, which makes the newly spawned process use the current process environment

          That is the thing about fork(), spawn(), and even system() being essentially wrappers around clone() in glibc and musl.

          You can duplicate the behavior of fork() without making the default painful for everyone else.

          In musl, system() calls posix_spawn(), which calls clone().

          All that changes is replacing fork(), which is nothing more than a legacy convenience alias with real issues and footguns under multiple threads.

    • nyrikki 2 days ago

      You are complaining about spawn()?

      Both fork() and spawn() are just wrappers around clone() in most libc implementations anyway.

      spawn() was introduced to POSIX in the last century to address some of the problems with fork(), especially related to multithreading, so I am curious how your code is so dependent on fork(), yet not on multithreading.

      • CJefferson a day ago

        My code isn't dependent on multi-threading at all.

        It uses fork in Python multiprocessing, because many packages can't be "pickled" (the standard way of copying data structures between processes), so instead my code looks like this (a sketch follows the list):

        * Set up big complicated data structures.

        * Use fork to make a bunch of copies of my running program, and all my data structures.

        * Use multiprocessing to make all those Python programs talk to each other and share work, thereby using all my CPU cores.
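
        A sketch of that pattern with the start method made explicit, so it keeps working when the default changes (BIG and the sizes are placeholders):

            import multiprocessing as mp

            # stand-in for an expensive, hard-to-pickle structure built at startup
            BIG = {i: str(i) for i in range(1_000_000)}

            def worker(i):
                # under the "fork" start method, the child inherits BIG via
                # copy-on-write; nothing needs to be pickled
                return len(BIG) + i

            if __name__ == "__main__":
                ctx = mp.get_context("fork")  # explicit, rather than relying on the default
                with ctx.Pool(4) as pool:
                    print(pool.map(worker, range(4)))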

        • nyrikki 19 hours ago

          'Threading' is an overloaded term. And while I didn't know, I was wondering whether, at the library level, the fact that posix_spawn() pauses the parent while fork() doesn't was what you were leveraging.

          The Python multiprocessing module has been problematic for a while, as the platform abstractions are leaky and, to be honest, the POSIX version of spawn() was poorly implemented and mostly copied the limits of Windows.

          I am sure that some of the recent deadlocks are due to changes like this pull request, which itself calls out how risky this is.

          https://github.com/python/cpython/pull/114279

          Personally knowing the pain of fork() in the way you are using it, I have moved on.

          But I would strongly encourage you to look into how clone() and the CLONE_VM and CLONE_VFORK options interact, document your use case and file an actionable issue against the multiprocessing module.

          Go moved away from fork in 1.9, which may explain the issues with it better than the previously linked Python discussion.

          But looking at the git blame, all the 'fixes' have been about people trading known problems and focusing on the happy path.

          My reply was intended to get someone to address that tech debt and move forward with an intentionally designed refactoring.

          As I just focus on modern Linux, I avoid the internal submodule and either call clone() in a custom module or use Python as glue to languages that have better concurrency.

          • nyrikki 17 hours ago

            I found where subprocess moved to posix_spawn(), which may help.

            https://bugs.python.org/issue35537

            My guess is that threads in CPython are an end goal. While setting the execution context will get you past this release, fork() has to be removed if the core interpreter is threaded.

            The delta between threads and fork/exec has narrowed.

            While I don't know if that is even an option for you, I am not seeing any real credible use cases documented to ensure that model is supported.

            Note: I fully admit this is down to my own limits of imagination. I am 100% sure there are valid reasons to use fork() styles.

            Someone just needs to document them and convince someone to refactor the module.

            But as it is not compatible with threads and has a ton of undefined behavior and security issues, fork() will be removed unless credible use cases are documented that people can weigh when considering the tradeoffs.

    • mixmastamyk 2 days ago

      Unfortunately, they learned the wrong lesson from the 2->3 transition. Break things constantly instead of all at once. :p

      Still, this one doesn’t seem too bad. Add method=FORK now and forget about it.

    • almostgotcaught 2 days ago

      > I really like using Python, but I can’t keep using it when they just keep breaking things like this.

      So much perl clutching. Just curious, since I guess you've made up your mind, what's your plan to migrate away? Or are you hoping maintainers see your comment and reconsider the road-map?

      • CJefferson a day ago

        Rust. I've already rewritten several of my decent-sized research programs in Rust, and plan to finish converting what's left soon.

        Rust isn't perfect (no language is), but they do seem to try much harder not to break backwards compatibility.

      • fmajid 2 days ago

        Not the person you are responding to, but my Python 3 migration plan was to move to Go for all new projects.

        • almostgotcaught 2 days ago

          I'm sure your departure from the community will be the tectonic shift that'll finally get the PSF to change their course.

  • ajay-d 2 days ago

    Still in prerelease (RC3), no? At least at time of writing

  • causal 2 days ago

    Any rule of thumb when it comes to adopting Python releases? Is it usually best to wait for the first patch version before using in production?

    • oebs 2 days ago

      We follow this rule (about two dozen services with ~100k LOC of Python in total): by default, use the release one version below the latest.

      I.e. we currently run 3.11 and will now schedule work to upgrade to 3.12, which is expected to be more or less trivial for most services.

      The rationale is that some of the (direct and transitive) dependencies will take a while to be compatible with the latest release. And waiting roughly a year is both fast enough to not get too much behind, and slow enough to expect that most dependencies have caught up with the latest release.

      • jnwatson 2 days ago

        Yeah, some deprecated C API stuff just got removed, so it might take me, a package maintainer, a while to catch up.

    • BiteCode_dev 2 days ago

      I follow this: https://www.bitecode.dev/p/installing-python-the-bare-minimu...

      Which is mostly latest_major - 1, adjusted to production constraints, obviously. And play with latest for fun.

      I stopped using the latest even for non-serious projects; the ecosystem really needs time to catch up.

    • user070223 2 days ago

      When should you upgrade to Python 3.13? https://pythonspeed.com/articles/upgrade-python-3.13/

      Python libraries support https://pyreadiness.org/3.13/

    • instig007 2 days ago

      Have a robust CI and tests, and deploy as early as you can.

      • kstrauser 2 days ago

        Yup. At my last gig, upgrading to a new version meant setting the Docker tag to the new one and running `make test`. If that passed, we were 99% certain it was safe for prod. The other 1% was covered by running in pre-prod for a couple days.

    • zenonu 2 days ago

      I'm constrained by libraries with guaranteed version compatibility. Unless you're operating in an NIH universe, I bet you are as well.

    • zahlman 2 days ago

      >Any rule of thumb when it comes to adopting Python releases?

      No, because it varies widely depending on your use case and your motivations.

      >Is it usually best to wait for the first patch version before using in production?

      This makes it sound like you're primarily worried about a situation where you host an application and you're worried about Python itself breaking. On the one hand, historically Python has been pretty good about this sort of thing. The bugfixes in patches are usually quite minor, throughout the life cycle of a minor version (despite how many of them there are these days - a lot of that is just because of how big the standard library is). 3.13 has already been through alpha, beta and multiple RCs - they know what they're doing by now. The much greater concern is your dependencies - they aren't likely to have tested on pre-release versions of 3.13, and if they have any non-Python components then either you or they will have to rebuild everything and pray for no major hiccups. And, of course, that applies transitively.

      On the other hand, unless you're on 3.8 (dropping out of support), you might not have any good reason to update at all yet. The new no-GIL stuff seems a lot more exciting for new development (since anyone for whom the GIL caused a bottleneck before will have already developed an acceptable workaround), and I haven't heard a lot about other performance improvements - certainly they haven't been talked up as much as they were for 3.11 and 3.12. There are a lot of quality-of-implementation improvements this time around, but (at least from what I've paid attention to so far) they seem more oriented towards onboarding newer programmers.

      And again, it will be completely different if that isn't your situation. Hobbyists writing new code will have a completely different set of considerations; so will people who primarily maintain mature libraries (for whom "using in production" is someone else's problem); etc.

    • dagw a day ago

      Rule 1 is wait until there are built wheels for that version of python for all the libraries that you need. In most cases that can take a month or two, depending on exactly what libraries you use and how obscure they are.

    • ilc 2 days ago

      n-1 is the rule I follow. So if asked today I'd look at 3.12.

      • rantingdemon a day ago

        So simple, yet so effective.

        Old Systems Admins like me have been following this simple rule for decades. It's the easiest way at scale.

    • HelloNurse 2 days ago

      Wait until they are actually released rather than RC3. What's the point of posting prematurely?

    • yn6n767m76m 2 days ago

      Scream into a pillow, because even PHP manages to have fewer breaking releases. Python is a dumpster fire that no one wants to admit to or have an honest conversation about. If Python 2 is still around, my advice is: don't upgrade unless you have a clear reason for the new features.

  • stevesimmons 2 days ago

    And Azure Functions still doesn't support Python 3.12, released more than a year ago!

    • benrutter 2 days ago

      It was similar last year when 3.12 came out and 3.11 still wasn't supported. I'm really curious what makes Azure Functions so slow to upgrade available runtimes, or if they just figure demand for the latest Python version isn't there.

    • cr125rider 2 days ago

      We have switched to exclusively using Docker images in Lambda on AWS, because their runtime team constantly breaks things and is behind on a bunch of releases.

    • cozzyd 2 days ago

      I wonder if FreeCAD supports 3.12 yet. Really annoying that FreeCAD got dropped from the latest Fedora due to breaking Python changes...

  • mg 2 days ago

    When I'm in a docker container using the Python 3 version that comes with Debian - is there an easy way to swap it out for this version so I can test how my software behaves under 3.13?

    • ebb_earl_co 2 days ago

      This[0] is the Docker Python using Debian Bookworm, so as soon as 3.13.0 (not the release candidate I've linked to) is released, there will be an image.

      Otherwise, there's always the excellent `pyenv` to use, including this person's docker-pyenv project [1]

      [0] https://hub.docker.com/layers/library/python/3.13.0rc3-slim-...

      [1] https://github.com/tzenderman/docker-pyenv?tab=readme-ov-fil...

      • mg 2 days ago

        Hmm.. I think this is a misunderstanding.

        What I meant is: While I am already inside a container running Debian, can I ...

            1: ./myscript.py
            2: some_magic_command
            3: ./myscript.py
        
        So 1 runs it under 3.11 (which came with Debian) and 2 runs it under 3.13.

        I don't need to preserve 3.11. some_magic_command can wreak havoc in the container as much as it wants. As soon as I exit it, it will be gone anyhow.

        In a sense, the question is not related to Docker at all. I just mentioned that I would do it inside a container to emphasize that I don't need to preserve anything.

        • maleldil 2 days ago

          You can use pyenv to create multiple virtual environments with different Python versions, so you'd run your script with (eg) venv311/bin/python and venv313/bin/python

        • kstrauser 2 days ago

          The magic command in other settings would be pyenv. It lets you have as many different Python versions installed as you wish.

          Pro tip: outside Docker, don’t ever use the OS’s own Python if you can avoid it.

          • maleldil 2 days ago

            > don’t ever use the OS’s own Python if you can avoid it.

            This includes Homebrew's Python installation, which will update by itself and break things.

            • kstrauser 2 days ago

              Yep. I only have Homebrew's Python installed because some other things in Homebrew depend on it. I use pyenv+virtualenv exclusively when developing my own code.

              (Technically, I use uv now, but to the same ends.)

          • cuu508 2 days ago

            > Pro tip: outside Docker, don’t ever use the OS’s own Python if you can avoid it.

            Why not?

            • kstrauser 2 days ago

              It's unlikely that the OS's version of Python, and the Python packages available through the OS, are going to be the ones you'd install of your own volition. And on your workstation, it's likely you'll have multiple projects with different requirements.

              You almost always want to develop in a virtualenv so you can install the exact versions of things you need without conflicting with the ones the OS itself requires. If you're abstracting out the site-packages directory anyway, why not take one more step and abstract out Python, too? Things like pyenv and uv make that trivially easy.

              For instance, this creates a new project using Python 3.13.

                $ uv init -p python3.13 foo
                $ cd foo
                $ uv sync
                $ .venv/bin/python --version                                                                         
                Python 3.13.0rc2
              
              I did not have Python 3.13 installed before I ran those commands. Now I do. It's so trivially easy to have per-project versions that this is my default way of using Python.

              You can get 95% of the same functionality by installing pyenv and using it to install the various versions you might want. It's also an excellent tool. Python's own built-in venv module (https://docs.python.org/3/library/venv.html) makes it easy to create virtualenvs anytime you want to use them. I like using uv to combine that and more into one single tool, but that's just my preference. There are many tools that support this workflow and I highly recommend you find one you like and use it. (But not pipenv. Don't pick that one.)

              • mixmastamyk 2 days ago

                This is the conventional wisdom these days, and a real thing, but unless you are admin challenged, running your local scripts with the system Python is fine. Been doing it two decades plus now.

                Yes, make a venv for each work project.

                • odie5533 a day ago

                  If you're just writing tiny scripts for yourself, sure use the system Python.

                  If you're doing work on a large Python app for production software, then using system Python isn't going to cut it.

        • orf 2 days ago

          Ignore the talk below about pyenv, it’s not even slightly suitable for this task.

          You want precompiled Python binaries. Use “uv” for this, rather than hacking it together with pyenv.

          • kstrauser 2 days ago

            Use either one. `pyenv install 3.x` is slower than `uv python install 3.x`, but that's not the most common operation I use either of those tools for. Uv is also comparatively brand new, and while I like and use it, I'm sure plenty of shops aren't racing to switch to it.

            If you already have pyenv, use it. If you don't have pyenv or uv, install uv and use that. Either one is a huge upgrade over using the default Python from your OS.

            • orf 2 days ago

              This makes sense for a desktop environment, but for a disposable testing container there is no way that building and compiling each version of Python like that is a sensible use of time/resources.

              • kstrauser a day ago

                For that kind of thing I'd always either used the tagged Python images in Docker Hub or put the build step in an early layer that didn't have to re-run each time.

                One other advantage is that you know the provenance of the python executable when you build it yourself. Uv downloads a prebuilt exe from https://gregoryszorc.com/docs/python-build-standalone/main/ who is a very reputable, trusted source, but it's not the official version from python.org. If you have very strict security requirements, that may be something to consider. If you use an OS other than Linux/Mac/Windows on x86/ARM, you'll have to build your own version. If you want to use readline instead of libedit, you'll have to build your own version.

                I am personally fine with those limitations. All of the OSes I regularly use are covered. I'm satisfied with the indygreg source security. The libedit version works fine for me. I like that I can have a new Python version a couple of seconds after asking uv for it. There are still plenty of reasons why you might want to use pyenv to build it yourself.

    • silveraxe93 2 days ago

      Install `uv` then run `uv run --python 3.13 my_script.py`

  • lumpa 2 days ago
  • BiteCode_dev 2 days ago

    I still see Python 3.12.7 as the latest one, with 3.13 delayed because of the GC perf regression. The link, for me, points to the 3.13 RC.

    Am I seeing a cached version while you see 3.13? 'Cause I can't see it on the homepage download link either.

    • fmajid 2 days ago

      No, they jumped the gun.

  • SubiculumCode 2 days ago

    What I've been surprised about is the number of Python packages that require specific Python versions (e.g., works on 3.10, but not 3.11). Package versioning is already touchy enough without the language itself causing it in minor upgrades.

    And will Python 3.14 be named pi-thon? I will see myself out.

    • ericfrederich 18 hours ago

      Today someone's pipeline broke because they were using python:3 from Dockerhub and got an unexpected upgrade ;-)

      Specifically, pendulum hasn't released a wheel yet for 3.13 so it tried to build from source but it uses Rust and the Python docker image obviously doesn't have Rust installed.

    • mixmastamyk 2 days ago

      Also, Py2 ended approaching Euler’s number, 2.7.18.

  • gjvc 2 days ago

    looking forward to the GraalVM version

  • bun_terminator 2 days ago

    I appreciate the effort to leave out the "And now for something completely different" section (on https://www.python.org/downloads/release/python-3130/) after the previous drama.
