A hobby audio and text analysis application I've written, with no specific concern for low level performance other than algorithmically, runs 4x as fast in .net10 vs .net8. Pretty much every optimization discussed here applies to that app. Great work, kudos to the dotnet team. C# is, imo, the best cross platform GC language. I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
Very mixed feelings about this: there's a strong case for the decisions made here, but it also moves .NET further away from WasmGC, which makes using it on the client a complete non-starter for whole categories of web apps.
It's a missed opportunity, and I can't help but feel that if the .NET team had gotten more involved in the proposals early on, C# in the browser could have been much more viable.
Those changes affect the .NET runtime, which is designed for real computers. They don't preclude the existence of a special runtime designed for Wasm with WasmGC support.
The .NET team appears to be aware of WasmGC [0], and they have provided their remarks when WasmGC was being designed [1].
[0] https://github.com/dotnet/runtime/issues/94420
[1] https://github.com/WebAssembly/gc/issues/77
.NET was already incompatible with WASM GC from the start [1]. The changes in .NET 10 are nothing in comparison to those. AFAIK WASM GC was designed with only JavaScript in mind, so that's what everyone is stuck with.
[1] https://github.com/dotnet/runtime/issues/94420
WebAssembly taking off in the browser is wishful thinking. There are a couple of unicorns like Figma, and that is it.
For performance, WebGPU compute is the much better option, and not everyone hates JavaScript.
On the server, meanwhile, it is basically a bunch of companies trying to replicate application servers. Been there, done that.
Interesting. I mostly work on the JVM, and I'm always impressed by how much more advanced the .NET runtime is feature-wise.
Won't this potentially cause stack overflows in programs that ran fine in older versions though?
I don't think the runtime is "much more advanced"; the JVM has had most of these optimizations for years.
One limitation of the stack is that it needs to be contiguous virtual addresses, so it was often limited when devices just didn't have the virtual address space to "waste" on a large stack for every thread in a process.
But 64 bits of virtual address space is large enough that you can keep the stacks far enough apart that even for pretty extreme numbers of threads you'll run out of physical memory before they start clashing. So you can always just allocate more physical pages to the stack as needed, similar to the heap.
I don't know if the .net runtime actually does this, though.
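For illustration, the reserve-then-commit pattern looks roughly like this on Windows (a minimal sketch using standard Win32 via P/Invoke; this shows the general technique, not necessarily what the runtime does):

    using System;
    using System.Runtime.InteropServices;

    static class ReserveCommitDemo
    {
        const uint MEM_RESERVE = 0x2000, MEM_COMMIT = 0x1000;
        const uint PAGE_NOACCESS = 0x01, PAGE_READWRITE = 0x04;

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr VirtualAlloc(IntPtr addr, UIntPtr size, uint type, uint protect);

        static void Main()
        {
            // Reserve 1 GB of contiguous virtual address space up front.
            // No physical memory is consumed yet.
            IntPtr region = VirtualAlloc(IntPtr.Zero, (UIntPtr)(1UL << 30),
                                         MEM_RESERVE, PAGE_NOACCESS);

            // Commit physical pages lazily, e.g. 64 KB at a time, as the
            // "stack" actually grows into the reserved range.
            VirtualAlloc(region, (UIntPtr)(64u * 1024), MEM_COMMIT, PAGE_READWRITE);
        }
    }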
> Won't this potentially cause stack overflows in programs that ran fine in older versions though?
That's certainly a possibility, and one that's come up before, even when things were migrated from .NET Framework to .NET Core. Usually, though, it's a sign that something was awry in the first place. Thankfully, the default stack sizes can be overridden with config or environment variables.
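A single thread can also be given its own stack size in code via the Thread constructor; for example (the 64 MB figure and the method name are placeholders):

    using System.Threading;

    // Give one deeply recursive worker a larger stack instead of
    // raising the process-wide default.
    var worker = new Thread(() => RunDeepRecursion(),
                            maxStackSize: 64 * 1024 * 1024);
    worker.Start();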
I am surprised that they didn't already do a lot of optimizations informed by escape analysis, given that they have had value types from the beginning. HotSpot is currently hampered by having only primitive and reference types, which Project Valhalla is going to rectify.
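For the curious, this is roughly the shape of allocation that escape analysis targets (a hypothetical example; whether the JIT actually elides any given allocation is its call):

    class Point { public int X, Y; }

    static int ManhattanLength()
    {
        // 'p' never escapes: it isn't returned, stored to a field, or
        // passed to another method. Escape analysis can prove this and
        // let the JIT place it on the stack instead of the GC heap.
        var p = new Point { X = 3, Y = 4 };
        return p.X + p.Y;
    }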
On the topic of DATAS, there was a discussion here recently: https://news.ycombinator.com/item?id=45358527
DATAS has been great for us. Literally no effort: upgrade the app to .NET 8 and flip it on. Huge reduction in memory.
TieredCompilation, on the other hand, caused a bunch of esoteric errors.
I think DATAS also has more knobs to tune than the old GC. I plan to set the Throughput Cost Percentage (TCP) via System.GC.DTargetTCP to some low value so that it has little impact on latency.
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...
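Something like this in runtimeconfig.json, where DynamicAdaptationMode opts into DATAS on runtimes where it isn't already the default; the DTargetTCP value below is just an illustrative guess to tune per workload:

    {
      "runtimeOptions": {
        "configProperties": {
          "System.GC.DynamicAdaptationMode": 1,
          "System.GC.DTargetTCP": 1
        }
      }
    }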
Are you now allowed to benchmark the .NET runtime/GC?
Edit: Looks like you are allowed to benchmark the runtime now. I was able to locate an ancient EULA which forbade this (see section 3.4): https://download.microsoft.com/documents/useterms/visual%20s...
> You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
Yes; you probably mixed it up with SQL Server.
> Publishing SQL Server benchmarks without prior written approval from Microsoft is generally prohibited by the standard licensing agreements.
Yes.
Why wouldn't you be?
...Were you not before?
IIRC the EULA forbids it. This is why you don't see .NET vs. Java GC comparisons, for example.
I seem to vaguely recall such a thing from way back in the early days, but the only copy[1] of the .NET Framework EULA I could readily find says it's OK as long as you publish all the details.
[1]: https://docs.oracle.com/en/industries/food-beverage/micros-w...
I can't find mention of anything resembling this. The .NET runtime is under the MIT license.
https://download.microsoft.com/documents/useterms/visual%20s...
That's because you aren't looking at 20-year-old EULAs:
>3.4 Benchmark Testing. The Software may contain the Microsoft .NET Framework. You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
This person is likely not familiar with the history of .NET Framework and .NET Core, because they decided a long time ago that they were never going to use it.
Yeah, you got me there. I have moved on to Linux development since then. Haven't kept up with Microsoft developer tools.
.NET Core on Linux works great, btw.
In recent versions (i.e., since .NET 5 in 2020), ".NET Core" is just called ".NET".
The cross-platform version is mainstream, and this isn't new any more.
.NET on Linux works fine for services. Our .NET services are deployed to Linux hosts, and it's completely unremarkable.
Wow, I didn't know that. Can you provide some links?
What are you talking about?
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Use a managed language, it will handle the memory stuff for you, you don't have to care.
But also read these 400 articles to understand our GC. And if you are lucky, we will let you change 3 settings.
You can provide your own GC implementation if you really want to:
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...
https://github.com/dotnet/runtime/blob/main/src/coreclr/gc/g...
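For instance, recent runtimes ship clrgc.dll (the older segments-based GC) as a standalone GC, and a custom build implementing the same interface should load the same way (the app name below is a placeholder):

    # Point the runtime at a standalone GC instead of the built-in one.
    export DOTNET_GCName=clrgc.dll
    dotnet MyApp.dll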
It works just fine out of the box. The articles/manuals are only there if you want to really understand how it works and get the most out of it. What's the issue with that?
The Dr. Dobb's and C/C++ Users Journal archives are full of articles and ads for special memory allocators, because the ones in the standard library for C and C++ also don't work in many cases; they are only good enough for general-purpose allocation.
You need these settings when you drive your application hard into circumstances where manual memory management arguably starts making sense again: humongous heaps; lots of big, unwieldy objects; or tight latency (or tail-latency) requirements. But unless you're using something like Rust or Swift, the price of manual memory management is the need to investigate segmentation faults. I'd rather spend developer time on feature development and benchmarking instead.