Zig certainly has a lot of interesting features and good ideas, but I honestly don't see the point of starting a major project with it. With alternatives like Rust and Swift, memory safety is simply table stakes these days.
Yes, I know Zig does a lot of things to help the programmer avoid mistakes. But the last time I looked, it was still possible to make mistakes.
The only time I would pick something like C, C++, or Rust is if I am planning to build a multi-million-line, performance-sensitive project. In which case, I want total memory safety. For most "good enough" use cases, garbage collectors work fine and I wouldn't bother with a systems programming language at all.
That leaves me a little bit confused about the value proposition of Zig. I suppose it's a "better C". But like I said, for serious industry projects starting in 2025, memory safety is just table stakes these days.
This isn't meant to be a criticism of Zig or all of the hard work put into the language. I'm all for interesting projects. And certainly there are a lot of interesting ideas in Zig. I'm just not going to use them until they're present in a memory safe language.
I am actually a bit surprised by the popularity of Zig on this website, given the strong dislike towards Go. From my perspective, the two languages are very similar in that they both decided to "unsolve already solved problems". Meaning, we know how to guarantee memory safety; multiple programming languages have implemented it in a variety of ways. Why would I use a new language that takes a problem a language like Rust, Java, or Swift has already solved for me, and takes away a feature (memory safety) that I already have?

> memory safety is simply table stakes

Why?
And also, this is black and white thinking, implying that "swift and rust" are completely memory "safe" and zig is completely "unsafe". It's a spectrum.
The real underlying comparison statement here is far more subjective. It's along the lines of: "I find it easier to write solid code in rust than in zig". This is a more accurate and fair way to state the semantics of what you are saying.
Saying things like "rust is memory safe. Zig is not memory safe" is reductionist and too absolutist.
If decades of experience show us anything, it is that discipline and skill are not enough to achieve memory safety.

Developers simply aren’t as good at dealing with these problems as they think they are. And even if a few infallible individuals were truly flawless, their co-workers just aren’t.
I'm not convinced that on average Zig is any less safe, or produces software that is any less stable, than Rust.
Zig embraces reality in its design. Allocations exist, hardware exists, and our entire modern infrastructure is built on C. When you start to work directly with those things, there are going to be safety issues. That's just how it is. Zig tries to give you as many tools as possible to make good decisions at every turn, and to help you catch mistakes - like its testing allocator detecting memory leaks.
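For anyone who hasn't seen that in practice, it looks roughly like this (a minimal sketch; std.testing.allocator is the leak-detecting allocator, std API as of Zig ~0.14):

    const std = @import("std");

    test "forgetting to free is caught" {
        const allocator = std.testing.allocator;

        const buf = try allocator.alloc(u8, 64);
        // Without this defer, `zig test` reports the leak and fails the test:
        defer allocator.free(buf);

        @memset(buf, 0);
        try std.testing.expectEqual(@as(u8, 0), buf[0]);
    }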
Rust puts you in a box, where the outside world doesn't exist. As long as you play by its rules everything will be fine. But it eventually has to deal with this stuff, so it has unsafe. I suspect if Rust programmers went digging through all their dependencies, especially when they are working on low level stuff, they would be surprised by how much of it actually exists.
Zig tried to be more safe on average and make developers aware of pitfalls. Rust tried to be 100% safe where it can, and then not safe at all where it can't. Obviously Rust's approach has worked for it, but I don't think that invalidates Zig's. Especially when you start to get into projects where a lot of unsafe operations are needed.
Zig also has an advantage in that it simplifies memory management through its use of allocators. If you read Richard Feldman's write-up on the Roc compiler's rewrite in Zig, he talks about how he realized their memory allocation patterns were simple enough in Zig that they just didn't need the complexity of Rust.
To be clear, Rust encourages the development of safe abstractions around unsafe code, so that the concern goes from proportion of unsafe to encapsulation of unsafe. Whether you trust some library author to encapsulate their unsafe is, I think, reducible to whether you trust a library author to write a good library. Unsafe is not all-or-nothing. Thus, as with all languages, good general programming practices come before language features.
That's kind of my point. Because it's isolated and abstracted I wouldn't be surprised if most Rust devs have no idea how much unsafe code is actually out there.
Rust does not want you to think about memory management. You play by its rules and let it worry about allocations/deallocation. Frankly in that regard Rust has more in common with GC languages than it does Zig or C. Zig chooses to give the developer full control and provides tools to make writing correct/safe code easier.
Although not a comprehensive report, people tend to count the source lines of unsafe in a Rust codebase as a metric. Moreover, reputable libraries worth using typically take care to reduce unsafe, and where it is used, encapsulate it well. I don't think you have a substantive point on the matter. Unsafe certainly can be abused, but it's not a bogeyman that people scarcely catch glimpses of. Unsafe doesn't demote the safety of Rust to that of C, or something like that.
Your comments on Rust's philosophy towards memory management are off base. Rust is unlike GC languages, even Swift, in that it makes allocations and deallocations explicit. For example, I know that one approach to implementing async functions in trait objects was rejected because it would've made implicit heap allocations. Granted, Rust is far behind on reified and custom allocators. Rust has functionality to avoid the default alloc crate, which is the part of libstd that does heap allocations, and a library ecosystem for alternate data structures. Rust doesn't immediately give you total access, but it's only a few steps away. Could it be easier to work with? Absolutely. The same goes for unsafe.
Thank you for the thoughtful reply, but I think you missed my point.
I'm not saying Rust isn't substantially safer than C. When people like Greg Kroah-Hartman say that Rust by its design eliminates a lot of the memory bugs he's been fighting for 40 years, I believe him.
My point is that people tend to talk about it as an all or nothing proposition. That Rust is memory safe. Period. And any language that can't put that on the tin is immediately disqualified, that somehow their approach to solving similar problems is invalid.
By the very nature of the system no language that wants to interact with the hardware can be entirely memory safe, not even Rust. It has chosen a specific solution, and a pretty damn interesting one as far as that goes, but it still has to deal with unsafe. And the more directly your program has to deal with the hardware the more unsafe code it's going to have to deal with.
Zig has also chosen an approach to deal with the problem. Theirs is one that gives far more direct control to the programmer. Every single memory allocation is explicit, in that you have to directly interact with an allocator and you have to free that memory. It's not hidden behind constructors/destructors and controlled via RAII patterns (side note: there are managed data structures that you give an allocator via an init and free via a deinit, but you still have to pass in the allocator, and those are being largely replaced).
If you are only dealing with problems where you can interact with Rust's abstractions, I'm sure it is safer than Zig, but I don't think it's as big a difference as people think. And when you start digging down into systems-level programming, where the amount of unsafe code you have to write grows, Rust's advantage starts to diminish significantly.
To my point about Rust not wanting you to think about memory, take Vec as an example. You and I know that's doing heap allocations, but I guarantee you a not insignificant number of Rust devs just don't even think about it. And they certainly don't think about all the allocations/deallocations that have to happen to grow and shrink it dynamically.
Compare that to Zig's ArrayList. When you create it you have to explicitly hand it an allocator you created. It could be a general-purpose allocator, but it could just as easily be an arena allocator, or even an allocator backed by a buffer you pre-allocated specifically for it. As the programmer you have to directly deal with the fact that the thing allocates and deallocates.
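For readers who haven't touched Zig, a rough sketch of what that looks like (pre-0.15 "managed" ArrayList API; as noted above, this form is being phased out in favour of passing the allocator per call):

    const std = @import("std");

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit();
        const allocator = gpa.allocator();
        // Could just as easily be an arena or a fixed-buffer allocator:
        // var arena = std.heap.ArenaAllocator.init(allocator);

        var list = std.ArrayList(u32).init(allocator); // you hand it the allocator
        defer list.deinit();                           // and you free it

        try list.append(42); // growth can fail: error.OutOfMemory is in your face
        std.debug.print("len = {}\n", .{list.items.len});
    }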
That's what I mean when I say Rust has more in common with GC languages in some ways. When I type "new" in Java I know I'm heap-allocating a new object; Java just doesn't want me to think about that, because the GC will deal with it. When you create a Vec in Rust, it doesn't want you to think about the memory, it just wants you to follow its borrow checker rules. Which is very different from thinking about allocation/deallocation patterns.
>> memory safety is simply table stakes

> Why?

Because it's a stepping stone to other kinds of safety. Memory safety isn't the be-all and end-all, but it gets us to where we can focus on other important things.

And it turns out that in this particular case we don't even have to pay much for it in terms of performance.
> The real underlying comparison statement here is far more subjective. It's along the lines of: "I find it easier to write solid code in rust than in zig".
Agreed! But also how about "We can get pretty close to memory safety with the tools we provide! Mostly at runtime! If you opt-in!" ~~ signed, people (Zig compiler itself, Bun, Ghostty, etc) who ship binaries built with -Doptimize=ReleaseFast
> Why?

Memory bugs are hard to debug, potentially catastrophic (particularly where security is concerned), and in large systems software they tend to constitute the majority of issues.[1]

It is true that Rust is not absolutely memory safe and that Zig provides more safety features than C, but directionally it is correct that Rust (or languages with a similar design philosophy) eliminates billion-dollar mistakes. And you can take that literally rather than metaphorically. We live in a world where vulnerable software can take a country's infrastructure out.

[1] https://www.zdnet.com/article/microsoft-70-percent-of-all-se...
Zig has a pretty great type system, and languages like Rust and C++ are sometimes not great at preventing accidental heap allocations. Zig and C make this very explicit, and it's great to be able to handle allocation failures in robust software.

What's great about its type system? I find it severely limited and not actually useful for conveying and checking invariants.
That is the usual fallacy, because it assumes everyone has full access to the whole source code and is tracking down all the places where the heap is being used.
It also assumes that the OS doesn't lie to the application when allocations fail.
Zig makes allocations extremely explicit (even more than C) by having you pass around the allocator to every function that allocates to the heap. Even third-party libraries will only use the allocator you provide them. It's not a fallacy; you're in total control.
> pass around the allocator to every function that allocates to the heap.

What prevents a library from taking an allocator, saving it somewhere hidden, and using it silently?

The authors of the library.

Why, are you going to abort if too many calls to the allocator take place?

You can if you want. You can write your own allocator that never actually touches the heap and just hands out memory from a big chunk on the stack if you want to. The point is that you have fine-grained (per-function) control over the allocation strategy, not only in your codebase but also in your dependencies.
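A minimal sketch of that stack-backed case, using std.heap.FixedBufferAllocator (Zig ~0.14 std API; the 4 KiB buffer and the sizes are just illustrative):

    const std = @import("std");

    pub fn main() void {
        // One big chunk on the stack; every allocation below is carved out of it.
        var buffer: [4096]u8 = undefined;
        var fba = std.heap.FixedBufferAllocator.init(&buffer);
        const allocator = fba.allocator();

        // Pass this allocator to your own code or to a dependency; when the
        // 4 KiB run out you get error.OutOfMemory to handle, not an abort.
        const nums = allocator.alloc(u32, 512) catch |err| {
            std.debug.print("allocation failed: {}\n", .{err});
            return;
        };
        defer allocator.free(nums);

        std.debug.print("got {} u32s without touching the heap\n", .{nums.len});
    }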
Allocation strategy isn't the same as knowing exactly when allocations take place.

You missed the point that libraries can have their own allocators and not expose customisation points.

Sure they can. But why would they choose to?

Because the language doesn't prevent them, and they own their library.

> It also assumes that the OS doesn't lie to the application when allocations fail.

Gotta do the good ol' `vm.overcommit_memory = 2`, and maybe adjust overcommit_ratio as well, to make sure the memory you allocated is actually available.

OS-specific hack, and unrelated to C.

Not at all; rather, there is no guarantee that the C abstract machine described in ISO C actually returns NULL on memory allocation failures, as some C advocates without ISO C legalese expertise seem to claim.
>> Why would I use a new language...

If you are asking that question, you should not use a new language. Stick with what works for you. You need to feel that something is unsatisfactory with what you are using now in order to consider changing.
To me the argument is that memory errors are just one type of logic error that can lead to serious bugs. You want a language that reduces logic errors generally, not just memory-safety ones, and Zig's focus on simplicity and being explicit might be the way to accomplish that.
For large performant systems, what makes sense to me is memory safety by default, with robust, fine-grained levers available to opt in to performance over safety (or to achieve both at once, where that's possible).
Zig isn't that, but it's at least an open question to me. It has some strong safe-by-default constructs yet also has wide open safety holes. It does have those fine-grained levers, plus simplicity and explicitness, so not that far away. Perhaps they'll get there by 1.0?
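For what it's worth, one of those levers already exists: safety checks follow the build mode (Debug/ReleaseSafe vs ReleaseFast/ReleaseSmall), and you can also flip them per scope with @setRuntimeSafety. A rough sketch:

    const std = @import("std");

    fn sumUnchecked(xs: []const f64) f64 {
        // Opt this one scope out of runtime safety checks (bounds, overflow),
        // even in a Debug or ReleaseSafe build.
        @setRuntimeSafety(false);
        var total: f64 = 0;
        for (xs) |x| total += x;
        return total;
    }

    pub fn main() void {
        const data = [_]f64{ 1.0, 2.0, 3.5 };
        std.debug.print("{d}\n", .{sumUnchecked(&data)});
    }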
Logical errors and memory errors aren’t even close to being in the same ballpark.

Memory errors are deterministic errors with non-deterministic consequences. Logical errors are mostly non-deterministic (subjective and domain-dependent) but with deterministic consequences.
> ...memory safety is simply table stakes these days.
Is there like a mailing list Rust folks are on where they send out talking points every few months? I have never seen a community so in sync on how to talk about a language or project. Every few months there's some new phrase or talking point I see all over the place, often repeated verbatim. This is just the most recent one.
> For most "good enough" use cases, garbage collectors work fine and I wouldn't bother with a systems programming language at all.
It's not just about performance, it's about reusability. There is a huge amount of code written in languages like Java, JS, Go, and Python that cannot be reused in other contexts because they depend on heavy runtimes. A library written in Zig or Rust can be used almost anywhere, including on the web by compiling to wasm.
Yes, we know how to offer memory safety; we just don't know how to offer it without exacting a price that, in some situations, may not be worth it. Memory safety always has cost.
Rust exists because the cost of safety, offered in other languages, is sometimes too high to pay, and likewise, the cost Rust exacts for its memory safety is sometimes too high to pay (and may even adversely affect correctness).
I completely agree with you that we reach for low-level languages like C, C++, Rust, or Zig in "special" circumstances - those that, for various reasons, require precise control over hardware resources, and/or focus more on the worst case rather than the average case - and most software has increasingly been written in high-level languages (and there's no reversal in this trend). But just as different factors may affect your decision on whether to use a low-level language, different factors may affect your decision on which low-level language to choose (many of those factors may be subjective, and some are extrinsic to the language's design). Of course, if, like the vast majority of programmers, you don't do low-level programming, then none of these languages are for you.
As a long-time low-level programmer, I can tell you that all of these low-level languages suffer from very serious problems, but they suffer from different problems and require making different tradeoffs. Different projects and different people may reasonably want different tradeoffs, and just as we don't have one high-level language that all programmers like, we also don't have one low-level language that all programmers like. However, preferences are not necessarily evenly distributed, and so some languages, or language-design approaches, end up more popular than others. Which languages or approaches end up more popular in the low-level space remains to be seen.
Memory safety is clearly not "table stakes" for new software written in a low level language for the simple reason that most new software written in low level languages uses languages with significantly less memory safety than Zig offers (Zig offers spatial memory safety, but not temporal memory safety; C and C++ offer neither, and most new low level software written in 2025 is written in C or C++).
I can't see a strong similarity between Go, a high-level language, and Zig, a low-level language, other than that both - each in its own separate domain - values language simplicity. Also, I don't see Zig as being "a better C", because Zig is as different (or as similar) from C as it is from C++ or Rust, albeit on different axes. I find Zig so different from any existing language that it's hard to compare it to anything. As far as I know, it is the first "industry language" that's designed almost entirely around the concept of partial evaluation.
It depends on the purpose. If the objective is maximum scale and performance then Zig. The low-level mechanics of userspace I/O and execution scheduling in top-end database architectures strongly recommends a language comfortable expressing complex relationships in contexts where ownership and lifetimes are unavoidably ambiguous. Zig is designed to enable precise and concise control in these contexts with minimal overhead.
If performance/scale-maxxing isn't on the agenda and you are just trying to crank out features then Rust probably brings more to the table.
The best choice is quite arguably C++20 or later. It has a deep set of somewhat unique safety features among systems languages that are well-suited to this specific use case.
First, I would avoid using any low-level language if at all possible, because no matter what you pick, the maintenance and evolution costs are going to be significantly higher than for a high-level language. It's a very costly commitment, so I'd want to be sure it's worth it. But let's suppose I decided that I must use a low-level language (perhaps because worst-case behaviour is really important or I may want to run in a low-memory device or the DB was a "pure overhead" software that aims to minimise memory consumption that's needed for a co-located resource heavy application).
Then, if this were an actual product that people would depend on for a long time, the obvious choice would be C++, because of its maturity and good prospects. But say this is hypothetical or something more adventurous that allows for more risk, then I would say it entirely depends on the aesthetic preferences of the team, as neither language has some clear intrinsic advantage over the other. Personally, I would prefer Zig, because it more closely aligns with my subjective aesthetic preferences, but others may like different things and prefer Rust. It's just a matter of taste.
> the DB was a "pure overhead" software that aims to minimise memory consumption that's needed for a co-located resource heavy application)
Thanks pron for the reply. This describes it the best. To minimize resource consumption in a "pure overhead" software. It's currently written in Java and we are planning a rewrite in a systems PL.
Please provide some documentation on how to use C libraries without such an interop layer in Rust. And while bindgen does most of the work, it can be pretty tedious to get running.
Tried to use Swift outside Xcode and it’s a pain.
Especially when writing CLI apps, the Swift compiler choked and said there was an error, mentioning no line number. Good luck with that.
Also the Swift tooling outside Xcode is miserable.
Even Rust tooling is better than that, and Swift has a multi billion dollar company behind it.
What a shame…
I have no hard data. I have seen comments to this effect on HN. Somewhat famously, ThePrimeagen threw in the towel on it. I would love to hear from others with 4+ years of Rust experience though.
Avoid it and you're good, you just have to accept that a big part of the language is not worth its weight. I guess at that point a lot of people get disillusioned and abandon it whole, when in reality you can just choose to ignore that part of the language.
(I'm rewriting my codex-rs fork to remove all traces of async as we speak.)
If "any amount" means millions of concurrent connections maybe. But in reality we've build thread based concurrency, event loops and state machines for decades before automatic state machine creation from async code came along.
Async doesn't have access to anything that sync rust doesn't have access to, it just provides syntactic sugar for an opinionated way of working with it.
On the contrary, async is very incompatible with mmap, for example, because a page fault that pauses the thread will block the entire executor (or at least that executor thread).
I'd even argue that once you hit that scale you want more control than async offers, so it's only good at that middle ground where you have a lot of stuff, but you also don't really care enough to architect the thing properly.
None of those memory-safe languages allows you to work without a heap. And I don't mean "avoid allocations in that little critical loop". I mean "no dynamic allocation, never, ever". A lot of tasks don't actually require dynamic allocation; for some it's highly undesirable (e.g. embedded with limited memory and long uptimes), and for some it's not even an option (like when you are writing an allocator). Rust has some support for zero-runtime use, but a lot of its features are either useless or outright in the way when you are not using a heap. Swift and the others don't even bother.
Unpopular opinion: safety is a red herring. A language shouldn't prevent the programmer from doing the unsafe thing; rather, it should provide an ergonomic way to do things in a safe way. If there is no such way, that's on the language designer, not the programmer. Rust is the worst offender: there is still no way to do parent links, other than ECS/"data oriented" approaches which, while they have their advantages, are both quite unergonomic and provide memory safety by flaying it, stuffing the skin with cow dung, and throwing the rest out of the window.
>strong dislike towards Go.
Go unsolves problems without unlocking any new possibilities. Zig unsolves problems, but it aims at niches where the "solution" doesn't work.
Genuinely curious because I don't know: when you group Swift with Rust here, do you mean in terms of memory safety guarantees or in the sense of being used for systems-level projects? I've always thought of Swift as having runtime safety (via ARC), not the same compile-time model as Rust, and mostly confined to Apple platforms.
I'm surprised to see them mentioned alongside each other, but I may very well be missing something basic.
As a developer interested in Zig, it is nice to see books being released, but I'd be a bit resistant to buy anything before Zig reaches 1.0, or at least before they settle on a stable API. Both the builder and the std keep changing; 0.15 alone is a huge breakage. Does anyone know how those changes would affect this book?
People are building real companies with Zig (e.g. TigerBeetle) and important projects (e.g. Ghostty) without the 1.0 backward compatibility guarantee. Zig must be doing something right. Maybe this also keeps opinionated 9-to-5er, enterprise-type devs out of it, which might be a good thing :).
Are there other notable commercial companies/projects besides TigerBeetle relying on Zig? According to public information, TigerBeetle has about eight employees and a single customer, doesn't it?
I'm not sure how many customers TigerBeetle has, but I really hope they are successful. It would be great to see such a quality-focused engineering org make it. They are basically doing what many devs really want to do - make the highest quality and fastest stuff possible - instead of banging out random features that usually no one actually cares about. I don't have a use case for their tech right now, but the moment I have a need for anything in the vicinity I'll be checking it out.
We’re doing pretty well as a business already, contrary to Rochus’ comment, which is not accurate.
Our team is 16, we have $30M in investment, and already some of the largest brokerages, exchanges, and wealth managements, in their respective jurisdictions are customers of TigerBeetle.
We have a saying:
“Good engineering is good business, and good business is good engineering.”
At least in TigerBeetle’s experience, the saying is proving true. We really appreciate your support and kind words!
Thanks for the hint. According to public sources, Bun's runtime infrastructure and bindings are written in Zig, but it uses Apple's JavaScriptCore engine written in C++ (i.e. Zig is essentially used in a thin wrapper layer around a large C++ engine). Bun itself apparently struggles with stability and production readiness. Oven (Bun's company) has around 2-10 employees according to LinkedIn.
Lightpanda looks interesting, thanks for the hint. According to public sources, behind the development is a company of 2-10 employees (like Bun). The engine uses parts of the Netsurf browser (written in C) and the V8 engine (written in C++), and - as far as I can tell - is in an early development stage.
I guess almost every single language out there has at least one "real" company using it, so yeah, Zig is still mostly hype and blog posts praising its awesomeness.
As a Zig adopter I was rooting for this because even though there is the official documentation and some great community content, it's nice to just kick back and read a well written book. I'm also very happy with other Manning titles; they have built a high level of trust with me.
To answer your question - is it too early? My expectation is that the core language, being quite small by design, will not change that much before 1.0. Some things will, especially async support.
What I think we will see is something like a comprehensive book on C++14. The language has not changed that much between then and now, but there will be new sections to add and some sections to rework with changes to the interface. The book would still be useful today.
Not a perfect analogy because C++ maintains backwards compatibility but I think it is close.
Do you have a history we can look at to see how good you are at predicting this for programming languages? Like say, some 2020 predictions you had for languages which would or would not ship 1.0 by 2025 ?
I made a set of predictions for Rust in 2022, nearly all of which turned out to be correct. And I was publicly confident Go and Rust would be massive when they reached 1.0. I was right on both counts.
But I will also admit I don’t follow developments in zig as closely as Rust. I’ve never written any Zig. And in any case, past performance isn’t indicative of future performance.
I could be wrong about this prediction, but I don’t think I will be. From what I’ve seen Andy Kelley is a perfectionist who could work on point releases forever. But his biggest users (tigerbeetle and bun especially) will only be taken seriously once Zig is 1.0. They’ll nudge him towards 1.0. They can wait a few years, but not forever. That’s why I guessed 4 years.
> But his biggest users (tigerbeetle and bun especially) will only be taken seriously once Zig is 1.0.
TB is only 5 years old but already migrating some of the largest brokerages, exchanges and wealth managements in their respective jurisdictions.
Zig’s quality for us here holds up under some pretty extreme fuzzing (a fleet of 1000 dedicated CPU cores), Deterministic Simulation Testing and Jepsen auditing (TB did 4x the typical audit engagement duration), and is orthogonal to 1.0 backwards compatibility.
Zig version upgrades for our team are no big deal, compared to the difficulty of the consensus and local storage engine challenges we work on, and we vendor most of our std lib usage in stdx.
> They’ll nudge him towards 1.0.
On the contrary, we want Andrew to take his time and get it right on the big decisions, because the half life of these projects can be decades.
We’re in no rush. For example, TigerBeetle is designed to power the next 30 years of transaction processing and Zig’s trajectory here is what’s important.
That said, Zig and Zig’s toolchain today, is already better, at least for our purposes, than anything else we considered using.
If you don’t mind my asking, did TB add support for transaction metadata? I’ve seen this anti-pattern of map<string, string> associated with each transaction. Far from ideal, but useful. Last I checked TB didn’t support that because it would need dynamic memory allocation. Does it support it now or will it in future?
It’s not that it would need dynamic memory allocation (it could be done with static), but rather it’s not essential to performance—you could use any KV or OLGP for additional “user data”, it’s not the hard contended part.
To keep consistency, the principle is:
- write your data dependencies if any, then “write to TB last as your system of record”,
- “read from TB first”, and if it’s there your data dependencies will be too.
Excellent way to stay busy producing revised editions over the next few years.
I really enjoy writing Zig and I think it's going to be an important language in the future. I do not enjoy porting my code between versions of the language. But early adopters are important for exploring the problem space, and I would have loved to find a canonical source (aside from the docs, which are mostly nice) for learning the language when I did. A text that evolves with the language has a better chance of becoming that canonical onboarding source.
I think this is great, gives those wanting a good formal foundation a guide to getting organized and making something happen.
With time this is also going to be great for the author with new iterations of the book, but getting in early like this can set the author and the language up for success long term.
Well the MEAP just started, 3 chapters are complete and the rest will follow probably next year.
IMO it’s a bet: Zig stays stable enough and this will be _the_ Zig book for a while. Should the bet not pay off it still cements the author as an authority on the language and gets them a foot in the door.
Manning doesn't pay that much to first time authors, and it looks like it's the first book for the author, Garrison Hinson-Hasty.
My guess is it's about $2k upfront and the author owes it back if they don't deliver.
Teiva Harsanyi did a good writeup recently about working with Manning as a first-time author. He got $2k upfront and $2k after delivering the first 1/3rd.[0]
When I wrote Elm in Action for Manning, I talked with them explicitly about the language being pre-1.0 and what would happen if there were breaking changes. The short answer was that it was something they dealt with all the time; if there were small changes, we could issue errata online, and if there were sufficiently large changes, we could do a second edition.
I did get an advance but I don't remember a clause about having to return any of it if I didn't earn it out. (I did earn it out, but I don't remember any expectation that I'd have to give them money back if I hadn't.) Also my memory was that it was more than $2k but it was about 10 years ago so I might be misremembering!
When I wrote Nim in Action[1] for Manning, it was prior to 1.0 as well. It was definitely a bit awkward, but the breaking changes to the stuff covered in the book were relatively minor.
Having read through a few of their books, I'd say Manning has a pretty good record of producing a lot of good content. No Starch and PragProg are arguably (slightly) better in terms of writing quality, but they don't publish nearly as many books.

Packt certainly has more books, but the quality is absolute garbage.
Whenever a new language comes out, I usually take a weekend to at least dabble in it to see if it's worth getting into. Most of the languages end up in my "cool story" bucket, but it's also how I found Elixir, and I've been working in it full-time for the last 7 years ever since.
In the case of Manning, they presell the PDF. That costs them nothing while expanding their catalog in a way that doesn't feel like a ripoff for the subscribers. I'm not expecting a MEAP title to have the same level of polish as a completed book. Rather, I appreciate having close to bleeding-edge info that's been somewhat curated for my consumption.
Not a comment on the book but I hope a slightly off topic thread is OK...
As a systems programmer, what is the selling point of Zig? For me, memory safety is the number 1, 2, 3, 4 and 5 problem with C. A new language that doesn't solve those problems seems very strange to me. And as a counterpoint, I will tolerate massive number of issues with Rust because it gives us memory safety.
I work in security so that may give me a bias. But maybe instead of "bias" it's more like awareness. If you don't deal with it day in, day out, it seems impossible to really internalise how completely fucking broken something like the Linux kernel is. Continuing to write e.g. driver code in an unsafe language is just not acceptable for our civilisation.
But, also maybe you don't need full complete memory safety, if a language has well designed abstractions maybe it can just be "safe enough in practice". I've worked on C++ code that feels that way. Is this the deal with Zig?
1. Lack of memory safety is a big problem particularly because it's a common cause of dangerous vulnerabilities, but spatial safety is more important than temporal safety for that reason, and Zig does offer spatial safety.
2. Other important issues of concern when it comes to security/correctness are language complexity and compilation speed. A complicated language with lots of implicitness can obscure bugs and make reviews slower. Slow compilation reduces the iteration speed and harms testing. Zig focuses on these two. The language is simple, with no implicitness - overloading of any kind is not allowed, and neither are hidden calls of any kind. Even if security were the only concern, it's not clear at all how much complexity and compilation speed should be sacrificed for temporal safety. Remember that it's easy to classify bugs by technical causes, but more diffuse aspects, like testing and review, are also very important, and Zig tries to find a good balance.
3. Nevertheless, the language is still very expressive [1]. In any language, you want the algorithm to be clear, with neither extraneous details nor important details hidden, for that domain. I think Zig gets that about right for low-level programming (and C++ doesn't).
For me, the #1 problem with C is lack of safety, and the #2 problem is lack of expressivity. With C++, the main problem for me is language complexity and implicitness, with slow compilation and lack of safety tied for number 2. So Zig is as expressive as C++, but not only safer but also much simpler, and it compiles faster (and improving compilation speed further is an important goal).
[1]: I say that two languages are equally expressive if, over all algorithms, idiomatic implementations in both languages (that are roughly equally clear) differ in length by no more than a linear relation with a small constant.
I think that's just a matter of syntax habits, presumably because you're already familiar with C++ syntax. The syntax in your example is especially "cryptic" simply because it's an FFI signature (of a function that's not written in Zig and doesn't use the normal Zig data representations).
I guess this must vary by usecase, but in the Linux kernel work I do, expressiveness is just not an issue. C is a dumb language, it's fine. Coding is just such a small part of the engineering task. Schlepping to write nontrivial stuff in C can be a drag but it's just such a minor thing compared to the task of getting OS software designed correctly that I am not really excited about trying to improve on that axis.
Whereas the memory safety issue is totally fundamental and totally terrifying. There's no other way to solve it than with a new language. The cost of changing languages is staggering but we have no choice. Yet... Paying that cost and not eking out the maximum safety benefit that's practical... Seems a bit dodgy to me.
> There's no other way to solve it than with a new language.
That's not true, though. If memory safety is the only thing that you need and C doesn't have, there are products that offer memory-safety proofs for C (e.g. https://www.trust-in-soft.com). You do need to add some lifetime annotations and may want to change your code here and there, but it's overall much cheaper than a rewrite in a new language. I believe such solutions are more popular, too; few companies - even and especially those that care primarily about correctness - are crazy enough to justify a rewrite of an existing product just for memory safety.
> Paying that cost and not eking out the maximum safety benefit that's practical... Seems a bit dodgy to me.
The cost is not the same, though (and if you only consider rewrites, as I said, there are better options). It sounds like you're saying that any cost is worth any added safety. I don't think that's true, but if it were, then something like ATS is probably what you're after (it isn't, though, because it's practically nobody's choice). Anything short of that is some compromise between cost and safety. Rust's compromise is just different from Zig's. They're both significantly safer than C and a far way off from ATS. Everyone seems to agree that the sweet spot is somewhere on that spectrum - not C, not ATS - but there's no agreement on where.
Spatial memory safety means that a language (at least in the subset that's designated "safe") doesn't allow you to manufacture pointers into memory that may contain data of a different type than what's expected by the pointer (we'll call such pointers "invalid"). The classic example of spatial memory safety is guaranteeing that arrays are never accessed out of bounds (hence "spatial", as in pointers are safely constrained in the address space). Zig guarantees such spatial safety (except when using delineated "unsafe" code).

Temporal memory safety is the guarantee that you never access pointers that were valid at some point in time after they've become invalid due to reallocation of memory; we call such pointers "dangling" (hence "temporal", as in pointers are safely constrained in time). The classic example of this is use-after-free. Zig does not guarantee temporal safety, and you can accidentally have a dangling pointer (i.e. access a once-valid pointer after it's become invalid).

Both invalid and dangling pointers are especially dangerous because, in languages where they can occur, they've been a very common source of exploitable security vulnerabilities. However, violating spatial memory bounds is the more dangerous of the two, as the result is more easily exploited by attackers in the case of a vulnerability, being more predictable.
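To make the distinction concrete in Zig terms, a rough sketch (the behaviour described in the comments assumes a Debug or ReleaseSafe build):

    const std = @import("std");

    // Spatial: slice indexing is bounds-checked, so a bad index panics
    // ("index out of bounds") instead of reading adjacent memory.
    fn nth(buf: []const u8, i: usize) u8 {
        return buf[i];
    }

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit();
        const allocator = gpa.allocator();

        _ = nth(&[_]u8{ 1, 2, 3 }, 2); // fine; nth(..., 7) would panic, not corrupt

        // Temporal: the language itself won't stop a dangling pointer.
        // (The GPA's debug features may happen to catch some of these,
        // but that's an allocator behaviour, not a language guarantee.)
        const slice = try allocator.alloc(u8, 8);
        const p = &slice[0];
        allocator.free(slice);
        _ = p.*; // use-after-free
    }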
1. you get spatial memory safety which iirc is more common for security problems than temporal.
2. Honestly, for high-performance applications you might reach for an ECS-type system anyway (I think DOM engines do this), at which point you'd be getting around the borrow checker anyway.
> you get spatial memory safety which iirc is more common for security problems than temporal.
No, at least in the Linux kernel use-after-free is the biggest memory safety issue by a long way.
Maybe this isn't true in some other systems software? Like if you are dealing with parsing. But you don't really need memory safety for that use case; fuzzing is unreasonably effective.
(Obviously for greenfield dev, safe languages are the way to go. But setting up a dank fuzzing pipeline is easier than migrating to a new language).
Maybe in the future, if the language has very good AI support, the security guarantees of the language won't be as important, as AI will find potential bugs well enough. This may be the case with Zig, as the language is simple and consistent, and the lack of macros will make it easier for LLMs to understand the code.
AI is not remotely capable enough to be trusted with that. Perhaps in the future it will be, but I'm not betting on it with the lack of improvement we have seen thus far.
You touched on the idea in your last paragraph. Comparing Zig to C using memory safety as a binary threshold instead of a continuous scale is apples to oranges.
For one, Rust isn't even at the far right of the memory safety scale, so any argument against Zig due to some epsilon of safety has to reckon with the epsilon Rust is missing as well. That scale exists, and in choosing a language we're choosing (among other things) where on that scale we want to be. Maybe Rust is "good enough", and maybe not.
For two, there's a lot more to software design than memory safety _even when your goals revolve around the safety and stability of the end product_.
So long as you get the basics right (like how Zig uses slices by default and has defer/errdefer statements), you won't have an RCE from a memory safety bug, or at least not one that you wouldn't likely see in Rust anyway (e.g., suppose you're writing a new data structure in both languages; suppose that it involves back-references or something else where the common pattern in Rust would involve creating backing memory and passing indices around, combined with an unsafe block on the actual cell access to satisfy the borrow checker; the fact that you had to do something in an unsafe block opens you up to safety issues like index wraparound causing OOB reads).
If memory safety is "good enough" from an RCE perspective, what are we buying with the rest of Rust's memory safety? Why is it so important? For everything else, you're just preventing normal application bugs -- leaking user data, incorrect computations, random crashes, etc. The importance of normal application bugs varies from project to project, but in the context of systems programming of any kind I'll argue that these are absolutely as critical as RCEs and the other "big" things you're trying to prevent via memory safety. The whole reason you try to prevent an RCE is to ensure that you have access to your files/data/tools and that an attacker doesn't, but if logic bugs corrupt data, crash your tools, and leak PII then you're still fucked. Memory safety didn't save you.
That shift in perspective is important. If our goal is preventing ordinary application bugs on top of memory-safety bugs it becomes blatantly obvious that you'd be willing to trade a little safety for some bug prevention.
Does Zig actually help reduce bugs though? IME, yes. The big thing it has going for it is that it's a small, cohesive language that gives you the power you need to do the things you're trying to do. Being small makes it possible to create features that compose well and are correct by default, and having the power to do what you're trying to do means you don't have to create convoluted (expensive, bug-prone) workarounds. Some examples:
1. `let` rebindings in Rust are a great feature, and I like them a lot. Better than 90% of the time I use them correctly. The other 10% of the time I'm instead trying to create a new variable and accidentally shadowing an existing one. That's often not the end of the world, but if there exists any code after my `let` rebinding expecting the old value then it's totally incorrect (assuming compatible types, so that you don't get a compilation failure). Zig yells loudly about the error (see the sketch below).
2. The error-handling system in Zig is easy to get right because of largely compatible return types. By contrast, I see an awful lot of `unwrap` in Rust code in places that really shouldn't have it, and in codebases that try to use `?` or other more-likely-to-be-correct strategies the largely incompatible return types force most people into slapping `anyhow` around everything and calling it a day.
3. Going back to error-handling, Rust allows intentional discards of return values, but if you start by discarding the return value of a non-error function and the signature is later updated to potentially return errors Rust still lets you discard the value. Zig forces you to do something explicitly acknowledging that there was an error. 100% of the time the compiler has yelled at me for a potential logic bug it was correct to do so.
It really is just a bunch of little things, but those little things add up and make for a pleasant language with shockingly few bugs. Maybe my team is just even more exceptional than I already think or something and I'm over-indexing on the language, but in the last couple years I haven't seen a single memory safety issue, or much in the way of other bugs in the Zig part of the company.
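For the shadowing point (1), the Zig behaviour being referred to is that reusing a name is rejected outright; a tiny sketch:

    test "shadowing is rejected" {
        const count: u32 = 1;
        // const count: u32 = count + 1;
        // ^ compile error: Zig refuses to let a new `count` shadow the existing
        //   one in the same (or an enclosing) scope, so the "accidentally reused
        //   an old name" mistake can't compile.
        _ = count;
    }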
1. I guess by "let rebinding" you mean shadowing. Clippy has lints for shadowing which are default off, you might find that some patterns of shadowing which are legal in Rust are too often problematic in your software and so you should tell Clippy to warn you about them or even deny them by default.
For example you might decide to warn for clippy::shadow_unrelated which says if I have a variable named "goose" and then later in the function I just make another variable that's not obviously derived from the first variable but it is called "goose" anyway, warn me I did that.
2. I don't see any advantage of "largely compatible" error handling over anyhow, maybe you could be more explicit about what you want here or give an example.
3. What you're claiming for Zig here is a semantic check. This could exist but be fallible (it can't be infallible because Rice's Theorem), but I suspect the reality is that it's not a semantic check and you've misattributed your experience since I don't see any such check in Zig. Maybe an example would clarify.
I really didn't mean to make this about Zig vs Rust. That was a mistake. My real point is that Zig is also a great language, even when you care about safety.
That said, the things you asked about:
1. I suppose you might get away with linting all of shadow_reuse, shadow_same, and shadow_unrelated. Would that sufficiently disable the feature?
2. The biggest problems with overusing `anyhow` to circumvent the language's error system are performance and type safety. Perf is probably fine enough for most applications (though, we _are_ talking about systems programming, and this is yet another way in which allocation-free Rust is hard to actually do), but not being able to match on the returned error makes appropriately handling errors clunkier than it should be, leading to devs instead bubbling things up the call stack even more than normal or "handling" the error by ignoring any context and using some cudgel like killing connections regardless of what the underlying error was.
3. Rice's Theorem is only tangentially related. Your goal, and one that Rust also strives for, is to make it so that changes to the type signature of a function require compatible changes at call sites, which you can often handle syntactically. Rust allows `let _ = foo()` to also silence errors, and where Zig normally allows `_ = foo()` to silence unhandled return values it additionally requires `try`, `catch`, or `return` (or similar control flow). You have options like `_ = foo() catch {}` or `_ = foo() catch @panic("WTF")` if you really do want to silence the error, but you'll never be caught off-guard by an error being added to a function's return type.
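And for point 3, a small sketch of the discard rules (`save` is just a made-up stand-in for any function whose signature later grows an error):

    const std = @import("std");

    // Originally `fn save() void`; later someone makes it fallible.
    fn save() !void {
        return error.DiskFull;
    }

    pub fn main() void {
        // _ = save();         // no longer compiles: the error union must be handled
        save() catch {};       // fine, but you had to say "ignore errors" out loud
        save() catch |err| std.debug.print("save failed: {}\n", .{err});
    }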
> So long as you get the basics right (like how Zig uses slices by default and has defer/errdefer statements), you won't have an RCE from a memory safety bug
This is the exact same argument C/C++ uses. Just don't do anything wrong, and nothing wrong will happen.
Zig certainly has a lot of interesting features and good ideas, but I honestly don't see the point of starting a major project with it. With alternatives like Rust and Swift, memory safety is simply table stakes these days.
Yes, I know Zig does a lot of things to help the programmer avoid mistakes. But the last time I looked, it was still possible to make mistakes.
The only time I would pick something like C, C++, or Rust is if I am planning to build a multi-million line, performance sensitive project. In which case, I want total memory safety. For most "good enough" use cases, garbage collectors work fine and I wouldn't bother with a system's programming language at all.
That leaves me a little bit confused about the value proposition of Zig. I suppose it's a "better C". But like I said, for serious industry projects starting in 2025, memory safety is just tablestakes these days.
This isn't meant to be a criticism of Zig or all of the hard work put into the language. I'm all for interesting projects. And certainly there are a lot of interesting ideas in Zig. I'm just not going to use them until they're present in a memory safe language.
I am actually a bit surprised by the popularity of Zig on this website, given the strong dislike towards Go. From my perspective, both languages are very similar, from the perspective that they decided to "unsolve already solved problems". Meaning, we know how to guarantee memory safety. Multiple programming languages have implemented this in a variety of ways. Why would I use a new language that takes a problem a language like Rust, Java, or Swift already solved for me, and takes away features (memory safety) that I already have?
> memory safety is simply table stakes
Why?
And also, this is black and white thinking, implying that "swift and rust" are completely memory "safe" and zig is completely "unsafe". It's a spectrum.
The real underlying comparison statement here is far more subjective. It's along the lines of: "I find it easier to write solid code in rust than in zig". This is a more accurate and fair way to state the semantics of what you are saying.
Saying things like "rust is memory safe. Zig is not memory safe" is reductionist and too absolutist.
If decades of experience shows us anything it is that discipline and skill is not enough to achieve memory safety.
Developers simply aren’t as good at dealing with these problems as they think they are. And even if a few infallible individuals would be truly flawless, their co-workers just aren’t.
I'm not convinced that on average Zig is any less safe, or produces software that is any less stable, then Rust.
Zig embraces reality in its design. Allocation exist, hardware exists, our entire modern infrastructure is built on C. When you start to work directly with those things, there is going to be safety issues. That's just how it is. Zig tries to give you as many tools as possible to make good decisions at every turn, and help you catch mistakes. Like it's testing allocator detecting memory leaks.
Rust puts you in a box, where the outside world doesn't exist. As long as you play by its rules everything will be fine. But it eventually has to deal with this stuff, so it has unsafe. I suspect if Rust programmers went digging through all their dependencies, especially when they are working on low level stuff, they would be surprised by how much of it actually exists.
Zig tried to be more safe on average and make developers aware if pitfalls. Rust tried to be 100% safe where it can, and then not safe at all where it can't. Obviously Rusts approach has worked for it, but I don't think that invalidates Zigs. Especially when you start to get into projects where a lot of unsafe operations are needed.
Zig also has an advantage in that it simplifies memory management through its use of allocators. If you read Richard Feldman's write up on the Roc compilers rewtire in Zig, he talks about how he realized their memory allocation patterns were simple enough in Zig that they just didn't need the complexity of Rust.
To be clear, Rust encourages the development of safe abstractions around unsafe code, so that the concern goes from proportion of unsafe to encapsulation of unsafe. Whether you trust some library author to encapsulate their unsafe is, I think, reducible to whether you trust a library author to write a good library. Unsafe is not all-or-nothing. Thus, as with all languages, good general programming practices come before language features.
That's kind of my point. Because it's isolated and abstracted I wouldn't be surprised if most Rust devs have no idea how much unsafe code is actually out there.
Rust does not want you to think about memory management. You play by its rules and let it worry about allocations/deallocation. Frankly in that regard Rust has more in common with GC languages than it does Zig or C. Zig chooses to give the developer full control and provides tools to make writing correct/safe code easier.
Although not a comprehensive report, people tend to count the source lines of unsafe in a Rust codebase as a metric. Moreover, reputable libraries worth using typically take care to reduce unsafe, and where it is used, encapsulate it well. I don't think you have a substantive point on the matter. Unsafe certainly can be abused, but it's not a bogeyman that people scarcely catch glimpses of. Unsafe doesn't demote the safety of Rust to that of C, or something like that.
Your comments on Rust's philosophy towards memory management are off base. Rust is unlike GC languages, even Swift, in that it makes allocations and deallocations explicit. For example, I know that one approach to implementing async functions in trait objects was rejected because it would've made implicit heap allocations. Granted, Rust is far behind on reified and custom allocators. Rust has functionality to avoid the default alloc crate, which is the part of libstd that does heap allocations, and a library ecosystem for alternate data structures. Rust doesn't immediately give you total access, but it's only a few steps away. Could it be easier to work with? Absolutely. The same goes for unsafe.
Thank you for the thoughtful reply, but I think you missed my point.
I'm not saying Rust isn't substantially safer than C. When people like Greg Kroah-Hartman say that Rust by its design eliminates a lot of the memory bugs he's been fighting for 40 years, I believe him.
My point is that people tend to talk about it as an all or nothing proposition. That Rust is memory safe. Period. And any language that can't put that on the tin is immediately disqualified, that somehow their approach to solving similar problems is invalid.
By the very nature of the system no language that wants to interact with the hardware can be entirely memory safe, not even Rust. It has chosen a specific solution, and a pretty damn interesting one as far as that goes, but it still has to deal with unsafe. And the more directly your program has to deal with the hardware the more unsafe code it's going to have to deal with.
Zig has also chosen an approach to deal with the problem. Their's is one that gives far more direct control to the programmer. Every single memory allocation is explicit in that you have to directly interact with an allocator and you have to free that memory. It's not hidden behind constructors/destructors and controlled via RAII patterns (side note, there are managed data structures that you give an allocator to via an init and free via a deinit, but you still have to pass in the allocator and those are being largely replaced).
If you are only dealing with problems where you can interact with Rust's abstractions I'm sure it is more safe then Zig, but I don't think it's as big a difference as people think. And when you start digging down into systems level programming where the amount of unsafe code you have to write grows, Rusts advantage starts to diminish significantly.
To my point about Rust not wanting you to think about memory, take Vector as an example. You and I know that's doing heap allocations, but I guarantee you a not insignificant number of Rust devs just don't even think about it. And they certainly don't think about all the allocations/deallocations that have to happen to grow and shrink it dynamically.
Compare that to Zigs ArrayList. When you create it you have to explicitly hand it an allocator you created. It could be a general purpose allocator, but it could just as easily be an arena allocator, or even an allocator backed by a buffer you pre-allocated specifically for it. As the programmer you have to directly deal with the fact that thing allocats and deallocates.
Thats what I mean when I say Rust has more in common with GC languages in some ways. When I type "new" in Java I know I'm heap allocating a new object, just Java doesn't want me to think about that because the GC will deal with it. When you create a vector in Rust, it doesn't want you to think about the memory, it just wants you to follow it's borrow checker rules. Which is very different then thinking about allocation/deallocation patterns.
>> memory safety is simply table stakes
> Why?
Because it's a stepping stone to other kinds of safety. Memory safety isn't the be-all and end-all, but it gets us to where we can focus other important things.
And turns out in this particular case we don't even have to pay much for it in terms of performance.
> The real underlying comparison statement here is far more subjective. It's along the lines of: "I find it easier to write solid code in rust than in zig".
Agreed! But also how about "We can get pretty close to memory safety with the tools we provide! Mostly at runtime! If you opt-in!" ~~ signed, people (Zig compiler itself, Bun, Ghostty, etc) who ship binaries built with -Doptimize=ReleaseFast
>Why?
Memory bugs are hard to debug, potentially catastrophic (particularly concerning security) and in large systems software tend to constitute the majority of issues.[1]
It is true that Rust is not absolutely memory safe and that Zig provides some more features than C, but directionally it is correct that Rust (or languages with a similar design philosophy) eliminates billion-dollar mistakes. And you can take that literally rather than metaphorically. We live in a world where vulnerable software can take a country's infrastructure out.
[1] https://www.zdnet.com/article/microsoft-70-percent-of-all-se...
Zig has a pretty great type system, and languages like Rust and C++ are sometimes not great at preventing accidental heap allocations. Zig and C make this very explicit, and it's great to be able to handle allocation failures in robust software.
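As a rough illustration of what handling allocation failure looks like in Zig (a sketch, not any particular project's code):

```zig
const std = @import("std");

pub fn main() void {
    const allocator = std.heap.page_allocator;

    // Allocation failure is an ordinary error value, so robust code can
    // degrade gracefully (smaller buffer, shed load, report and retry)
    // instead of aborting.
    const big = allocator.alloc(u8, 64 * 1024 * 1024) catch {
        std.log.warn("allocation failed, taking a fallback path", .{});
        return;
    };
    defer allocator.free(big);

    std.debug.print("got {} bytes\n", .{big.len});
}
```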
What's great about its type system? I find it severely limited and not actually useful for conveying and checking invariants.
That is the usual fallacy, because it assumes everyone has full access to the whole source code and is tracking down all the places where the heap is being used.
It also assumes that the OS doesn't lie to the application when allocations fail.
Zig makes allocations extremely explicit (even more than C) by having you pass around the allocator to every function that allocates to the heap. Even third-party libraries will only use the allocator you provide them. It's not a fallacy; you're in total control.
> pass around the allocator to every function that allocates to the heap.
what prevents a library from taking an allocator, saving it hidden somewhere and using it silently?
authors of the library
Why, are you going to abort if too many calls to the allocator take place?
You can if you want. You can write your own allocator that never actually touches the heap and just distributes memory from a big chunk on the stack if you want to. The point is you have fine grained (per function) control over the allocation strategy not only in your codebase but also your dependencies.
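A minimal sketch of that idea (joinWords here is a made-up stand-in for a library call; std.mem.join and FixedBufferAllocator are real std APIs, though names can drift between Zig versions):

```zig
const std = @import("std");

// A stand-in for a "library" function: it can only allocate through the
// allocator it was handed, so the caller decides the strategy per call.
fn joinWords(allocator: std.mem.Allocator, words: []const []const u8) ![]u8 {
    return std.mem.join(allocator, " ", words);
}

pub fn main() !void {
    // No heap involved: hand the function a chunk of stack memory instead.
    var stack_mem: [256]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&stack_mem);

    const s = try joinWords(fba.allocator(), &.{ "hello", "from", "the", "stack" });
    std.debug.print("{s}\n", .{s});
    // Nothing to free individually here; the backing memory is just the
    // stack buffer, and it goes away with the frame.
}
```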
Allocation strategy isn't the same as knowing exactly when allocations take place.
You missed the point that libraries can have their own allocators and don't expose customisation points.
sure they can. but why would they choose to?
Because the language doesn't prevent them, and they own their library.
> It also assumes that the OS doesn't lie to the application when allocations fail.
Gotta do the good ol' Linux overcommit tweak, and maybe adjust overcommit_ratio as well, to make sure the memory you allocated is actually available.
OS-specific hack, and unrelated to C.
Your comment was also OS-specific because Windows doesn't lie to applications about failed allocations.
Not at all; rather, there is no guarantee that the C abstract machine described in ISO C actually returns NULL on memory allocation failure, as some C advocates without ISO C legalese expertise seem to claim.
>> Why would I use a new language...
If you are asking that question you should not use a new language. Stick with what works for you. You need to feel that something is unsatisfactory with what you are using now in order to consider changing.
To me the argument is that memory errors are just one type of logic error that can lead to serious bugs. You want a language that reduces logic errors generally, not just memory safety ones, and zig's focus on simplicity and being explicit might be the way to accomplish that.
For large performant systems, what makes sense to me is memory safety by default, with robust, fine-grained levers available to opt in to performance over safety (or to achieve both at once, where that's possible).
Zig isn't that, but it's at least an open question to me. It has some strong safe-by-default constructs yet also has wide open safety holes. It does have those fine-grained levers, plus simplicity and explicitness, so not that far away. Perhaps they'll get there by 1.0?
Logical errors and memory errors aren't even close to being in the same ballpark.
Memory errors are deterministic errors with non-deterministic consequences. Logical errors are mostly non-deterministic (subjective and domain dependent) but with deterministic consequences.
> ...memory safety is simply table stakes these days.
Is there like a mailing list Rust folks are on where they send out talking points every few months? I have never seen a community so in sync on how to talk about a language or project. Every few months there's some new phrase or talking point I see all over the place, often repeated verbatim. This is just the most recent one.
Conspiracy nonsense. GP is advocating for GC languages, not Rust.
It was a joke dude. Relax.
Also, literally the first language he mentioned was Rust, and it's the only one he mentioned that would be in the same class as Zig.
> I am actually a bit surprised by the popularity of Zig on this website
Maybe this just indicates that memory safety is table stakes for you, but not for every programmer on Earth?
> For most "good enough" use cases, garbage collectors work fine and I wouldn't bother with a system's programming language at all.
It's not just about performance, it's about reusability. There is a huge amount of code written in languages like Java, JS, Go, and Python that cannot be reused in other contexts because they depend on heavy runtimes. A library written in Zig or Rust can be used almost anywhere, including on the web by compiling to wasm.
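A toy example of why that reuse works: a Zig function with a C ABI and no runtime or allocator dependencies can be built as a native library or for a wasm32-freestanding target and called from JavaScript (the exact build flags vary by Zig version, so they're omitted here):

```zig
// add.zig: no allocator, no OS dependencies, plain C ABI.
// The same file can be compiled into a native library or targeted at
// wasm32-freestanding and called from JavaScript.
export fn add(a: i32, b: i32) i32 {
    return a + b;
}
```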
Yes, we know how to offer memory safety; we just don't know how to offer it without exacting a price that, in some situations, may not be worth it. Memory safety always has cost.
Rust exists because the cost of safety, offered in other languages, is sometimes too high to pay, and likewise, the cost Rust exacts for its memory safety is sometimes too high to pay (and may even adversely affect correctness).
I completely agree with you that we reach for low-level languages like C, C++, Rust, or Zig in "special" circumstances - those that, for various reasons, require precise control over hardware resources, and/or focus more on the worst case than the average case - and most software has increasingly been written in high-level languages (and there's no reversal in this trend). But just as different factors may affect your decision on whether to use a low-level language, different factors may affect your decision on which low-level language to choose (many of those factors may be subjective, and some are extrinsic to the language's design). Of course, if, like the vast majority of programmers, you don't do low-level programming, then none of these languages are for you.
As a long-time low-level programmer, I can tell you that all of these low-level languages suffer from very serious problems, but they suffer from different problems and require making different tradeoffs. Different projects and different people may reasonably want different tradeoffs, and just as we don't have one high-level language that all programmers like, we also don't have one low-level language that all programmers like. However, preferences are not necessarily evenly distributed, and so some languages, or language-design approaches, end up more popular than others. Which languages or approaches end up more popular in the low-level space remains to be seen.
Memory safety is clearly not "table stakes" for new software written in a low level language for the simple reason that most new software written in low level languages uses languages with significantly less memory safety than Zig offers (Zig offers spatial memory safety, but not temporal memory safety; C and C++ offer neither, and most new low level software written in 2025 is written in C or C++).
I can't see a strong similarity between Go, a high-level language, and Zig, a low-level language, other than that both - each in its own separate domain - values language simplicity. Also, I don't see Zig as being "a better C", because Zig is as different (or as similar) from C as it is from C++ or Rust, albeit on different axes. I find Zig so different from any existing language that it's hard to compare it to anything. As far as I know, it is the first "industry language" that's designed almost entirely around the concept of partial evaluation.
Would you say writing something like a database (storage and query engine) from scratch is better done in Rust or Zig?
It depends on the purpose. If the objective is maximum scale and performance, then Zig. The low-level mechanics of userspace I/O and execution scheduling in top-end database architectures strongly recommend a language comfortable expressing complex relationships in contexts where ownership and lifetimes are unavoidably ambiguous. Zig is designed to enable precise and concise control in these contexts with minimal overhead.
If performance/scale-maxxing isn't on the agenda and you are just trying to crank out features then Rust probably brings more to the table.
The best choice is quite arguably C++20 or later. It has a deep set of somewhat unique safety features among systems languages that are well-suited to this specific use case.
First, I would avoid using any low-level language if at all possible, because no matter what you pick, the maintenance and evolution costs are going to be significantly higher than for a high-level language. It's a very costly commitment, so I'd want to be sure it's worth it. But let's suppose I decided that I must use a low-level language (perhaps because worst-case behaviour is really important or I may want to run in a low-memory device or the DB was a "pure overhead" software that aims to minimise memory consumption that's needed for a co-located resource heavy application).
Then, if this were an actual product that people would depend on for a long time, the obvious choice would be C++, because of its maturity and good prospects. But say this is hypothetical or something more adventurous that allows for more risk, then I would say it entirely depends on the aesthetic preferences of the team, as neither language has some clear intrinsic advantage over the other. Personally, I would prefer Zig, because it more closely aligns with my subjective aesthetic preferences, but others may like different things and prefer Rust. It's just a matter of taste.
> the DB was a "pure overhead" software that aims to minimise memory consumption that's needed for a co-located resource heavy application)
Thanks pron for the reply. This describes it the best. To minimize resource consumption in a "pure overhead" software. It's currently written in Java and we are planning a rewrite in a systems PL.
Since most system-level APIs provide a C interface and Zig's C interoperability is top notch, you don't require a marshalling/interop layer.
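For example (a sketch; @cImport is the mechanism in current releases, you link libc with -lc, and there are plans to eventually move C translation into the build system):

```zig
const std = @import("std");

// The C header is translated at compile time; no hand-written binding layer.
const c = @cImport({
    @cInclude("math.h");
});

pub fn main() void {
    // Call the C function directly, as if it were a Zig function.
    const x = c.cos(0.0);
    std.debug.print("cos(0) = {d}\n", .{x});
}
```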
That's true of Rust as well, so it's not really an advantage unique to Zig.
Is it? Most of the time I read you have to create a wrapper, like here: https://docs.rust-embedded.org/book/interoperability/c-with-...
Please provide some documentation on how to use C libraries without such an interop layer in Rust. And while bindgen does most of the work, it can be pretty tedious to get running.
Tried to use Swift outside Xcode and it's a pain. Especially when writing CLI apps, the Swift compiler choked and said there was an error, mentioning no line number. Good luck with that. Also the Swift tooling outside Xcode is miserable. Even Rust tooling is better than that, and Swift has a multi-billion-dollar company behind it. What a shame…
Rust has a really high learning curve
Perhaps worse is the fatigue curve that some people claim sets in after a few years of using it.
Do you have links on people’s experience with the fatigue curve?
I’ve heard of “hard to learn, easy to forget” but I haven’t seen people document it for career reasons.
I have no hard data. I have seen comments to this effect in HN. Somewhat famously Primagen threw in the towel on it. I would love to hear from others with 4+ years of Rust experience though.
I think that's mostly async fatigue.
Avoid it and you're good, you just have to accept that a big part of the language is not worth its weight. I guess at that point a lot of people get disillusioned and abandon it whole, when in reality you can just choose to ignore that part of the language.
(I'm rewriting my codex-rs fork to remove all traces of async as we speak.)
That does seem like a lot to give up however if doing any amount of I/O. No?
If "any amount" means millions of concurrent connections maybe. But in reality we've build thread based concurrency, event loops and state machines for decades before automatic state machine creation from async code came along.
Async doesn't have access to anything that sync rust doesn't have access to, it just provides syntactic sugar for an opinionated way of working with it.
On the contrary, async is very incompatible with mmap for example, because a page fault doesn't just pause the current task; it blocks the entire executor thread and everything scheduled on it.
I'd even argue that once you hit that scale you want more control than async offers, so it's only good at that middle ground where you have a lot of stuff, but you also don't really care enough to architect the thing properly.
I guess lack of job positions could be one kind of fatigue curve.
None of those memory-safe languages allows you to work without a heap. And I don't mean "avoid allocations in that little critical loop". I mean "no dynamic allocation, never ever". A lot of tasks don't actually require dynamic allocation, for some it's highly undesirable (e.g. embedded with limited memory and long uptimes), and for some it's not even an option (like when you are writing an allocator). Rust has some support for zero-runtime, but a lot of its features are either useless or outright in the way when you are not using a heap. Swift and others don't even bother.
Unpopular opinion: safety is a red herring. A language shouldn't prevent the programmer from doing the unsafe thing; rather, it should provide an ergonomic way to do things in a safe way. If there is no such way, that's on the language designer, not the programmer. Rust being the worst offender: there is still no way to do parent links, other than ECS/"data oriented", which, while it has its advantages, is both quite unergonomic and provides memory safety by flaying it, stuffing the skin with cow dung and throwing the rest out of the window.
>strong dislike towards Go.
Go unsolves problems without unlocking any new possibilities. Zig unsolves problems because it aims at niches where the "solution" doesn't work.
> Rust has some support for zero-runtime, but a lot of it's features is either useless of outright in the way when you are not using a heap.
Could you give some examples?
Genuinely curious because I don't know: when you group Swift with Rust here, do you mean in terms of memory safety guarantees or in the sense of being used for systems-level projects? I've always thought of Swift as having runtime safety (via ARC), not the same compile-time model as Rust, and mostly confined to Apple platforms.
I'm surprised to see them mentioned alongside each other, but I may very well be missing something basic.
Swift is mostly runtime-enforced now but there are a lot of cultural affinities (for lack of a better term) between Swift and Rust and there’s a proposal to add ownership https://github.com/swiftlang/swift/blob/main/docs/OwnershipM...
As a developer interested in Zig it is nice to see books being released, but I'd be a bit resistant to buy anything before Zig reaches 1.0, or at least before they settle on a stable API. Both the builder and the std keep changing; 0.15 alone is a huge breakage. Does anyone know how those changes would affect this book?
People are building real companies with Zig (e.g. TigerBeetle) and important projects (e.g. Ghostty) without the 1.0 backward compatibility guarantee. Zig must be doing something right. Maybe this also keeps opinionated 9-5er, enterprise-type devs out of it as well, which might be a good thing :).
Are there other notable commercial companies/projects besides TigerBeetle relying on Zig? According to public information, TigerBeetle has about eight employees and a single customer, doesn't it?
I'm not sure how many customers TigerBeetle has, but I really hope they are successful. It would be great to see such a quality-focused engineering org make it. They are basically doing what many devs really want to do - make the highest quality and fastest stuff possible - instead of banging out random features that usually no one actually cares about. I don't have a use case for their tech right now, but the moment I have a need for anything in the vicinity I'll be checking it out.
Thanks! Joran, CEO from TigerBeetle here!
We’re doing pretty well as a business already, contrary to Rochus’ comment, which is not accurate.
Our team is 16, we have $30M in investment, and already some of the largest brokerages, exchanges, and wealth managements, in their respective jurisdictions are customers of TigerBeetle.
We have a saying:
“Good engineering is good business, and good business is good engineering.”
At least in TigerBeetle’s experience, the saying is proving true. We really appreciate your support and kind words!
Thanks for the clarifications.
May I ask what made you use Zig instead of e.g. Rust or C++ (or even Ada/SPARK)? I assume Go would be too non-deterministic for real-time applications?
Who is that single customer? I'd have thought by now there would be more.
And there are.
cf. https://news.ycombinator.com/item?id=45478804
Bun.sh uses zig for large portions of native code.
Thanks for the hint. According to public sources, Bun's runtime infrastructure and bindings are written in Zig, but it uses Apple's JavaScriptCore engine written in C++ (i.e. Zig is essentially used in a thin wrapper layer around a large C++ engine). Bun itself apparently struggles with stability and production readiness. Oven (Bun's company) has around 2-10 employees according to LinkedIn.
Been using Bun for years; it's been as stable as the alternatives for a long time.
lightpanda and bun
Lightpanda looks interesting, thanks for the hint. According to public sources, behind the development is a company of 2-10 employees (like Bun). The engine uses parts of the Netsurf browser (written in C) and the V8 engine (written in C++), and - as far as I can tell - is in an early development stage.
https://news.ycombinator.com/item?id=42817439
> The main idea is to avoid any graphical rendering and just work with data manipulation
Which makes the engine much smaller and less complex than a "normal" browser.
I guess almost every single language out there has at least one "real" company using it, so yeah, Zig is still mostly hype and blog posts praising its awesomeness.
I disagree, since there's also Bun to show that real, popular tools are being created.
https://bun.com
Is this really the first Zig book since development started in 2016? Maybe the language and library are just too volatile for a book.
Or it evolves smoothly at another pace.
Author links to 0.15.1 Zig docs in several places, at least for now.
Outstanding issue on Zig side for 1.0 release: https://github.com/ziglang/zig/issues/16270
As a Zig adopter I was rooting for this because even though there is the official documentation and some great community content, it's nice to just kick back and read a well written book. I'm also very happy with other Manning titles; they have built a high level of trust with me.
To answer your question of whether it is too early: my expectation is that the core language, being quite small by design, will not change that much before 1.0. Some things will, especially async support.
What I think we will see is something like a comprehensive book on C++14. The language has not changed that much between then and now, but there will be new sections to add and some sections to rework with changes to the interface. The book would still be useful today.
Not a perfect analogy because C++ maintains backwards compatibility but I think it is close.
With how it's going, I feel Zig 1.0 won't be a thing until my retirement in 37 years
I’m willing to bet $5 it happens in 4 years or fewer.
That's pretty low confidence if we measure confidence in dollars you're willing to risk
It’s 100% of my annual betting budget though!
Do you have a history we can look at to see how good you are at predicting this for programming languages? Like say, some 2020 predictions you had for languages which would or would not ship 1.0 by 2025 ?
I made a set of predictions for Rust in 2022, nearly all of which turned out to be correct. And I was publicly confident Go and Rust would be massive when they reached 1.0. I was right on both counts.
But I will also admit I don’t follow developments in zig as closely as Rust. I’ve never written any Zig. And in any case, past performance isn’t indicative of future performance.
I could be wrong about this prediction, but I don’t think I will be. From what I’ve seen Andy Kelley is a perfectionist who could work on point releases forever. But his biggest users (tigerbeetle and bun especially) will only be taken seriously once Zig is 1.0. They’ll nudge him towards 1.0. They can wait a few years, but not forever. That’s why I guessed 4 years.
> But his biggest users (tigerbeetle and bun especially) will only be taken seriously once Zig is 1.0.
TB is only 5 years old but already migrating some of the largest brokerages, exchanges and wealth managements in their respective jurisdictions.
Zig’s quality for us here holds up under some pretty extreme fuzzing (a fleet of 1000 dedicated CPU cores), Deterministic Simulation Testing and Jepsen auditing (TB did 4x the typical audit engagement duration), and is orthogonal to 1.0 backwards compatibility.
Zig version upgrades for our team are no big deal, compared to the difficulty of the consensus and local storage engine challenges we work on, and we vendor most of our std lib usage in stdx.
> They’ll nudge him towards 1.0.
On the contrary, we want Andrew to take his time and get it right on the big decisions, because the half life of these projects can be decades.
We’re in no rush. For example, TigerBeetle is designed to power the next 30 years of transaction processing and Zig’s trajectory here is what’s important.
That said, Zig and Zig’s toolchain today, is already better, at least for our purposes, than anything else we considered using.
I stand corrected. I fear I may lose my $5 now.
If you don’t mind my asking, did TB add support for transaction metadata? I’ve seen this anti-pattern of map<string, string> associated with each transaction. Far from ideal, but useful. Last I checked TB didn’t support that because it would need dynamic memory allocation. Does it support it now or will it in future?
Haha! You could double down and up the stakes.
It’s not that it would need dynamic memory allocation (it could be done with static), but rather it’s not essential to performance—you could use any KV or OLGP for additional “user data”, it’s not the hard contended part.
To keep consistency, the principle is:
- write your data dependencies if any, then “write to TB last as your system of record”,
- “read from TB first”, and if it’s there your data dependencies will be too.
Wouldn’t you think AI advances in 4 years would make this a safe bet?
No, unless you're riding the hype train so hard that you are excited to go to colonize Mars thanks to AI advances.
Excellent way to stay busy producing revised editions over the next few years.
I really enjoy writing Zig and I think it's going to be an important language in the future. I do not enjoy porting my code between versions of the language. But early adopters are important for exploring the problem space, and I would have loved to find a canonical source (aside from the docs, which are mostly nice) for learning the language when I did. A text that evolves with the language has a better chance of becoming that canonical onboarding source.
I think this is great; it gives those wanting a good formal foundation a guide to getting organized and making something happen.
With time this is also going to be great for the author with new iterations of the book, but getting in early like this can set the author and the language up for success long term.
I’m very excited for Zig personally, but calling it “ultra reliable” feels very premature.
The language isn’t even stable, which is pretty much the opposite of something you can rely on.
We’ll know in many years if it was something worth relying on.
What is the point of this book this early? Zig is in too much flux. Language is 0.x for a reason.
Just to gauge interest?
Well the MEAP just started, 3 chapters are complete and the rest will follow probably next year.
IMO it’s a bet: Zig stays stable enough and this will be _the_ Zig book for a while. Should the bet not pay off it still cements the author as an authority on the language and gets them a foot in the door.
Looks like a win-win for the author? Why would the publisher take the bet?
Manning doesn't pay that much to first time authors, and it looks like it's the first book for the author, Garrison Hinson-Hasty.
My guess is it's about $2k upfront and the author owes it back if they don't deliver.
Teiva Harsanyi did a good writeup recently about working with Manning as a first-time author. He got $2k upfront and $2k after delivering the first 1/3rd.[0]
[0] https://www.thecoder.cafe/p/100-go-mistakes
When I wrote Elm in Action for Manning, I talked with them explicitly about the language being pre-1.0 and what would happen if there were breaking changes. The short answer was that it was something they dealt with all the time; if there were small changes, we could issue errata online, and if there were sufficiently large changes, we could do a second edition.
I did get an advance but I don't remember a clause about having to return any of it if I didn't earn it out. (I did earn it out, but I don't remember any expectation that I'd have to give them money back if I hadn't.) Also my memory was that it was more than $2k but it was about 10 years ago so I might be misremembering!
Manning does this all the time, I presume the cost-to-print is so low it doesn't hurt.
When I wrote Nim in Action[1] for Manning, it was prior to 1.0 as well. It was definitely a bit awkward, but breaking changes to the stuff covered in the book were relatively minor.
1 - https://book.picheta.me/
I bought a copy through my subscription.
Having read through a few of their books, Manning has a pretty good record of producing a lot of good content. No Starch and PragProg are arguably (slightly) better in terms of writing quality, but they don't publish nearly as many books.
Packt certainly has more books, but the quality is absolute garbage.
Whenever a new language comes out, I usually take a weekend to at least dabble in it to see if it's worth getting into. Most of the languages end up in my "cool story" bucket, but it's also how I found Elixir, and I've been working in it full-time for the last 7 years.
In the case of Manning, they presell the PDF. That costs them nothing while expanding their catalog in a way that doesn't feel like a ripoff for the subscribers. I'm not expecting a MEAP title to have the same level of polish as a completed book. Rather, I appreciate having close to bleeding-edge info that's been somewhat curated for my consumption.
The language doesn't really change that much; it is mainly the stdlib and build APIs that keep changing.
Garrison Hinson-Hasty writing a book about a programming language that is still in flux seems a little bit… hasty. Sorry, couldn't resist…
Selling a system programming book to the thousands of people interested in system programming with zig.
Not a comment on the book but I hope a slightly off topic thread is OK...
As a systems programmer, what is the selling point of Zig? For me, memory safety is the number 1, 2, 3, 4 and 5 problem with C. A new language that doesn't solve those problems seems very strange to me. And as a counterpoint, I will tolerate massive number of issues with Rust because it gives us memory safety.
I work in security so that may give me a bias. But maybe instead of "bias" it's more like awareness. If you don't deal with it day in, day out, it seems impossible to really internalise how completely fucking broken something like the Linux kernel is. Continuing to write e.g. driver code in an unsafe language is just not acceptable for our civilisation.
But, also maybe you don't need full complete memory safety, if a language has well designed abstractions maybe it can just be "safe enough in practice". I've worked on C++ code that feels that way. Is this the deal with Zig?
1. Lack of memory safety is a big problem particularly because it's a common cause of dangerous vulnerabilities, but spatial safety is more important than temporal safety for that reason, and Zig does offer spatial safety.
2. Other important issues of concern when it comes to security/correctness are language complexity and compilation speed. A complicated language with lots of implicitness can obscure bugs and make reviews slower. Slow compilation reduces the iteration speed and harms testing. Zig focuses on these two. The language is simple, with no implicitness - overloading of any kind is not allowed and neither are any kind of hidden calls. Even if security were the only concern, it's not clear at all how much complexity and compilation speed should be sacrificed for temporal safety. Remember that it's easy to classify bugs by technical causes, but more diffuse aspects, like testing and review, are also very important, and Zig tries to find a good balance.
3. Nevertheless, the language is still very expressive [1]. In any language, you want the algorithm to be clear, with no extraneous details added and no important details hidden for that domain. I think Zig gets that about right for low-level programming (and C++ doesn't).
For me, the #1 problem with C is lack of safety and the #2 problem is lack of expressivity. With C++, the main problem for me is language complexity and implicitness, with slow compilation and lack of safety tied for number 2. So Zig is as expressive as C++, but not only safer: it's also much simpler, and it compiles faster (and improving compilation speed further is an important goal).
[1]: I say that two languages are equally expressive if, over all algorithms, idiomatic implementations in both languages (that are roughly equally clear) differ in length by no more than a linear relation with a small constant.
> So Zig is as expressive as C++, but not only safer: it's also much simpler, and it compiles faster
Tbh syntax-wise Zig feels more cryptic[1] at first than C++.
[1] e.g. `extern "user32" fn MessageBoxA(?win.HWND, [*:0]const u8, [*:0]const u8, u32) callconv(win.WINAPI) i32;` from https://ziglang.org/learn/samples/
To be clear the equivalent C++ code is:
It's not exactly a stellar improvement.
Lol, no this is the equivalent
I think that's just a matter of syntax habits, presumably because you're already familiar with C++ syntax. The syntax in your example is especially "cryptic" simply because it's an FFI signature (of a function that's not written in Zig and doesn't use the normal Zig data representations).
I guess this must vary by usecase, but in the Linux kernel work I do, expressiveness is just not an issue. C is a dumb language, it's fine. Coding is just such a small part of the engineering task. Schlepping to write nontrivial stuff in C can be a drag but it's just such a minor thing compared to the task of getting OS software designed correctly that I am not really excited about trying to improve on that axis.
Whereas the memory safety issue is totally fundamental and totally terrifying. There's no other way to solve it than with a new language. The cost of changing languages is staggering but we have no choice. Yet... Paying that cost and not eking out the maximum safety benefit that's practical... Seems a bit dodgy to me.
> There's no other way to solve it than with a new language.
That's not true, though. If memory safety is the only thing that you need and C doesn't have, there are products that offer memory-safety proofs for C (e.g. https://www.trust-in-soft.com). You do need to add some lifetime annotations and may want to change your code here and there, but it's overall much cheaper than a rewrite in a new language. I believe such solutions are more popular, too; few companies - even and especially those that care primarily about correctness - are crazy enough to justify a rewrite of an existing product just for memory safety.
> Paying that cost and not eking out the maximum safety benefit that's practical... Seems a bit dodgy to me.
The cost is not the same, though (and if you only consider rewrites, as I said, there are better options). It sounds like you're saying that any cost is worth any added safety. I don't think that's true, but if it were, then something like ATS is probably what you're after (it isn't, though, because it's practically nobody's choice). Anything short of that is some compromise between cost and safety. Rust's compromise is just different from Zig's. They're both significantly safer than C and a far way off from ATS. Everyone seems to agree that the sweet spot is somewhere on that spectrum - not C, not ATS - but there's no agreement on where.
Can you say more about spatial security vs. temporal security and how Zig does them?
Spatial memory safety means that a language (at least in its subset that's designated "safe") doesn't allow you to manufacture pointers into memory that may contain data of a different type than what's expected by the pointer (we'll call such pointers "invalid"). The classic example of spatial memory safety is guaranteeing that arrays are never accessed out of bounds (hence "spatial", as in pointers are safely constrained in the address space). Zig guarantees (except when using delineated "unsafe" code) such spatial safety.
Temporal memory safety is the guarantee that you never access pointers that have been valid at some point in time after they've become invalid due to reallocation of memory; we call such pointers "dangling" (hence "temporal", as in pointers are safely constrained in time). The classic example of this is use-after-free. Zig does not guarantee temporal safety, and you can accidentally have a dangling pointer (i.e. access a one-time valid pointer after it's become invalid).
Both kinds of invalid pointer are especially dangerous because, in languages where they can occur, they've been a very common source of exploitable security vulnerabilities. However, violating spatial memory bounds is generally the more dangerous of the two, as the result is more easily exploited by attackers in the case of a vulnerability, because it's more predictable.
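A small sketch of the distinction in Zig (the unsafe lines are left commented out; the behavior described in the comments assumes a safe build mode):

```zig
const std = @import("std");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    const buf = try allocator.alloc(u8, 4);
    defer allocator.free(buf);
    buf[0] = 1;

    // Spatial safety: slices carry their length, so in Debug/ReleaseSafe
    // builds the following is caught at runtime with an "index out of
    // bounds" panic rather than silently corrupting memory:
    //     buf[4] = 1;

    // Temporal safety: nothing in the language tracks when memory is freed,
    // so a use-after-free like the following compiles and is undefined
    // behavior (though the debug/testing allocators can catch many such
    // cases at runtime):
    //     allocator.free(buf);
    //     buf[0] = 2;

    std.debug.print("{}\n", .{buf[0]});
}
```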
1. You get spatial memory safety, which IIRC is a more common source of security problems than temporal.
2. Honestly, for high-performance applications you might reach for an ECS-type system anyway (I think DOM engines do this), at which point you would be getting around the borrow checker anyway.
> you get spatial memory safety which iirc is more common for security problems than temporal.
No, at least in the Linux kernel use-after-free is the biggest memory safety issue by a long way.
Maybe this isn't true in some other system software? Like if you are dealing with parsing. But, you don't really need memory safety for that usecase, fuzzing is unreasonably effective.
(Obviously for greenfield dev, safe languages are the way to go. But setting up a dank fuzzing pipeline is easier than migrating to a new language).
Maybe in the future, if the language has very good AI support, the security guarantees of the language won't be as important, as it (AI) will find potential bugs well enough. This may be the case with Zig, as the language is simple and consistent, and the lack of macros will make it easier for LLMs to understand the code.
AI is not remotely capable enough to be trusted with that. Perhaps in the future it will be, but I'm not betting on it with the lack of improvement we have seen thus far.
You touched on the idea in your last paragraph. Comparing Zig to C using memory safety as a binary threshold instead of a continuous scale is apples to oranges.
For one, Rust isn't even at the far right of the memory safety scale, so any argument against Zig due to some epsilon of safety has to reckon with the epsilon Rust is missing as well. That scale exists, and in choosing a language we're choosing (among other things) where on that scale we want to be. Maybe Rust is "good enough", and maybe not.
For two, there's a lot more to software design than memory safety _even when your goals revolve around the safety and stability of the end product_.
So long as you get the basics right (like how Zig uses slices by default and has defer/errdefer statements), you won't have an RCE from a memory safety bug, or at least not one that you wouldn't likely see in Rust anyway (e.g., suppose you're writing a new data structure in both languages; suppose that it involves back-references or something else where the common pattern in Rust would involve creating backing memory and passing indices around, combined with an unsafe block on the actual cell access to satisfy the borrow checker; the fact that you had to do something in an unsafe block opens you up to safety issues like index wraparound causing OOB reads).
If memory safety is "good enough" from an RCE perspective, what are we buying with the rest of Rust's memory safety? Why is it so important? For everything else, you're just preventing normal application bugs -- leaking user data, incorrect computations, random crashes, etc. The importance of normal application bugs varies from project to project, but in the context of systems programming of any kind I'll argue that these are absolutely as critical as RCEs and the other "big" things you're trying to prevent via memory safety. The whole reason you try to prevent an RCE is to ensure that you have access to your files/data/tools and that an attacker doesn't, but if logic bugs corrupt data, crash your tools, and leak PII then you're still fucked. Memory safety didn't save you.
That shift in perspective is important. If our goal is preventing ordinary application bugs on top of memory-safety bugs it becomes blatantly obvious that you'd be willing to trade a little safety for some bug prevention.
Does Zig actually help reduce bugs though? IME, yes. The big thing it has going for it is that it's a small, cohesive language that gives you the power you need to do the things you're trying to do. Being small makes it possible to create features that compose well and are correct by default, and having the power to do what you're trying to do means you don't have to create convoluted (expensive, bug-prone) workarounds. Some examples:
1. `let` rebindings in Rust are a great feature, and I like them a lot. Better than 90% of the time I use them correctly. The other 10% I'm instead trying to create a new variable and accidentally shadowing an existing one. That's often not the end of the world, but if there exists any code after my `let` rebinding expecting the old value then it's totally incorrect (assuming compatible types so that you don't have a compilation failure). Zig yells loudly about the error (sketched below this comment).
2. The error-handling system in Zig is easy to get right because of largely compatible return types. By contrast, I see an awful lot of `unwrap` in Rust code in places that really shouldn't have it, and in codebases that try to use `?` or other more-likely-to-be-correct strategies the largely incompatible return types force most people into slapping `anyhow` around everything and calling it a day.
3. Going back to error-handling, Rust allows intentional discards of return values, but if you start by discarding the return value of a non-error function and the signature is later updated to potentially return errors Rust still lets you discard the value. Zig forces you to do something explicitly acknowledging that there was an error. 100% of the time the compiler has yelled at me for a potential logic bug it was correct to do so.
It really is just a bunch of little things, but those little things add up and make for a pleasant language with shockingly few bugs. Maybe my team is just even more exceptional than I already think or something and I'm over-indexing on the language, but in the last couple years I haven't seen a single memory safety issue, or much in the way of other bugs in the Zig part of the company.
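A minimal sketch of point 1 (shadowing) above; parsePort is a made-up example function, and the exact compiler wording varies by version:

```zig
const std = @import("std");

// parsePort is a made-up example; the point is the shadowing rule.
fn parsePort(text: []const u8) !u16 {
    const port = try std.fmt.parseInt(u16, text, 10);

    // In Rust, `let port = ...` here would quietly create a new binding that
    // shadows the one above. In Zig, reusing the name is a compile error, so
    // the mistake is caught immediately:
    //     const port = port + 1; // compile error: redeclaring/shadowing `port` is not allowed

    return port;
}

pub fn main() !void {
    std.debug.print("{}\n", .{try parsePort("8080")});
}
```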
1. I guess by "let rebinding" you mean shadowing. Clippy has lints for shadowing which are default off, you might find that some patterns of shadowing which are legal in Rust are too often problematic in your software and so you should tell Clippy to warn you about them or even deny them by default.
For example you might decide to warn for clippy::shadow_unrelated which says if I have a variable named "goose" and then later in the function I just make another variable that's not obviously derived from the first variable but it is called "goose" anyway, warn me I did that.
2. I don't see any advantage of "largely compatible" error handling over anyhow, maybe you could be more explicit about what you want here or give an example.
3. What you're claiming for Zig here is a semantic check. This could exist but be fallible (it can't be infallible because Rice's Theorem), but I suspect the reality is that it's not a semantic check and you've misattributed your experience since I don't see any such check in Zig. Maybe an example would clarify.
I really didn't mean to make this about Zig vs Rust. That was a mistake. My real point is that Zig is also a great language, even when you care about safety.
That said, the things you asked about:
1. I suppose you might get away with linting all of shadow_reuse, shadow_same, and shadow_unrelated. Would that sufficiently disable the feature?
2. The biggest problems with overusing `anyhow` to circumvent the language's error system are performance and type safety. Perf is probably fine enough for most applications (though, we _are_ talking about systems programming, and this is yet another way in which allocation-free Rust is hard to actually do), but not being able to match on the returned error makes appropriately handling errors clunkier than it should be, leading to devs instead bubbling things up the call stack even more than normal or "handling" the error by ignoring any context and using some cudgel like killing connections regardless of what the underlying error was.
3. Rice's Theorem is only tangentially related. Your goal, and one that Rust also strives for, is to make it so that changes to the type signature of a function require compatible changes at call sites, which you can often handle syntactically. Rust allows `let _ = foo()` to also silence errors, and where Zig normally allows `_ = foo()` to silence unhandled return values it additionally requires `try`, `catch`, or `return` (or similar control flow). You have options like `_ = foo() catch {}` or `_ = foo() catch @panic("WTF")` if you really do want to silence the error, but you'll never be caught off-guard by an error being added to a function's return type.
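A tiny sketch of that last point (discarding errors); save is a made-up function, and the point is just which forms compile:

```zig
const std = @import("std");

// save() is a made-up function that later grew an error in its return type.
fn save() !void {
    return error.DiskFull;
}

pub fn main() void {
    // Neither `save();` nor `_ = save();` compiles: the error union has to be
    // acknowledged with `try`, `catch`, or an `if`.

    save() catch {}; // deliberately swallow the failure
    // save() catch @panic("could not save"); // or make it fatal
    // try save(); // or propagate it (requires an error-union return type)

    std.debug.print("done\n", .{});
}
```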
> So long as you get the basics right (like how Zig uses slices by default and has defer/errdefer statements), you won't have an RCE from a memory safety bug
This is the exact same argument C/C++ uses. Just don't do anything wrong, and nothing wrong will happen.
Not in the slightest. Here it's the language getting the basics right rather than the user.
The far right of the security scale is C (+ Isabelle?) again (seL4).