From what I can see in the codegen, defer is not implemented "properly": the deferred statements are only executed when the block exits normally; leaving the block via "return", "break", "continue" (including their labelled variants! those interact subtly with outer defers), or "goto" skips them entirely. Which, arguably, should not happen:
    var f = fopen("file.txt", "r");
    defer fclose(f);
    if fread(&ch, 1, 1, f) <= 0 { return -1; }
    return 0;
would not close the file if it was empty. In fact, I am not sure how it works even for a normal "return 0": it looks like the deferred statements are emitted after the "return", textually, so they only work properly in void-returning functions and internal blocks.
Did you manage to compile this example?
Yes, actually:
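For reference, a minimal sketch in plain C of how a backend can lower defer so that early exits still run the cleanup, by routing every return through a label (this is the usual hand-written pattern, not a claim about Zen-C's actual codegen):

    #include <stdio.h>

    /* Hand-lowered version of the snippet above: every exit path goes
       through a cleanup label, so the deferred fclose() always runs. */
    int read_first_byte(void) {
        char ch;
        int ret = 0;
        FILE *f = fopen("file.txt", "r");
        if (!f) return -1;                /* nothing deferred yet, plain return is fine */

        if (fread(&ch, 1, 1, f) <= 0) {   /* the early "return -1" becomes a goto */
            ret = -1;
            goto cleanup;
        }

    cleanup:
        fclose(f);                        /* the deferred statement */
        return ret;
    }

    int main(void) {
        return read_first_byte() == 0 ? 0 : 1;
    }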
> Mutability
> By default, variables are mutable. You can enable Immutable by Default mode using a directive.
> //> immutable-by-default
> var x = 10;
> // x = 20; // Error: x is immutable
> var mut y = 10;
> y = 20; // OK
Wait, but this means that if I’m reading somebody’s code, I won’t know if variables are mutable or not unless I read the whole file looking for such directive. Imagine if someone even defined custom directives, that doesn’t make it readable.
Given an option that is configurable, why would the default setting be the one that increases probability of errors?
For some niches the answer is "because the convenience is worth it" (e.g. game jams). But I personally think the error prone option should be opt in for such cases.
Or to be blunt: correctness should not be opt-in. It should be opt-out.
I have considered such a flag for my future language, which I named #explode-randomly-at-runtime ;)
> Or to be blunt: correctness should not be opt-in. It should be opt-out.
One can perfectly well write correct programs using mutable variables. It's not a security feature, it's a design decision.
That being said, I agree with you that the author should decide if Zen-C should be either mutable or immutable by default, with special syntax for the other case. As it is now, it's confusing when reading code.
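For what it's worth, plain C already expresses this per declaration rather than per file, which is roughly the readability property being argued for here; a trivial sketch:

    #include <stdio.h>

    int main(void) {
        const int x = 10;   /* immutable: `x = 20;` would be a compile error */
        int y = 10;         /* mutable */
        y = 20;             /* OK */
        printf("%d %d\n", x, y);
        return 0;
    }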
But why make it a global metaswitcher instead of inferring mutability from a qualifier on the initial assignment?
Example:
Or, in more esoglyphomaniac fashion:
> I have considered such a flag for my future language, which I named #explode-randomly-at-runtime ;)
A classic strategy!
https://p-nand-q.com/programming/languages/java2k/
> Given an option that is configurable, why would the default setting be the one that increases probability of errors?
They're objecting to the "given", though. They didn't comment either way on what the default should be.
Why should it be configurable? Who benefits from that? If it's to make it so people don't have to type "var mut" then replace that with something shorter!
(Also neither one is more 'correct')
Well, arguably if it's immutable, then it's not a variable so "var" doesn't make sense. The corollary is if it's a variable it should be mutable so "var mut" is a tautology.
I think "const x = something();" would be logical but they've used const already for compile-time constants. There's probably a sensible way of overloading that use though, depending if the expression would be constant at compile-time or not, but I've not considered it enough to think about edge cases (as it basically reduces to the halting problem unless any functions called are also explicitly marked up as compile time or not).
Trying to pick a good name for the keyword is valid but it's bikeshedding. Either way the keywords should be consistent and a config option is more trouble than it's worth.
And "variables" in math are almost always immutable within a single invocation. It's not a particularly bad word to use. But there's plenty of options. const/var. let/var. let/mut. var/mut I guess. let/set from a sibling comment.
Yes, sadly, the one-billion-dollar mistake… I don't know why the author took this path. It's so confusing, and I immediately raised an issue: https://github.com/z-libs/Zen-C/issues/19 And of course, confusion: https://github.com/z-libs/Zen-C/pull/13#issuecomment-3739722... Life is already hard, but apparently not for developers, it seems.
It's not ideal but it seems like something an LSP could tell you on a hover event. I didn't see an LSP (I didn't look that hard either) but presumably that's within the scope of their mission statement to deliver modern language ergonomics. (But I agree with sibling comments that this should be a keyword. Another decent alternative would be that it's only global in scope.)
Other languages also have non-local ways of influencing compiler behavior, for example attributes in Rust (standard) or compiler pragmas in C (non-standard).
When reading working code, it doesn't matter whether the language mode allows variable reassignment. It only matters when you want to change it. And even then, the compiler will yell at you when you do the wrong thing. Testing it out is probably much faster than searching the codebase for a directive. It doesn't seem like a big deal to me.
It would be interesting to hear the motivation for it.
Yeah, immutability should probably use a `let` keyword and compiler analysis should enforce value semantics on those declarations.
Agreed, using `var` keyword for something that is non-var-ying (aka immutable) is not very intuitive.
Mutability is distinct from variability. In Javascript only because it's a pretty widely known syntax:
    function f(x) {
      const y = x + 1;
      return y;
    }
y is an immutable variable. In f(3), y is 4, and in f(7), y is 8.
I've only glanced at this Zen-C thing but I presume it's the same story.
"immutable variable" is an oxymoron. Just because Javascript did it does not mean every new language has to do it the same way.
There are two distinct constructs that are referred to using the name variable in computer science:
1) A ‘variable’ is an identifier which is bound to a fixed value by a definition;
2) a ‘variable’ is a memory location, or a higher level approximation abstracting over memory locations, which is set to and may be changed to a value by an assignment;
Both of the above are acceptable uses of the word. I am of the mindset that the non-independent existence of these two meanings, in both languages and in discourse, is a large and fundamental problem.
I take the position that, inspired by mathematics, a variable should mean #1. Thereby making variables immutably bound to a fixed value. Meaning #2 should have some other name and require explicit use thereof.
From the PLT and Maths background, a mutable variable is somewhat oxymoronic. So, I agree let’s not copy JavaScript, but let’s also not be dismissive of the usage of terminology that has long standing meanings (even when the varied meanings of a single term are quite opposite).
I think you are confused by terminology here and not by behavior; "immutable variable" is normal terminology in all languages and could be said to be distinct from constants.
In Rust if you define with "let x = 1;" it's an immutable variable, and same with Kotlin "val x = 1;"
Lore and custom have made "immutable variable" a kind of frequent idiomatic parlance, but it's still an oxymoron in the generally accepted, isolated meanings of the two words.
Neither "let" nor "val[ue]" implies constancy or vacillation in themselves without further context.
Words only have the meaning we give them, and "variable" already has this meaning from mathematics in the sense of x+1=2, where x is a variable.
Euler used this terminology; it's not newfangled corruption or anything. I'm not sure it makes much sense to argue that new languages should use different terminology based on a colloquial/nontechnical interpretation of the word.
I get your point on how the meanings of words evolve.
Also, it's fine for anyone to name things as they come to mind — as long as the other side gets what is meant, I guess.
On the other hand, it doesn't hurt anyone much to call an oxymoron an oxymoron, or to exchange in a vacuous manner about terminology and its evolution.
On the specific example you give, I'm not an expert, but it seems dubious to me. In x+1=2, terms like x are called unknowns. Prove me wrong, but I would rather bet that Euler used "unknown" (quantitas incognita) unless he was specifically discussing variable quantities (quantitas variabilis) to describe, well, quantities that change. He probably also used the French and German equivalents, but if Euler spoke any English, that's not reflected in his publications.
"Damit wird insbesondere zu der interessanten Aufgabe, eine quadratische Gleichung beliebig vieler Variabeln mit algebraischen Zahlencoeffizienten in solchen ganzen oder gebrochenen Zahlen zu lösen, die in dem durch die Coefficienten bestimmten algebraischen Rationalitätsbereiche gelegen sind." - Hilbert, 1900
The use of "variable" to denote an "unknown" is a very old practice that predates computers and programming languages.
Yes, sure, I didn't mean otherwise; I just wanted to express doubts about Euler already doing so. Hilbert is already a century later.
Haskell, then.
Same deal. In (f 3), y = 4, and in (f 7), y = 8. y varies but cannot mutate. Should be a true enough Scotsman.
“Immutable” and “variable” generally refer to two different aspects of a variable's lifetime, and they're compatible with each other.
In a function f(x), x is a variable because each time f is invoked, a different value can be provided for x. But that variable can be immutable within the body of the function. That’s what’s usually being referred to by “immutable variable”.
This terminology is used across many different languages, and has nothing to do with Javascript specifically. For example, it’s common to describe pure functional languages by saying something like “all variables are immutable” (https://wiki.haskell.org/A_brief_introduction_to_Haskell).
Probably variable is initially coming out of an ellipsis for something like "(possibly) variable* value stored in some dedicated memory location". Probably holder, keeper* or warden would make more accurate terms in ordinary parlance. Or, to be very on point and dropping the ordinariness, there is mneme[1] or mnemon[2].
Good luck propagating ideas, as sound as they might be, to a general audience once something is established in some jargon.
[1] https://en.wiktionary.org/wiki/mneme [2] https://www.merriam-webster.com/medical/mnemon
> Probably variable is initially coming out of an ellipsis for something like "(possibly) variable* value stored in some dedicated memory location".
No, the term came directly from mathematics, where it had been firmly established by 1700 by people like Fermat, Newton, and Leibniz.
The confusion was introduced when programming languages decided to allow a variable's value to vary not just when a function was called, but during the evaluation of a function. This then creates the need to distinguish between a variable whose value doesn't change during any single evaluation of a function, and one that does change.
As I mentioned, the terms apply to two different aspects of the variable lifecycle, and that's implicitly understood. Saying it's an "oxymoron" is a version of the etymological fallacy that's ignoring the defined meanings of terms.
It's odd that the async/await syntax _exclusively_ uses threads under the hood. I guess it makes for a straightforward implementation, but in every language I've seen the point of async/await is to use an event loop/cooperative multitasking.
I’d say that the point of async/await is to create a syntax demarcation between functions which may suspend themselves (or be suspended by a supervisory system) and those functions that process through completely and cannot be suspended (particularly by a supervisory system). The means to enable the suspension of computation and allow other computations to proceed following that suspension are implementation details.
So, having an async function run on a separate thread from those functions that are synchronous seems a viable way to achieve the underlying goal of continuous processing in the face of computations that involve waiting for some resource to become available.
I will agree that, given C#'s origination and then JavaScript's popularization of the syntax, it is not a stretch to assume async/await is implemented with an event loop (since both languages use one in their implementations).
Noob question: if it just compiles to threads, is there any need for special syntax in the first place? My understanding was that no language support should be required for blocking on a thread.
One advantage is that it gives you the opportunity to move to a more sophisticated implementation later without breaking backwards compatibility (assuming the abstraction does not leak).
Async/await should do a little more under the hood than what the typical OS threading APIs provide, for example forwarding function parameters and return values automatically instead of making the user write their own boilerplate structs for that.
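To make the "boilerplate structs" point concrete, here is a minimal hand-written sketch of what an async/await-over-threads lowering would have to generate for you (plain pthreads; add_task and add_worker are made-up names, not Zen-C output):

    #include <pthread.h>
    #include <stdio.h>

    /* The struct such a lowering would generate to forward arguments
       and the return value across the thread boundary. */
    struct add_task {
        int a, b;      /* forwarded arguments */
        long result;   /* forwarded return value */
    };

    static void *add_worker(void *arg) {
        struct add_task *t = arg;
        t->result = (long)t->a + t->b;   /* the body of the "async" function */
        return NULL;
    }

    int main(void) {
        struct add_task t = { .a = 2, .b = 3 };
        pthread_t th;

        pthread_create(&th, NULL, add_worker, &t);  /* roughly: start the async call */
        pthread_join(th, NULL);                     /* roughly: await its result */

        printf("%ld\n", t.result);
        return 0;
    }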
Syntax aside, how does this compare to Nim? Nim does similar, I think Crystal does as well? Not entirely sure about Crystal tbh. I guess Nim and Vala, since I believe both transpile to C, so you really get "like C" output from both.
From what I see, Zen-C aims to be "C with super-powers". It still uses C pointers for arrays and strings. It transpiles to a single human-readable C file without symbol mangling. No safety. Not portable (yet?).
Nim is a full, independent modern language that uses C as one of its backends. It has its own runtime, optional GC, Unicode strings, bounds checking, and a huge stdlib. You write high-level Nim code and it spits out optimized C you usually don't touch.
Here’s a little comparison I put together from what I can find in the readme and code:
    Comparison          ZenC             Nim
    written in          C                Self-hosted
    targets             C                C, C++, ObjC, JS, LLVM (via nlvm), native (in progress)
    platforms           POSIX            Linux, Windows, macOS, POSIX, bare metal
    mm strategy         manual/RAII      ARC, ORC (ARC with cycle collector), multiple GCs, manual
    generated code      human-readable   optimized
    mangling            no               yes
    stdlib              bare             extensive, batteries-included
    compile-time code   yes              yes
    macros              comptime?        AST manipulation
    arrays              C arrays         type and size retained at all times
    strings             C strings        have capacity and length, support Unicode
    bounds-checking     no               yes (optional)
Nim (Python-like) and Crystal (Ruby-like) are not C-like languages. Arguably, those languages are targeting a different audience. There are other C-family, C-style-syntax languages that compile directly to C or have it as one of their backends.
man I haven't heard anything about Vala in ages. is it still actively developed/used? how is it?
Yes, it is actively being developed.
Quite easy to make apps with it and GNOME Builder makes it really easy to package it for distribution (creates a proper flatpak environment, no need to make all the boilerplate). It's quite nice to work with, and make stuff happen. Gtk docs and awful deprecation culture (deprecate functions without any real alternative) are still a PITA though.
There's a surprising number of GUI apps built using Vala. If you've used Linux long enough, there's a chance you've used a Vala-based GUI without even knowing it. It's just such a nice language; it's a shame it's not more prevalent, since the GNOME libraries can compile basically anywhere.
Vala is still being developed and used in the GNOME ecosystem. Boo, on the other hand, is pretty dead.
Crystal compiles directly to object code, using LLVM. It does provide the ability to interoperate with C code; as an example, I use this feature to call ncursesw functions from Crystal.
I was also going to mention this reminds me of Vala, which I haven't seen or heard from in 10+ years.
Surprisingly, there's a shocking number of GUI programs for Linux made with Vala; ElementaryOS is built using Vala, and all their custom software uses it. So it's not dead, just a little-known, interesting language. :)
An interesting bit to me is that it compiles to (apparently) readable C; I'm not sure how one would use that to their advantage.
I am not too familiar with C - is the idea that it's easier to incrementally have some parts of your codebase in this language, with other parts being in regular C?
one benefit is that a lot of tooling e.g. for verification etc. is built around C.
another is that it only has a C runtime requirement, so no weird runtime stuff to implement if you'd, say, want to run on bare metal... you could output the C code and compile it for your target.
C2 (http://c2lang.org) similarly compiles to C, but arguably more readable C code from what I can see. The benefits are (1) easy access to pretty much any platform with little extra work (2) significantly less long term work compared to integrating with LLVM or similar (3) if it's readable enough, it might be submitted as "C code" in working environments which mandate C.
i think so. The biggest hurdle with new languages is that you are cut off from the 3rd-party library ecosystem. Being compatible with C 3rd-party libraries is a big win.
Makes it easy to "try before you buy", too. If you decide it's not for you, you can "step out" and keep the generated C code and go from there.
This isn't a very sane plan. The ~300 LOC example mini_grep (https://github.com/z-libs/Zen-C/blob/main/examples/tools/min...) compiles to a ~3.3k LOC monstrosity (https://pastebin.com/raw/6FBSpt1z). It's easier to rewrite the whole thing than going from the generated code.
At least for now, generated code shouldn't be considered something you're ever supposed to interact with.
I looked at it primed for the worst by your comment, but it’s honestly not so bad. A lot of setup, data type code and what looks like overloads.
Very good point that I never considered! Thanks.
Initial commit was 24h ago, 363 stars, 20 forks already. Man, this goes fast.
The guy has been posting a lot about his library since before the initial commit. I've been following him on LinkedIn.
Could be bots.
It's not, it's just how Hacker News works. You'll see new projects hit 1k-10k stars in a matter of a day. You can have what is, to you, the best project or the best article, but if everyone else doesn't think so it'll always be at the bottom. Some luck is involved too. A post upvoted non-organically by bots, I doubt, is going to live long on the front page.
The stars are on GitHub, they can come from somewhere else, e.g. the author himself buying stars.
Hi, I'm the developer's father. Trust me, he hasn't bought a single star in his life—not even in Super Mario :p
This is hella common. Companies have too much money to spend.
That's not how it works. My submission of a (subjectively) better language barely got a couple of comments and GitHub stars.
Definitely could be, but the dev has been posting updates on Twitter for a while now. It could be just some amount of hype they have built.
Basically C2/C3 but Rust influenced. Missed chance to call it C4.
I didn't see any similarities to C3, quite the opposite.
So, the point of this language is to be able to write code with high productivity, but with the benefit of compiling to a low-level language? Overall it seems like the language repeats what Zig does, including the C ABI support, manual memory management with additional ergonomics, and the comptime feature. The biggest difference that comes to mind is that the creator of Zen-C states that it can allow for the productivity of a high-level language.
It has stringly typed macros. It's not comparable to Zig's comptime, even if it calls it comptime:
    fn main() {
        comptime {
            var N = 20;
            var fib: long[20];
            fib[0] = (long)0;
            fib[1] = (long)1;
            for var i=2; i<N; i+=1 {
                fib[i] = fib[i-1] + fib[i-2];
            }
            printf("// Generated Fibonacci Sequence\n");
            printf("var fibs: int[%d] = [", N);
            for var i=0; i<N; i+=1 {
                printf("%ld", fib[i]);
                if (i < N-1) printf(", ");
            }
            printf("];\n");
        }
        print "Compile-time generated Fibonacci sequence:\n";
        for i in 0..20 {
            print f"fib[{i}] = {fibs[i]}\n";
        }
    }
It just literally outputs characters, not even tokens like Rust's macros, into the compiler's view of the current source file. It has no access to type information, as Zig's does, and can't really be used for any sort of reflection as far as I can tell.
The Zig equivalent of the above comptime block would just be:
Notice that there's no code generation step; the value is passed seamlessly from compile-time to runtime code.
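To illustrate the "it just literally outputs characters" point: the Zen-C comptime block above behaves like a separate generator program whose printed text gets spliced back into the source. A rough stand-alone C analogue of that build-time step (a sketch, not Zen-C's actual mechanism):

    /* gen_fibs.c - run at build time; its stdout is pasted back into the
       program as source text, which is all a stringly-typed comptime can do. */
    #include <stdio.h>

    int main(void) {
        long fib[20] = {0, 1};
        for (int i = 2; i < 20; i++)
            fib[i] = fib[i - 1] + fib[i - 2];

        printf("// Generated Fibonacci Sequence\n");
        printf("static const long fibs[20] = {");
        for (int i = 0; i < 20; i++)
            printf("%ld%s", fib[i], i < 19 ? ", " : "");
        printf("};\n");
        return 0;
    }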
I wonder: how can a programming language have the productivity of a high-level language ("write like a high-level language") if it has manual memory management? This just doesn't add up in my view.
I'm writing my own programming language that tries "Write like a high-level language, run like C.", but it does not have manual memory management. It has reference counting with lightweight borrowing for performance-sensitive parts: https://github.com/thomasmueller/bau-lang
C is literally a high level language.
Seriously, in the discussion happening in this thread C is clearly not a high-level language in context.
I get your statement and even agree with it in certain contexts. But in a discussion where high-level languages are presumed (in context) to not have memory management, looping constructs are defined over a semantics inferred range of some given types, overloading of functions (maybe even operators), algebraic datatypes, and other functional language mixins: C most certainly IS NOT a high level language.
This is pedantic to the point of being derailing, and in some ways it seems geared toward ending the discussion by sticking a bar in the conversation's spokes.
glad you bring up context in this note. i find C high level too, but you are right: in a comparison you can still say it's really low level.
C was originally described as high level because the alternatives were things like assembler; it's a term rooted in comparison more than anything.
Thanks, my parent’s comment is almost a thought-terminating cliche in this kind of discussion. However, Chisnall’s now classic ‘C is not a low level language’ article is one of my favorite papers on language theory and potential hardware design. A discussion about the shortcomings of viewing C as a low level language can/could be profitable, deep, and interesting; but context is king.
It has autofree and drop traits.
Nim is a high-level language as well and compiles to C.
Odin and Jai are others.
Vlang compiles to human-readable C too, like Nim; Odin and Jai do not. Here's a post to read on V's rationale for doing so[1]. Incredibly, some vocal competitors mocked V's developers for that decision, then years later have been quietly trying to copy or "steal" those ideas without giving credit (after previously making fun of them).
V's approach is to have various backends, in addition to native (to be focused on from 0.6): C, JavaScript, WASM, etc.
[1] https://github.com/vlang/v/discussions/7849
Does Odin compile to C? I thought it only uses LLVM as a backend
No, Odin does not compile to C. It is a standalone programming language that compiles directly to machine code. It primarily uses LLVM as its backend for compiling to machine code, like you said.
Same question but for Jai.
Jai does not compile to C. It has a bytecode representation that is used primarily for compile time execution of code, a native backend used mostly for iteration speed and debug builds, and a LLVM target for optimized release builds.
chicken scheme compiles to c as well. it's a pretty convenient compilation target, you get to use all the compilers and tool chains out there and you don't add a dependency on llvm
I love CHICKEN Scheme! Nice to see it mentioned. Though I think it's worth pointing out it compiles to something pretty far from handwritten C, to my understanding. I think this is true of both performance and semantics; for example you can return a pointer to a stack allocated struct from a foreign lambda (this is because chicken's generated C code here doesn't really "return", I think. Not an expert).
Of course you can always drop to manually written C yourself and it's still a fantastic language to interop with C. And CHICKEN 6 (still pre-release) improves upon that! E.g structs and Unions can be returned/passed directly by/to foreign functions, and the new CRUNCH extension/subset is supposed to compile to something quite a bit closer to handwritten C; there are even people experimenting with it on embedded devices.
Chicken indeed interoperates with C quite easily and productively. You're right that the generated C code is mostly incomprehensible to humans, but compiles without difficulty.
The Chicken C API has functions/macros that return values and those that don't return. The former include the fabulous embedded API (crunch is an altogether different beast) which I've used in "mixed language" programming to good effect. In such cases Scheme is rather like the essential "glue" that enables the parts written in other languages to work as a whole.
Of course becoming proficient in Scheme programming takes time and effort. I believe it's true that some brains have an affinity for Lispy languages while others don't. Fortunately, there are many ways to write programs to accomplish a given task.
> this is because chicken's generated C code here doesn't really "return", I think. Not an expert.
not an expert either, but you're right about that, it uses cps transformations so that functions never return. there's a nice write up here: https://wiki.call-cc.org/chicken-compilation-process#a-guide...
I am working on mine as well. I think it is very sane to have some activity in this field. I hope we will end up with high-level, easy-to-write code that is fully optimized with very little effort.
There are going to be lots of languages competing with Rust and Zig. It's a popular, underserved market. They'll all have their unique angle.
It has been served for several decades; however, since the late '90s many decided that reducing everything to only C and C++ was the way forward. Now the world is rediscovering that it doesn't have to be like that.
There are certainly going to be lots of languages, because with LLMs it's now easier (trivial?) to make one plus a library (case in point: just within the last month, ~20 new languages with 20k-100k LOC codebases have been posted here), but I don't really see them competing. Rust and Zig brought actual improvements and are basically replacing use cases that C/C++ had, limiting the space available to others.
Uhm, no? There is barely enough space for Rust, which happens to have a unique feature/value proposition that raises it above the vast majority of its competitors. If you're fine with UB or memory-unsafe code, then you go with C simply because it's deeply entrenched.
In that sense Zen-C changed too many things at once for no good reason. If it was just C with defer, there would have been an opportunity to include defer in the next release of the C standard.
> String Interpolation (F-strings)
This is so nice. It's crazy how other low-level langs don't have it. I know Dlang and Rust have it. Maybe Swift too? The way Dlang does it is nice because you can do a lot of stuff with them at compile time.
i really like this project. for me it's the next level up from your own custom C lib.
first you write 'tutorial C'. then after enough segfaults and double frees you start every project with a custom allocator because you've become obsessed with not having that again..., then you implement a library with a custom, more generic one as you learn how to implement them, and add primitives you commonly build that lean on that allocator; it will have your refcounters, maybe ctors, dtors etc. etc. (this at least is my learning path i guess? still have a loooong way to go as always!)
i don't see myself going for a language like this, but i think it's inspirational to see where your code can evolve to with enough experience and care
This illustrates Greenspun's tenth rule very well.
Nice! Compiles in 2s on my unexceptional hardware. But it lacks my other main desiderata in a new language: string interpolation and kebab-case.
Oh, it _does_ have string interpolation, my bad. Sadly, not by default -- you still have to go back and add an "f" before the string once you've started typing it and then realize that you want an interpolated string. Also, it doesn't always work -- if I define two interpolated string variables in one function, GCC chokes in a way I'm not understanding. And every interpolated string variable consumes 4K of global memory.
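For context, a common way to lower an f-string in C is to measure with snprintf and then format into a buffer sized per call, so no fixed 4K global is needed; a sketch of that lowering (interp_fib is a hypothetical helper, not what Zen-C emits):

    #include <stdio.h>
    #include <stdlib.h>

    /* One possible lowering of  f"fib[{i}] = {x}\n" : measure, allocate, format. */
    static char *interp_fib(int i, long x) {
        int n = snprintf(NULL, 0, "fib[%d] = %ld\n", i, x);       /* measure */
        char *s = malloc((size_t)n + 1);
        if (s)
            snprintf(s, (size_t)n + 1, "fib[%d] = %ld\n", i, x);  /* format */
        return s;  /* caller frees */
    }

    int main(void) {
        char *s = interp_fib(7, 13);
        if (s) { fputs(s, stdout); free(s); }
        return 0;
    }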
Impressive repos. I've been toying with the ideas myself, but it's hard to stay on track with this sort of extremely demanding task. I am, however, not exporting to C but to a low-level JIT.
A lot of the ideas in there are worth being inspired by.
Very similar to other C-like languages compiling to C (like Nim, V, and many smaller hobbyist ones), but I love the keyword "embed". It looks like unlimited potential for fast debugging and for testing code without writing boilerplate to read the file and so on.
This feels like a mix of "Cex.C" and "dasae-headers" projects I've seen somewhere before - maybe it's just the Rust and Zig trend.
I wonder how this compares to the Beef programming language.
https://www.beeflang.org/
The Beef programming language was used to write Penny's Big Breakaway.
That's something I used to try to write, but failed due to complexity. A meta-preprocessor for C to make it a little bit more bearable...
KUDOS
The author includes some easter-eggs (printing random facts about Zen and various C constructs) which trigger randomly -- check out the file src/zen/zen_facts.c in the repository...
What about "Cex.C" and "dasae-headers"? they are integrated directly into the C ecosystem
Is this the TypeScript of C?
That's a very nice project.
List of remarks:
> var ints: int[5] = {1, 2, 3, 4, 5};
> var zeros: [int; 5]; // Zero-initialized
The zero-initialized array syntax is not intuitive IMO.
> // Bitfields
If it's deterministically packed.
> Tagged unions
Same question: is the memory layout deterministic (and optimized)? (See the sketch after this list.)
> 2 | 3 => print("Two or Three")
Any reason not to use "2 || 3"?
> Traits
What if I want to remove or override the "trait Drawing for Circle" because the original implementation doesn't fit my constraints? As long as traits are not required to be in a totally different module than the struct, I will likely never welcome them in a programming language.
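On the tagged-union layout question above: if Zen-C lowers them the way most C-targeting languages do, the layout would be an enum tag plus a union, i.e. deterministic but padded rather than size-optimized. This is an assumption about the usual lowering, not taken from the actual compiler output:

    /* Typical C lowering of a tagged union -- an assumption about what a
       shape-like tagged union might become, not Zen-C's actual codegen. */
    enum shape_tag { SHAPE_CIRCLE, SHAPE_RECT };

    struct shape {
        enum shape_tag tag;        /* discriminant */
        union {
            struct { float radius; } circle;
            struct { float w, h; } rect;
        } data;                    /* payload; size = largest variant */
    };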
C uses `|` for bitwise OR and `||` for logical OR. I'm assuming this inherited the same operator paradigm since it compiles to C.
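For reference, this is the C distinction the parent comment is pointing at (in a match arm the `|` reads more like pattern alternation, as in Rust, than an actual bitwise OR, but the operators themselves mean this in C):

    /* The | vs || distinction in C. */
    #include <stdio.h>

    int main(void) {
        int a = 2, b = 3;
        printf("%d\n", a | b);    /* bitwise OR of the bits: 0b10 | 0b11 = 3 */
        printf("%d\n", a || b);   /* logical OR: both non-zero, so 1 */
        return 0;
    }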
The language examples as a whole seem pretty rational, and I'm especially pleased / shocked by the `loop / repeat 5` examples. I love the idea of having syntax support for a "maximum number of iterations", e.g.:
...obviously not trying to start any holy wars around exceptions (which don't seem supported) or exponential backoff (or whatever), but I guess I'm kind of shocked that I haven't seen any other languages support what seems like an obvious syntax feature. I guess you could easily emulate it with `for x in range(3): ...break`, but `repeat 3: ...break` feels a bit more like that `print("-"*80)` feature, but for loops.
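The emulation in plain C is easy enough, just noisier; a rough sketch of the retry-with-a-cap pattern that a `repeat N` syntax would tidy up (try_connect is a made-up placeholder, not anything from Zen-C):

    /* The "maximum number of iterations" pattern spelled out in plain C.
       try_connect() is a hypothetical operation that may fail. */
    #include <stdbool.h>

    bool try_connect(void);

    bool connect_with_retries(void) {
        for (int attempt = 0; attempt < 3; attempt++) {   /* "repeat 3" */
            if (try_connect())
                return true;
        }
        return false;   /* gave up after 3 attempts */
    }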
Ruby has a similarly intuitive `3.times do ... end` syntax
Go also has this now: `for range 3 { ... }` (since Go 1.22).
Answering the title, why not Julia?
Same. I've been using Julia for almost everything for a long time now, and it's an amazing language. Very understated.
The tagline also applies to C :-)
Constant hash seed? Never a good idea (std/core.zc)
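For context on why a constant seed is risky: if the seed is known, an attacker can precompute colliding keys (hash flooding). The usual mitigation is to pick the seed once at process start, which costs a single read. A rough sketch of a seeded hash; the function names are illustrative and not from std/core.zc:

    /* Seeded hashing sketch: the seed is chosen once per process so collisions
       can't be precomputed offline. Names and seeding source are illustrative. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <time.h>

    static uint64_t hash_seed;

    static void init_hash_seed(void) {
        /* Real code should prefer getrandom()/arc4random(); time() is a stand-in. */
        hash_seed = (uint64_t)time(NULL) ^ ((uint64_t)(uintptr_t)&hash_seed << 16);
    }

    static uint64_t hash_bytes(const void *p, size_t n) {
        /* FNV-1a with the offset basis perturbed by the per-process seed. */
        const unsigned char *s = p;
        uint64_t h = 14695981039346656037ULL ^ hash_seed;
        for (size_t i = 0; i < n; i++) {
            h ^= s[i];
            h *= 1099511628211ULL;
        }
        return h;
    }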
What's the performance hit?
18 commits! I hope you keep up with the project, it’s really cool, great work.
Example at the top of the readme!
Is it memory safe?
Am I the only one who saw this syntax and immediately thought, "Man, this looks almost identical to Rust with a few slight variations"?
It seems to just be Rust for people who are allergic to using Rust.
It looks like a fun project, but I'm not sure what this adds to the point where people would actually use it over C or just going to Rust.
> what this adds
I guess the point is what it subtracts, instead - the answer being the borrow-checker.
> answer being the borrow-checker
There is an entire world in Rust where you never have to touch the borrow-checker or lifetimes at all. You can just clone or move everything, or put everything in an Arc (which is what most other languages are doing anyway). It's very easy to not fight the compiler if you don't want to.
Maybe the real fix for Rust (for people that don't want to care), is just a compiler mode where everything is Arc-by-default?
So it re-adds manual lifetime checking. Got it.
It might or might not be a toy project, I'm not sure, but one advantage of subtracting the borrow checking is that the compiler avoids a lot of complex machinery.
Borrow checking in Rust isn't sound AFAIK, even after all these years, so some of the problems with designing and implementing lifetimes, region checking, and borrow checking algorithms aren't trivial.
> Borrow checking in Rust isn't sound AFAIK, even after all these years
Huh? If borrow checking in Rust is unsound, that's akin to saying Rust is utterly broken. Sounds like you've been fed FUD.
If Rust was that unsound, Rust haters would flood Twitter with Rust L takes.
Not just that, it would also mean that every single Rust application would be riddled with very noticeable miscompilation bugs, due to the fact that Rust makes heavy use of strict aliasing rules for optimization. This isn't something that can easily be swept under the rug without people noticing.
I'm reading it as charitably as possible, as in "Maybe some lifetime in some arcane combination is unsound." I did hear that the 'static lifetime shouldn't be permitted in some cases.
The premise is too ridiculous to engage with seriously. Apparently Google/AWS/MSFT/C++ engineers all missed a huge gaping hole in Rust's borrow checker, something that a random commenter could pick up on.
Maybe take the parts of Rust the author likes, but still encourage pointers in high-level operations?
I thought the same and felt it looked really out of place to have I8 and F32 instead of i8 and f32 when so much else looks just like Rust. Especially when the rest of the types are all lower case.
Agreed, that really stood out as a ... questionable design decision, and felt extremely un-ergonomic which seems to go against the stated goals of the language.
Every language is apparently required to make one specific version of these totally arbitrary choices, like whether to call the keyword function, func, fun, fn, or def. Once they do, it’s a foolish inconsistency with everything else. What if the language supported every syntax?
My immediate thought was it looked a lot like Swift
Nice to see it has closure support.
But at that point why not Rust then?
Yet another overly-hyped language with no practical benefits. Is it just one more "better C"?
Why not compile to rust or assembly? C seems like an odd choice.
In fact why not simply write rust to begin with?
Assembly requires way more work than compiling to, say, C. Clang and GCC do a lot of the heavy lifting regarding optimisation, spilling values to the stack, etc.
Then you're stuck with the C stack, though, and no way to collect garbage.
Really? You can't track and count your pointers in C? Why not?
I have a couple of interpreters I've been poking at: one uses 'musttail' while the other uses a trampoline to avoid blowing up the C stack when dispatching the operators. As for GC, the trampoline VM has a full-blown one (with bells and whistles like an arena for short-lived intermediate results that get pushed/popped by the compiled instructions), while the other (a PEG parser VM) just uses an arena, since the 'evaluation' is short-lived and the output is the only thing that really needs tracking; it uses reference counting to be (easily) compatible with the Python C-API. No worries about the C stack at all.
I mean, I could have used the C stack as the VM's stack, but then you have to worry about blowing up the stack, about not having access (without a bunch of hackery -- looking at you, Scheme people) to the values on the stack for GC, and, I imagine, all the other usual issues. It's not needed at all: just make your own stack (or, you know, tail call) and pretend the C one doesn't exist.
And I've started on another VM which does the traditional stack thing, but it's constrained (by the spec) to have a maximum stack depth, so it isn't too much trouble.
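For anyone unfamiliar with the trampoline approach mentioned above, the gist is that each handler returns the next handler instead of calling it, so dispatch never grows the C stack. A minimal sketch under that idea; the ops and state are made up for illustration, not taken from the parent's VMs:

    /* Minimal trampoline dispatch: handlers return the next handler rather than
       calling it, so the C stack stays flat no matter how many ops execute. */
    #include <stdio.h>

    struct vm { int acc; int steps; };

    /* A function can't return a pointer to its own type directly, so the next
       handler is wrapped in a small struct to break the cycle. */
    struct next;
    typedef struct next (*handler_fn)(struct vm *);
    struct next { handler_fn fn; };

    static struct next op_halt(struct vm *vm) {
        (void)vm;
        return (struct next){ NULL };            /* NULL fn stops the loop */
    }

    static struct next op_incr(struct vm *vm) {
        vm->acc++;
        return (struct next){ --vm->steps > 0 ? op_incr : op_halt };
    }

    int main(void) {
        struct vm vm = { .acc = 0, .steps = 1000000 };
        struct next next = { op_incr };
        while (next.fn)                          /* the trampoline loop */
            next = next.fn(&vm);
        printf("%d\n", vm.acc);                  /* prints 1000000 */
        return 0;
    }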
At times people think C is better. See recent discussion about https://sqlite.org/whyc.html
C is best
If I understand the history correctly then it started as a set of C preprocessor macros.
> C is the patrician's choice.
Is this supposed to be a positive thing? I thought we all wanted to violently murder the patricians.
Regardless, C might be a valid IR. I apologize for being bigoted.
Try it again and see how it goes.