Coming from Elixir, I gave Gleam a try for a couple of days over the holidays. Reasons I decided not to pursue:
- No ad-hoc polymorphism (apart from function overloading, IIRC) means no standard way of defining how things work. There are not many conventions in place yet, so you won't know whether a library supports e.g. JSON deserialization for its types
- Coupled with the lack of macros, this means you have to implement even the most basic functionality, like JSON (de)serialization, yourself - even for the stdlib's and the most popular libraries' structs
- When looking at how to access the file system, I learned the stdlib does not provide fs access, as the API couldn't be shared between the JS and Erlang targets. The most popular fs package for the Erlang target didn't look high quality at all. Something so basic and important.
- This made me realise that, in contrast to Elixir, which not only runs on the BEAM (the "Erlang" VM) but also has seamless Erlang interop, Gleam doesn't have access to most of the Erlang/Elixir ecosystem out of the box.

There are many things I liked: the algebraic data types, the Result and Option types, pattern matching with destructuring. Which made me realize what I really want is Rust. My path leads to Rust, I guess.
> Gleam doesn’t have access to most of the Erlang / Elixir ecosystem out of the box.
Gleam has access to the entire ecosystem out of the box, because all languages on the BEAM interoperate with one another. For example, here's a function inside the module for gleam_otp's static supervisor:
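A representative sketch of the kind of binding referred to (using Erlang's supervisor:count_children/1 as a stand-in, not the actual gleam_otp source, and assuming the gleam_erlang package for the Pid type):

    import gleam/dynamic.{type Dynamic}
    import gleam/erlang/process.{type Pid}

    // The implementation comes from Erlang's stdlib at runtime; only
    // the typed signature lives on the Gleam side.
    @external(erlang, "supervisor", "count_children")
    pub fn count_children(sup: Pid) -> Dynamic

One annotation, and an Erlang function becomes callable as ordinary typed Gleam.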
As another example, I chose a package[0] at random that implements bindings to the Elixir package blake2[1]. It's ok if you don't vibe with Gleam – no ad-hoc poly and no macros are usually dealbreakers for certain types of developer – but it's wrong to say you can't lean on the wider BEAM ecosystem!

[0]: https://github.com/sisou/nimiq_gleam/blob/main/gblake2/src/g...

[1]: https://hex.pm/packages/blake2
Isn't this proof of my point? How does needing to write "@external" annotations by hand not contradict the claim of being usable "out of the box"?
Hayleigh, when I asked on the Discord about how to solve my JSON problem in order to get structured logging working, you replied that I was the first one to ask about this.
Now reading this:

> It's ok if you don't vibe with Gleam – no ad-hoc poly and no macros are usually dealbreakers for certain types of developer

certainly makes this feel even more like gatekeeping to me.
I don't think Hayleigh was trying to gatekeep, just noting that some developers prefer features that Gleam intentionally omits.
As for the @external annotations, I think you're both right to a degree. Perhaps we can all agree to say: Gleam can use most libraries from Erlang/Elixir, but requires some minimal type-annotated FFI bindings to do so (otherwise it couldn't claim to be a type-safe language).
This is the same as in Elixir: you need to specify which Erlang function to use if you want to call Erlang code. The only difference is that Gleam has a more verbose syntax for it.
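To illustrate the difference (a sketch using crypto:strong_rand_bytes/1 from the Erlang stdlib; the Elixir side is shown in a comment):

    // In Elixir, calling Erlang is a one-liner:
    //   :crypto.strong_rand_bytes(16)
    // In Gleam, the same function needs a typed declaration first:
    @external(erlang, "crypto", "strong_rand_bytes")
    pub fn strong_rand_bytes(n: Int) -> BitArray

    pub fn main() -> BitArray {
      // After the declaration, the call site is ordinary Gleam.
      strong_rand_bytes(16)
    }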
How does it contradict it? Without any modification or installation you can interop with Erlang/JavaScript. How is that not out-of-the-box usability of the Erlang/JS ecosystem? The syntax isn't as seamless as Elixir's, but we need a way to tell Gleam what types are being passed around.
Why do you feel like a gatekeeper? Your opinion is valid, it's just that the interop statement was wrong.
That's FFI bindings. I need to provide the function signature of every API, because Erlang isn't statically typed. It's okay if some library provides it (like the one linked), but I don't want to write this by hand if I can avoid it. And it's definitely not out of the box; someone has to write the bindings for it to work.
It would be different if I didn't have to write bindings and Gleam integrated automatically with foreign APIs. For Erlang that's probably not possible, but for the JavaScript ecosystem it could maybe make use of TypeScript signatures (it would be very hard, though).
Yeah, it's there out of the box but it's certainly not seamless. For an Elixir dev, it is more friction than you're used to. It is the cost of static types.
I'm a bit torn on ad-hoc polymorphism. You can definitely do cool things with it. But, as others have pointed out, it does reduce type safety:
https://cs-syd.eu/posts/2023-08-25-ad-hoc-polymorphism-erode...
The same point holds for interfaces. And it's not clear what the alternative is. No type system I'm aware of would force you to change all occurrences of this business-logic pattern, with or without ad-hoc polymorphism.

But at least ad-hoc polymorphism lets you search for all instances of that business logic easily.
I’ve been doing Elixir for 9 years, 5 professionally. Nobody cares about ad-hoc polymorphism. The community doesn’t use protocols except “for data”. Whatever that means. Global singleton processes everywhere. I’m really discouraged by the practices I observe but it’s the most enjoyable language for me still.
> I’ve been doing Elixir for 9 years, 5 professionally. Nobody cares about ad-hoc polymorphism.
That’s true for Elixir as practiced, but it’s the wrong conclusion for Gleam.
Elixir doesn’t care about ad-hoc polymorphism because in Elixir it’s a runtime convention, not a compile-time guarantee. Protocols don’t give you universal quantification, exhaustiveness, coherence, or refactoring safety. Missing cases become production crashes, not compiler errors. So teams sensibly avoid building architecture on top of them.
In a statically typed language, ad-hoc polymorphism is a different beast entirely. It’s one of the primary ways you encode abstraction safely. The compiler enforces that implementations exist, pushes back on missing cases, and lets you refactor without widening everything into explicit pattern matches.
That’s exactly why people who like static types do care about it.
Pointing to Elixir community norms and concluding “nobody cares” is mixing up ecosystem habits with language design. Elixir doesn’t reward those abstractions, so people don’t use them. Gleam is explicitly targeting people who want the compiler to carry more of the burden.
If Gleam is “Elixir with types,” fine, lack of ad-hoc polymorphism is consistent. If it’s “a serious statically typed language on the BEAM,” then the absence is a real limitation, not bikeshedding.
Static types aren’t about catching typos. They’re about moving failure from runtime to compile time. Ad-hoc polymorphism is one of the main tools for doing that without collapsing everything into concrete types.
That’s why the criticism exists, regardless of how Elixir codebases look today.
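For concreteness, a sketch of the usual Gleam workaround (a hypothetical helper; without typeclasses, the "instance" is an explicit function argument at every call site):

    import gleam/int

    // Instead of resolving a typeclass instance, the behaviour is
    // passed in explicitly as a function value.
    pub fn cache_key(key: k, to_string: fn(k) -> String) -> String {
      "cache:" <> to_string(key)
    }

    pub fn main() {
      // Callers supply the "instance" by hand.
      #(cache_key(42, int.to_string), cache_key("users", fn(s) { s }))
    }

This works, but it is exactly the explicit plumbing the comment above describes.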
Well, for the specific example I gave (JSON serialization), you certainly do care whether Jason.Encoder is implemented for a struct.
Yes, I just ranted, sorry. I share your view about Gleam.
IMHO this is an education problem.
A problem which plagues 90% of people? How do you overcome it?
Saw a great talk about Gleam last year at the Carolina Code Conference.
https://youtu.be/vyEWc0-kbkw?si=AayavKhhoqO5Mydh
Well. Coming from TS, Gleam just wasn't/isn't my jam. It's a nice programming-language research project, but it just goes against the grain for me a little too much. All the made-up rules: early returns always being a weird `use` call, the type boilerplate (no inline object types, as I remember). Lots of inventions that just make me go "why?" Like the opposite ideology to Go. And yes, I've used Haskell before (didn't like it) and Rust (kinda like it) and others in smaller quantities.

I am more excited about making things than fetishizing language paradigms, so I acknowledge that Gleam just isn't for me. It did give me the insight that, for me, it might be best to stick with the common-denominator languages for the foreseeable future.
I am in love with Gleam! As a young computer science student, I found that Gleam brought back the joy of programming just when I felt like I was seriously burning out. I was never a fan of functional programming languages. I had tried other BEAM languages like Elixir and Erlang before, but Gleam is the one I’ve enjoyed the most :)
Have you tried F#? That usually gets a lot of praise in FP discussions.
For a fairly advanced example project I can recommend looking at Quickslice, a dev toolkit for making AT protocol applications.
https://tangled.org/slices.network/quickslice
For anyone opening the link and wondering why the expected "gleam.toml" is missing: the project contains 2 Gleam sub-projects. The server/ directory is the BEAM server (no framework) and the client/ directory is the gleam-compiled-js client (lustre framework).
There are many tests for the server but, unfortunately, none for the client.
I'd rather they stick with ONE: JS or BEAM. Every time a project claims it can do multiple things at once, it can't do any of them very well.
It's confusing too. Is Gleam suitable for distributed computing like Elixir/Erlang on BEAM? Would that answer change if I compile it to JS?
My “intro to Gleam” was a lustre form for my blog, where people could submit feedback. So I was able to create a neatly separated client module in Gleam and compile it to JavaScript so I could insert it into my static blog page. The server part was a separate Gleam module with Erlang as the target. They shared models and some constants via a “shared” module - just like the tutorial.

I find this kind of explicit separation very powerful. It also removes some of the anxiety about whether something will end up in a client bundle when it's supposed to be server-only.
I've used Gleam for a toy project in uni, and for AoC.

My main friction point is that the Int type maps to different concepts in Erlang and JS: in Erlang it's an arbitrary-precision integer, while in JS it's the JS number type, which is a 64-bit float, IIRC. Also, recursion can hit limits way sooner in JS.

For me, my code rarely ran in both JS and Erlang. But that could be a skill issue.
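A small sketch of that footgun (hypothetical snippet; `echo` requires a recent Gleam version):

    pub fn main() {
      // On the Erlang target integers are arbitrary precision, so this
      // prints exactly. On the JavaScript target Int is the JS number
      // type (a 64-bit float), so values above 2^53 lose precision.
      echo 9_007_199_254_740_993
    }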
Fair, but you usually don't run your project on both, unless you're writing a library.
Pick the target that makes sense for your project and stick with it :)
Gleam is technically as suitable for distributed computing as Erlang: since it compiles to Erlang, it can do anything that Erlang can. You can use Erlang and Elixir libraries and write FFI code to do things that would be unergonomic in Gleam. Sure, the experience is different, and if you want to embrace the guarantees of static typing the APIs will look different, like gleam_otp's.
If you compile it to JS, then the guarantees change to JS's guarantees.
Personally I've felt that the JS target is a big plus and hasn't detracted from Gleam. Writing a full-stack app with both sides in Gleam and sharing common code is something I've enjoyed a lot. The most visible impact is that there are no target-specific functions in the stdlib or the language itself, so Erlang-related things are in gleam_erlang and gleam_otp, and e.g. filesystem access is a package instead of being in the stdlib. If you're just into Erlang, you don't need to interact with the JS target at all.
Same here. I've only been using it for a bit and have 100% been ignoring the JS part. The only time I felt I needed to think about it for a moment was when writing a patch for someone else's code that did not ignore it; so basically, when contributing to a library you might have to do extra work.
Of course I can't say if anyone ever made any decisions based on the other target that would have repercussions for me only using the BEAM.
I remember playing with Alpaca a few years ago, and it was fun, though I didn't find the resulting code to be significantly less error-prone than when I wrote regular Erlang. It's inelegant, but I find that Erlang's quasi-runtime-typing with pattern matching gets you pretty far, and it fits Erlang's "let it crash" philosophy nicely.
Honestly, and I realize that this might get me a bit of flak here and that's obviously fine, but I find type systems start losing utility with distributed applications. Ultimately everything being sent over the wire is just bits. The wire doesn't care about monads or integers or characters or strings or functors, just 1s and 0s, and ultimately I feel like imposing a type system can often get in the way more than it helps. There's so much weirdness and uncertainty associated with stuff going over the wire, and pretty types often don't really capture that.
I haven’t tried Gleam yet, and I will give it a go, and it’s entirely possible it will change my opinion on this, so I am willing to have my mind changed.
I don’t understand this comment, yes everything going over the wire is bits, but both endpoints need to know how to interpret this data, right? Types are a great tool to do this. They can even drive the exact wire protocol, verification of both data and protocol version.
So it’s hard to see how types get in the way instead of being the ultimate toolset for shaping distributed communication protocols.
Bits get lost; if you don't have protocol verification, you get mismatched types.
Types naively used can fall apart pretty easily. Suppose you have some data being sent in three chunks. Suppose you get chunk 1 and chunk 3 but chunk 2 arrives corrupted for whatever reason. What do you do? Do you reject the entire object since it doesn’t conform to the type spec? Maybe you do, maybe you don’t, or maybe you structure the type around it to handle that.
But let's dissect that last suggestion: suppose I do modify the type to encode that. Suddenly pretty much every field more or less just becomes Maybe/Optional. Once everything is Optional, you don't really have a "type" anymore; you have a runtime check of the type everywhere. This isn't radically different from regular dynamic typing.
There are more elaborate type systems that do encode these things better like session types, and I should clarify that I don’t think that those get in the way. I just think that stuff like the C type system or HM type systems stop being useful, because these type systems don’t have the best way to encode the non-determinism of distributed stuff.
You can of course ameliorate this somewhat with higher level protocols like HTTP, and once you get to that level types do map pretty well and you should use them. I just have mixed feelings for low-level network stuff.
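A minimal Gleam sketch of the degenerate shape being described, where every field collapses into an Option that each consumer must re-check:

    import gleam/option.{type Option}

    // Hypothetical payload assembled from unreliable chunks: any
    // part may be missing or corrupt, so nothing can be required.
    pub type Payload {
      Payload(
        header: Option(String),
        body: Option(String),
        checksum: Option(Int),
      )
    }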
> But let's dissect that last suggestion: suppose I do modify the type to encode that. Suddenly pretty much every field more or less just becomes Maybe/Optional. Once everything is Optional, you don't really have a "type" anymore; you have a runtime check of the type everywhere. This isn't radically different from regular dynamic typing.
Of course it’s different. You have a type that accurately reflects your domain/data model. Doing that helps to ensure you know to implement the necessary runtime checks, correctly. It can also help you avoid implementing a lot of superfluous runtime checks for conditions you don’t expect to handle (and to treat those conditions as invariant violations instead).
No, it really isn't that different. If I had a dynamic type system, I would have to null check everything. If I declare everything as a Maybe, I still have to null check everything.
For things that are invariants, that’s also trivial to check against with `if(!isValid(obj)) throw Error`.
Sure. The difference is that with a strong typing system, the compiler makes sure you write those checks. I know you know this, but that’s the confusion in this thread. For me too, I find static type systems give a lot more assurance in this way. Of course it breaks down if you assume the wrong type for the data coming in, but that’s unavoidable. At least you can contain the problem and ensure good error reports.
You missed the entire point of the strong static typing.
I don’t think I did. I am one of the very few people who have had paying jobs doing Scala, Haskell, and F#. I have also had paying jobs doing Clojure and Erlang: dynamic languages commonly used for distributed apps.
I like HM type systems a lot. I've given talks on type systems, and I was working on trying to extend type systems to deal with these particular problems in grad school. This isn't meant as a statement on types entirely. I am arguing that most type systems don't encode a lot of the uncertainty you find when going over the network.
With all due respect, you can use all of those languages and their type systems without recognizing their value.
For ensuring bits don't get lost, you use protocols like TCP. For ensuring they don't silently flip on you, you use ECC.
Complaining that static types don't guard you against lost packets and bit flips is missing the point.
With all due respect, you really do not understand these protocols if you think “just use TCP and ECC” addresses my complaints.
Again, it’s not that I have an issue with static types “not protecting you”, I am saying that you have to encode for this uncertainty regardless of the language you use. The way you typically encode for that uncertainty is to use an algebraic data type like Maybe or Optional. Checking against a Maybe for every field ends up being the same checks you would be doing with a dynamic language.
I don’t really feel the need to list out my full resume, but I do think it is very likely that I understand type systems better than you do.
> ends up being the same checks you would be doing with a dynamic language
Sure thing. Unless a dev forgets to do (some of) these checks, or some code downstream changes and the upstream checks become gibberish or insufficient.
Fair enough, though I feel so entirely differently that your position baffles me.
Gleam is still new to me, but my experience writing parsers in Haskell and handling error cases succinctly through functors was such a pleasant departure from my experiences in languages that lack typeclasses, higher-kinded types, and the abstractions they allow.
The program flowed happily through my Eithers until it encountered an error, at which point that was raised with a nice summary.
Part of that was GHC extensions, though they could easily be translated into boilerplate, and that only had to be done once per class.

Gleam will likely never live up to that level of programmer joy; what excites me is that it's trying to bring some of it to the BEAM.
It’s more than likely your knowledge of type systems far exceeds mine—I’m frankly not the theory type. My love for them comes from having written code both ways, in C, Python, Lisp, and Haskell. Haskell’s types were such a boon, and it’s not the HM inference at all.
While I don't agree with the OP about type systems, I understand what they mean about erlang. When an erlang node joins a cluster, it can't make any assumptions about the other nodes, because there is no guarantee that the other nodes are running the same code. That's perfectly fine in erlang, and the language is written in a way that makes that situation possible to deal with (using pattern matching).
> Honestly, and I realize that this might get me a bit of flak here and that's obviously fine, but I find type systems start losing utility with distributed applications. Ultimately everything being sent over the wire is just bits.
Actually, Gleam somewhat shares this view: it doesn't pretend that you can do typesafe distributed message passing (and it doesn't fall into the decades-running trap of trying to solve this). Distributed computing in Gleam would involve handling dynamic messages the same way any other response from outside the system is handled.

This is a bit more boilerplate-y, but IMO it's preferable to the other two options: pretending it's type safe, or not existing.
Interesting! I don't share that view at all — I mean, everything running locally is just bits too, right? Your CPU doesn't care about monads or integers or characters or strings or functors either. But ultimately your higher level code does expect data to conform to some invariants, whether you explicitly model them or not.
IMO the right approach is just to parse everything into a known type at the point of ingress, and from there you can just deal with your language's native data structures.
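A sketch of that parse-at-ingress approach in Gleam (using the gleam/dynamic/decode API from recent stdlib versions; User is a hypothetical type):

    import gleam/dynamic.{type Dynamic}
    import gleam/dynamic/decode

    pub type User {
      User(name: String, age: Int)
    }

    // Decode untyped input once, at the boundary. Everything after
    // this function deals only with a plain User value.
    pub fn parse_user(data: Dynamic) -> Result(User, List(decode.DecodeError)) {
      let decoder = {
        use name <- decode.field("name", decode.string)
        use age <- decode.field("age", decode.int)
        decode.success(User(name: name, age: age))
      }
      decode.run(data, decoder)
    }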
I know everything reduces to bits eventually, but modern CPUs and memory aren’t as “lossy” as the network is, meaning you can make more assumptions about the data being and staying intact (especially if you have ECC).
Once you add distribution you have to encode for the fact that the network is terrible.
You absolutely can parse at ingress, but then there are issues with that. If the data you got is 3/4 good but one field is corrupted, do you reject everything? Sometimes, but probably not often: network calls are too expensive, so you encode that into the type with a Maybe. But of course any field could be corrupt, so you have to encode lots of fields as Maybes. Suddenly you have reinvented dynamic typing, but it's LARPing as a static type system.
I think you can avoid most issues by not doing what you're describing! Ensuring data arrives uncorrupted is usually not an application-level concern, and if you use something like TCP you get that functionality for free.
TCP helps, but only to a certain extent; it only guarantees specific ordering of bits during its session. Suppose you have to construct an object out of three separate transmissions, like some kind of multipart-style thing. If one of the transmissions gets corrupted or errors out at the TCP level, then you still fall into that Maybe trap.
so you need transactions?
I get what you're saying, but can't you have the same issue if instead you have 3 local threads that you need to get the objects from? One can throw an exception and you only receive 2: same problem.
Sometimes, but I am arguing that you need to encode for this uncertainty if you want to make distributed apps work correctly. If you can do transactions for what you’re doing then great, not every app can do that.
When you have to deal with large amounts of uncertainty, static types often reduce to a bunch of optionals, forcing you to null check every field. This is what you end up having to do with dynamic typing as well.
I don’t think types buy you much in cases with extreme uncertainty, and I think they create noise as a result.
It's a potentially similar issue with threads as well, especially if you're not sharing data between them, which has problems similar to a distributed app's.
A difference is that it’s much cheaper to do retries within a single process compared to doing it over a network, so if something gets borked locally then a retry is (comparatively) free.
> static types often reduce to a bunch of optionals, forcing you to null check every field
On one end, you write / generate / assume a deserialiser that checks whether incoming data satisfies all required invariants, e.g. all fields are present. On the other end, you specify a type that has all the required fields in the required format.

If deserialisation fails to satisfy the type requirements, it produces an error which you can handle by e.g. falling back to a different type, rejecting the operation, or re-requesting the data.
If deserialisation doesn't fail – hooray, now you don't have to worry about uncertainty.
The important thing here is that uncertainty is contained in a very specific place. It's an uncertainty barrier, if you wish: before it there's raw data, after it it's either an error or valid data.
If you don't have a strict barrier like that – every place in the program has to deal with uncertainty.
So it's not necessarily about dynamic vs. static. It's about being able to set barriers that narrow down uncertainty and a growing number of assumptions. The good thing about an ergonomic type system is that it allows you to offload these assumptions from your mind by encoding them in the types and letting the compiler worry about them.

It's basically automation of assumption bookkeeping.
But your program HAS to have some invariants. If those are not held, simply reject all the data!
What the hell is really the alternative here? Do you just pretend your process can accept any kind of data, and just never do anything with it??
If you need an integer and you get a string, it just doesn't work. This has nothing to do with types. There's no solution here; it's just no thank you, error, panic, 500.
You seem to have a fundamental misunderstanding about type systems. Most (the best?) type systems are erased. This means they only have meaning at compile time, making sure your code is sound and preferably without UB.

The "it's only bits" thing makes no sense in the world of types. In the end it's machine code, which humans never (in practice) write or read.
I really like the idea of Gleam, but I don't want to hand-implement serialization for every type (even with an LSP action) in 2026.
Indeed. Gleam is a sort-of mix between Elixir and Rust, yet you don't have to explicitly implement serialization for either of them.
It's definitely something they should figure out.
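For reference, this is roughly the shape of the hand-written encoder in question (a sketch using the gleam_json package; User is a made-up type):

    import gleam/json

    pub type User {
      User(name: String, age: Int)
    }

    // Today you write one of these for every type you serialise.
    pub fn user_to_json(user: User) -> String {
      json.object([
        #("name", json.string(user.name)),
        #("age", json.int(user.age)),
      ])
      |> json.to_string
    }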
Biggest issue with this language. But... it's fairly trivial to implement codegen with gleam/glance[0]. No good libraries exist for this right now (e.g. with support for discriminated unions).
[0] https://hexdocs.pm/glance/glance.html
Dart has the same glaring issue (yes, yes, you can use a codegen library but it's not the same).
I rarely serialise every type in my Gleam code. My quick back-of-the-napkin math says less than 5%.
But 100 percent of projects are writing the same stuff.
One of the best things about Erlang/Elixir is the REPL-driven development and manual testing.

Gleam has no `interpreted` story, right? Something like Clojure, Common Lisp, etc. I think this matters because debugging on the BEAM is not THAT great; there are tools in Erlang/Elixir to facilitate debugging, like inspect() or dbg().

If anyone has experience with this language, what is the mindset with Gleam? How do you guys debug?
> If anyone has experience with this language, what is the mindset with Gleam? How do you guys debug?

There is the echo keyword now, which is comparable to Elixir's dbg(); I use that a lot.
Lacking a REPL, what I normally do is make a dev module, like 'dev/playground.gleam' where I'm testing things out (this is something that the gleam compiler supports, /dev is similar to /test) and then run it with 'gleam run -m playground'.
Sometimes I also use the Erlang shell. You can get an Erlang shell with all the gleam modules from your project loaded in with the 'gleam shell' command. You just need to know the Erlang syntax, and how Gleam modules are named when compiled to Erlang (they use an '@' separator, so gleam/json becomes 'gleam@json').
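To make that workflow concrete, a sketch of such a playground module (hypothetical contents; `echo` needs a recent Gleam version):

    // dev/playground.gleam, run with: gleam run -m playground
    import gleam/list

    pub fn main() {
      let doubled = list.map([1, 2, 3], fn(x) { x * 2 })
      // echo prints the value with its source location and returns it
      // unchanged, so it is easy to sprinkle around while testing.
      echo doubled
    }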
You can use all the BEAM debuggers and tracing tools, and Gleam has a print debugging keyword.
Unfortunately there is not yet a plugin for the BEAM debuggers for them to use Gleam syntax.
Gleam is interesting from a language-nerd point of view; however, I never had a reason to use Erlang at work, and probably never will, and I suspect that's true for most folks.
It's funny how we avoid the technologies we can't complain about much. Seeing an Elixir project in production, I always wonder "why are we not using it more often?" I'm talking more about Elixir here.

For Elixir, I saw a simple distributed job scheduler - it was dead simple in code, and it was ripped out because it didn't require maintenance for ~8 years, just working without issue, while the people who knew anything about it left the company or moved to another part of it and acted as if they had forgotten everything.

The other example is a medium-sized (in terms of features and code) web app - maintained by <30 people now, delivering more than 800 people do at the other company, no stress, no issues, and with great DX because of the BEAM (the other company is drowning in JVM-based nano-services).
The way many of us get work assignments is:
- Have to deploy product XYZ (because we don't write everything from scratch)
- Need to extend said product
- Use one of the official SDKs, because we aren't yak shaving for new platforms
Thus that is how we end up using the languages we kind of complain about.
To be fair, languages like Elixir and Gleam exist because too many people complain about Erlang, which I, with my Prolog background, see no issues with.
I think the problem is that there is Erlang the syntax, then Erlang the features, and then there's OTP. It's a bit much all at once if you haven't done FP before and have only used languages with C-like syntax (e.g. Java).

When I joined an Erlang project I also had some aha moments with the syntax and how stuff is structured, and I found Elixir much nicer to work with (without any real Ruby experience). I don't want to say Erlang is not modern enough, but some things felt like around half the work (and more enjoyable) with some Elixir libraries (a vastly bigger ecosystem than pure Erlang's), for example handling XML.
It might be a bit simplistic, but I don't think you really lose anything meaningful when using Gleam or Elixir over pure Erlang. Just like you don't lose anything when using Clojure or Kotlin over pure Java.
I'm still suspicious of the effectiveness of bolting a type system onto a complete existing system, like TypeScript onto JavaScript. We still observe so many `as any` or `as unknown as` casts at every corner.

Despite that suspicion, Gleam provides a better, more elegant syntax for those who are not familiar with Erlang or functional programming languages, which is what I loved most.
That doesn't really apply to Gleam: it's not a type syntax layered over another language that can be stripped away, it's its own language that compiles to Erlang and JS.
There’s no “unknown” or “any” in Gleam, it’s not possible to cheat the type system that way
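A short sketch of what that means in practice: foreign or unknown data arrives as Dynamic, and the only way out is a decoder that can fail, so the escape hatch is a checked Result rather than a cast (assuming the gleam/dynamic/decode API from recent stdlib versions):

    import gleam/dynamic.{type Dynamic}
    import gleam/dynamic/decode

    // No equivalent of `as any`: going from Dynamic to Int must go
    // through a decoder, and the failure case is explicit.
    pub fn to_int(value: Dynamic) -> Result(Int, List(decode.DecodeError)) {
      decode.run(value, decode.int)
    }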
A recent post about using Gleam for Advent of Code:
https://news.ycombinator.com/item?id=46255991
Gleam is ready, and it is amazing. We use Gleam as the main language at our company.
What's the product/use case?
We are a web agency, so client projects that are open to using a new language, plus our own products, like:
https://news.ycombinator.com/item?id=46530011
One of those programming languages with a political agenda.
A red flag for future drama that might cause problems; one of the reasons I walked away.
human rights are not politics
Every "right" is politics.
A set of "rights" comes from current law.
And a code of law is an invention like everything else.
All open source projects are political by their very nature.
But not every open source project has a political agenda.
All open source projects have a political agenda. That's the purpose of the licence, to force certain behaviour.
The type of people complaining about this are usually the people you don't want in your community to begin with, so I doubt Gleam is missing out here.
Or, on the contrary, the kind of people who find this cool are usually the people you don't want in your community. Nice to have clarity about who doesn't want to even bother dealing with whom.
I'm now working on a real world legacy Elixir project in my day job and man oh man do I miss well defined types. Coming from Go, it makes a huge difference to my productivity when I'm able to click through fields and find usages of things, which comes down to the excellence of the Go language server. I know that the Elixir language server can infer some of this, but the language server in my experience is very fickle and flat out doesn't work if you have an older Elixir project.
I'm paying keen attention to Gleam to see if it can provide a robust development experience in this way, in the longer term.
Do the big updates to Elixir's type system help at all? afaik the most recent update added a huge amount of coverage that should extend to older code automatically.
I don't want to go into details of my work project too much, but the fundamental issue is that ElixirLS only supports 1.12+ (at least last time I checked).
Thought I'd try the showcase example in Raku (https://raku.org), to see what the Gleam snippet on their homepage becomes in Raku.
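(For context, the Gleam showcase example being compared is presumably along these lines; the Raku versions aren't reproduced here:)

    import gleam/io

    pub fn main() {
      io.println("hello, friend!")
    }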
Well, maybe you really want to have a main() so you can pass a name in from the command line.

Oh God, they actually put that awful logo front and center.
I'd always thought it would be a Go-like thing, where they put the mascot away everywhere except a minor hero section, or bury it in the footer.
RIP Perl.
Raku looks sweet, but what is the point of this comparison? :)
I love coding in Raku - and I am sure that Gleam is nice too. But I get the feeling that Raku is underappreciated / dismissed by many due to the perl5 / perl6 history. So my thinking is, when I see a new language showcase an example on their website, presumably a carefully chosen snippet that showcases their language at its best, I like to see how Raku compares to that.
You know, the takeaways from the comparison are quite instructive:
- do I need to import the io lib? (shouldn't this just be included)
- do I need a main() in every script? (does this rule out one liners like `> raku -e "say 'hi'"`)
- is `io.println` quite an awkward way to spell `print`?
I am not making the case that these are right or wrong language design decisions, but I do think that they are instructive of the designers' goals. In the case of Raku it's "batteries included" and a push for "baby Raku" to be as gentle on new coders as e.g. Python.
The differences you mentioned are advantageous for Gleam, depending on what you want. Like, having to namespace symbols instead of implicitly importing them makes it explicit where things come from, which is good. Needing main: same thing. But the big differences are that Gleam is both functional, so everything is immutable, and fully type safe. Completely the opposite of Perl/Raku, so comparing these languages makes zero sense. If you don't need types or functional programming, you probably would just never use Gleam.
I think they have an issue on the homepage: there is no "download/get started" link. All the big buttons link to a tour page, and it stops there.
I was able to find that in the docs at https://gleam.run/documentation/ (from gleam.run).
I've always thought this would be an excellent language for coding agents.
To use to write coding agents or for coding agents to write code in?
Gleam is nice. However, it is still very lacking in the stdlib; you will need lots of dependencies to build something usable. I kind of wish Gleam could target something like Go, so you would have the option to go native without a "heavy" VM like the BEAM.
Surely the BEAM is one of the major selling points.
In a world with package management there’s no practical difference between the core modules being in one package or multiple packages.
Now here's a type-safe functional programming language I recently bumped into which, with its focus on simplicity, ease of use, and developer experience, and compiling to either Erlang or JavaScript, is really tempting to delve into more deeply.