> Elixir and F# have |> but neither auto-traces.
Using dbg/2 [1]: pipe the end of a chain into dbg() and it prints every step of the pipeline along with its value.
[1] Debugging - dbg/2: https://hexdocs.pm/elixir/debugging.html#dbg-2
I should have bet more on Elixir. I did work in F#, but MS never really seemed serious enough about it, whereas the Elixir community keeps going strong.
> The Killer Feature: |> with Auto-Tracing. No other language has this combination
Of the languages listed, Elixir, Python, and Rust can all achieve this combination. Elixir has a built-in pipe operator, and Python and Rust have operator overloading, so you could overload the bitwise | operator (or any other operator you like) to act as a pipeline operator. Rust and Elixir also have macros, and Python has decorators, any of which can be used to automatically add logging/tracing to functions.
It's not automatic for all functions, though having to be explicit/selective about what is logged/traced is generally considered a good thing. It's rare that real-world software wants to log/trace literally everything, since it's not only costly (and slow) but also a PII risk.
In Rust, wouldn't implementing BitOr for Fn/FnOnce/FnMut violate the orphan rule?
I'm envisioning that in Rust (and Python), the operator overload would be on a class/struct. It would be the macro/decorator (the same one that adds logging) which would turn the function definition into an object that implements Fn.
I have done exactly that as an exercise in what you can do with Python: overload |, plus a decorator you can apply to any function to get back an instance of a callable class that calls that function and overloads |.
Whether it is a good idea to use it is another matter (it does not feel Pythonic), but it is easy to implement.
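For anyone curious, here is a minimal sketch of that approach in Python. The `pipeable`/`Pipeable` names and the trace output format are illustrative, not from the comment above: a decorator wraps a function in a callable object that overloads |, logging each stage as a value flows through.

```python
import functools

class Pipeable:
    """Callable wrapper that overloads | so wrapped functions can be chained."""

    def __init__(self, fn):
        self.fn = fn
        functools.update_wrapper(self, fn)  # preserve __name__, __doc__, etc.

    def __call__(self, *args, **kwargs):
        result = self.fn(*args, **kwargs)
        args_repr = ", ".join(repr(a) for a in args)
        print(f"[trace] {self.fn.__name__}({args_repr}) -> {result!r}")
        return result

    def __ror__(self, value):
        # value | wrapped_fn  ==>  wrapped_fn(value)
        return self(value)


def pipeable(fn):
    """Decorator: turn an ordinary function into a traced, pipe-friendly callable."""
    return Pipeable(fn)


@pipeable
def double(x):
    return x * 2

@pipeable
def increment(x):
    return x + 1

# Prints one trace line per stage and evaluates to 7.
result = 3 | double | increment
```

The `|` dispatch works because int's own __or__ returns NotImplemented for a Pipeable operand, so Python falls back to Pipeable.__ror__, which simply applies the wrapped function.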
Somehow this reads like model CoT.
Rust is really not built for pipelining. It is extremely cumbersome to do even moderately sized chains of maps, filters, etc.
Python's scoping and mutability make it an extremely poor language for pipelining.
Pretty cool to have a first-class tracing mechanism. Obviously... it's a monad! Haskell has had a MonadTrace monad for a long time, which can be switched on or off depending on your environment.
https://hackage.haskell.org/package/tracing-0.0.7.4/docs/Con...
Haskell guys gonna call a for loop a monad and then gush about how amazing monads are.
It's cool to see someone build another language in Python using lark. It's also possible to override the ">>" or "|" operators in Python itself to achieve the same thing, and then you don't have to worry about the lark grammar.
I had a custom lark grammar that did something similar and thought it was cool, but after a while I discarded it and went back to straight Python, and found it was faster by an order of magnitude.
Pipelines are often dynamic; how is that achieved here?
Pipelines are just a description of a computation. Sometimes it makes sense to optimize for throughput rather than low latency, e.g. by batching; is execution separate from the pipeline definition?
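To make the question concrete, one common way to get that separation (a hand-rolled Python sketch, not how MOL actually does it) is to keep the pipeline as plain data and let the caller choose an executor: per-item for latency, batched for throughput.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Pipeline:
    """A pipeline is just data: an ordered list of stages (plain functions)."""
    stages: List[Callable]

    def then(self, fn: Callable) -> "Pipeline":
        return Pipeline(self.stages + [fn])


def run_per_item(pipeline: Pipeline, items: Iterable):
    """Low-latency execution: push each item through all stages immediately."""
    for item in items:
        for stage in pipeline.stages:
            item = stage(item)
        yield item


def run_batched(pipeline: Pipeline, items: Iterable, batch_size: int = 64):
    """Throughput-oriented execution: run each stage over a whole batch at a time."""
    batch = list(items)
    for i in range(0, len(batch), batch_size):
        chunk = batch[i:i + batch_size]
        for stage in pipeline.stages:
            chunk = [stage(x) for x in chunk]
        yield from chunk


# The definition is shared; only the executor differs.
pipe = Pipeline([]).then(lambda x: x * 2).then(lambda x: x + 1)
print(list(run_per_item(pipe, range(5))))                # [1, 3, 5, 7, 9]
print(list(run_batched(pipe, range(5), batch_size=2)))   # same values, batched execution
```

The same Pipeline value can be handed to whichever executor fits the workload, which is what makes the "description vs. execution" split useful.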
Cool project. Could you expand on the use case for something like this compared to, e.g., a Python library? Maybe an example of more complex workflows or open-ended loops/agents that would showcase the pros of using such a language over other solutions. Are these pipelines durable, for example, and how do they execute?
I like it. Seems like a nice combination of features. It's pitched at AI/ML use cases, which is understandable given the current hype train, but at first glance I think it can stand up well in a more general-purpose context.
Re: pipe tracing, half a decade or so ago I made a little language called OTPCL, which has user-definable pipeline operators; combined with the ability to redefine any command in a given interpreter state, it'd be straightforward for a user to shove something like (pardon the possibly-incorrect syntax; haven't touched Erlang in a while)
into an Erlang module, and then by adding that to a custom interpreter state with otpcl:cmd/3 you end up with automatic logging every time a script uses a pipe.

Downside is that you'd have to do this for every command defining a pipe operator (i.e. every command with a name starting with "|"); an alternate user-facing approach would be to get the AST from otpcl:parse/1, inject log/trace commands before or after every command, and pass the modified tree to otpcl:interpret/2 (alongside an interpreter state with those log/trace commands defined). Or do the logging outside of the interpreter between manual calls to otpcl:interpret/2 for each command; something like
should do the trick, covering all pipes and ordinary commands alike.

Very interesting! I'll definitely give it a try. However, the documentation link [1] isn't working at the moment (404).
[1] https://crux-ecosystem.github.io/MOL/
Kind of like Ruby... with pipes. Elixir has them, but this reminds me more of Ruby.