I really appreciate how this article explains why certain design patterns became a thing. Usually, it was to address some very practical problem or limitation. And yet, a lot of younger programmers treat these patterns like a religious dogma that they must follow and don't question if they really make sense for the specific situation they are in.
The main motivation for the concept of design patterns is to give unique names to existing programming patterns, so that when someone says “Strategy pattern”, everyone knows what pattern that refers to, and vice versa that the same pattern isn’t called ten different things. It’s to make communication about program design efficient, by defining a vocabulary of patterns that tend to reoccur, and a common structure for describing them. Not all patterns were successful in that way, but it’s the main idea behind design patterns.
The question of when using a given pattern is appropriate is orthogonal to that. The fact that a named pattern has been defined doesn’t imply a recommendation to use it across the board. It depends on the context and on design forces, and those change with time and circumstances. Anti-patterns are patterns as well.
It’s a pity that the idea of design patterns ended up (after the pattern language craze faded) being almost exclusively associated with the specific patterns named and described in the GoF book.
> The main motivation for the concept of design patterns is to give unique names to existing programming patterns
No, naming them is not the main purpose; it's preserving and transmitting knowledge of what they are and what they are useful for, so that people aren't forced to rediscover solutions to the same problems over and over again. [0] Naming is obviously important for that purpose, but it isn't the main goal, just a means of supporting it.
[0] If this sounds like a subset of the purpose of a reusable code library, it is. That's why, in languages with abstraction facilities sufficient to make a generic implementation of a pattern reusable, well-documented code libraries (the documentation covering the “where and when to use this” piece) replace documents that pair the explanation with implementation recipes one can adapt to one's particular use.
> It’s a pity that the idea of design patterns ended up (after the pattern language craze faded) being almost exclusively associated with the specific patterns named and described in the GoF book
That book is the closest we came to establishing a common language. I remember brushing up on the names of design patterns whenever I had an interview. Ultimately, though, it didn't yield any benefit that the industry is missing now.
Like you said, the fundamental idea behind the book was that consciously naming, cataloging, and studying design patterns would improve communication among programmers. There was also an idea that studying design patterns would give beginning programmers a richer repertoire of programming techniques faster than if they had to figure them out themselves.
Looking back with decades of hindsight, my belief is that awareness and intentional use of design patterns made no difference whatsoever. Some names stuck, and would have anyway. Others didn't, and years of status as official "design patterns" in a book widely studied across the industry couldn't make them. The younger programmers I work with who had no exposure to the GoF book, and for whom "design patterns" is something that dusty old farts used to talk about, use patterns like Flyweight, Proxy, Command, Facade, Strategy, Chain of Responsibility, Decorator, etc. without knowing or needing a name for them, and they communicate amongst themselves just as efficiently as my generation did at the height of the design pattern craze.
In the final analysis, I have never looked at the less experienced programmers around me and thought, "This situation would go faster and smoother if they had studied design patterns." The generation that learned to program after design patterns had faded as an idea learned just as quickly and communicates just as well as the generation that studied them assiduously as junior programmers like I did.
> Like you said, the fundamental idea behind the book was that consciously naming, cataloging, and studying design patterns would improve communication among programmers.
> The younger programmers I work with who had no exposure to the GoF book.....and they communicate amongst themselves just as efficiently as my generation
> The generation that learned to program after design patterns had faded as an idea learned just as quickly and communicates just as well as the generation that studied them assiduously as junior programmers like I did.
I've never read GOF so I don't know if they emphasize communication, but I have read and studied many other programming pattern books and communication is low on the list of reasons to learn them in my opinion. Their only purpose for me is to organize code in a way that has been proven to "scale" along with a codebase so that you don't end up with a plate of spaghetti.
Probably also a problem that exists because of how programmers are taught. Using Java and being presented with the patterns as solutions to what Java does.
I don't however appreciate that the author doesn't actually know about Java or C++ well enough such that they are spewing falsehoods about Java or C++. Saying things like "There’s no clean way to say 'this is private to this file' (in C++)" is just bonkers. The author is well intentioned, but coming up with the wrong reason is worse than not offering any reason.
> I really appreciate how this article explains why certain design patterns became a thing. Usually, it was to address some very practical problem or limitation.
I don't agree at all. I feel that those who criticise design patterns as solutions to practical problems are completely missing the point of design patterns, and the whole reason they have the name they have: design patterns. I'll explain.
Design patterns are solutions to common design problems, but "problems" here isn't the kind of problem you might think. It's "problems" in the sense that there are requirements to be met. The State pattern is a way to implement a state machine, and you still have a state machine and a State pattern even if your state classes don't handle state transitions themselves.
More to the point, look at singletons. It's irrelevant if they are implemented with a class or a closure or a module. What makes a singleton a singleton is the assurance that there will be a single instance of an object. Does an implementation that allows multiple instances, or doesn't return the same instance, qualify as a singleton? Obviously not.
Design patterns are recurring solutions to recurring problems. They are so recurring that they get their name and represent a high level concept. A message queue is a design pattern. An exception is a design pattern. Lazy loading is a design pattern. Retries and exponential backoffs are design patterns. Etc. Is anyone arguing that Python has none of it?
So many people trying to criticise the GoF but they don't even bother to be informed or form an educated opinion.
I see you’ve been downvoted.

Design patterns aren’t solutions to common design problems. They’re after-the-fact descriptions of solutions for design problems. That’s the issue. That’s the beef. Everyone thought of that book as a cook book instead of a naturalist’s musings on an ecosystem, which is what they are.
Those of us who designed before people discovered that stupid book were constantly asked what the differences were between this pattern and that. And the book just isn’t thick enough, and Erich Gamma was just trying to complete a thesis, not write a book, so despite having at least 8 years in industry before completing his masters he cocked it up. And ruined Java in the process.
We had a contemporary of Vlissides teach a class at my last company and he crystallized all of my suspicions about the GoF book and added a whole lot more.
My advice for at least fifteen years is, if you think you want to read GoF, read Refactoring instead. If you’ve read Refactoring and still want to read GoF, read Refactoring a second time because it didn’t all sink in.
Refactoring is ten times the value of GoF for teaching you how to do this trade and how to break up architectural brambles.
I also completely disagree with the builder description.
It sounds like they've not had to use it for any meaningful work, and basically described a constructor. Yeah, named parameters are great, and I miss them when I don't have them, but if you think a builder only works like this
Object.setX().setY().build()
Then it tells me that you haven't built anything meaningful. It's a way of building a state and running an operation at the end in a contained manner. If your build method doesn't run some sort of validation then it's probably a waste of time, might as well just call setters, return this and call it a day if you want to be verbose and chain commands.
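For what it's worth, a minimal sketch of that in Python (names invented for illustration), with validation living in build() rather than in the setters:

    class RequestBuilder:
        """Accumulates state; build() is where validation and construction happen."""
        def __init__(self):
            self._host = None
            self._port = None
            self._timeout = 30

        def host(self, host):
            self._host = host
            return self

        def port(self, port):
            self._port = port
            return self

        def timeout(self, seconds):
            self._timeout = seconds
            return self

        def build(self):
            # Validation over the complete state, not on every setter call.
            if not self._host:
                raise ValueError("host is required")
            if self._port is None or not (0 < self._port < 65536):
                raise ValueError("port must be in 1-65535")
            return ("connection", self._host, self._port, self._timeout)  # stand-in for the real object

    conn = RequestBuilder().host("example.com").port(443).build()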
I mean, people usually call those 'feature' when they're built-in. I would never call 'lazy evaluation' in Haskell a design pattern, because it's part of the language.
If I have to implement something similar myself in C++ however, I'll use a niche design pattern.
I am an ex-Java developer. Enterprise Fizz Buzz is highly entertaining. That stupid masters thesis pretending to be a design book landed right at an inflection point and ruined half a generation of developers.
What isn’t entertaining is using OpenTelemetry, which takes me right back to Java for over-engineering. Moving to OTEL from StatsD cost us about 3% CPU per core, which on 32 core machines is an entire CPU lost to telemetry. Or more accurately, an entire second CPU lost to telemetry. That is not right.
Prometheus doesn’t have these problems. And isn’t trying to fix quite as many problems I’ve never had.
I don't agree about the builder pattern. For basic objects, yes, it's a bit silly.
But the ACTUAL value of the builder pattern is when you want to variadically construct an object. Create the base object, then loop or otherwise control flow over other state to optionally add stuff to the base object.
Then, additionally, the final "build" call can run validations over the state of the complete object. This is useful in cases where an intermediate state could be invalid but a subsequent update will bring it to a valid state. So you don't want to validate on every update.
I've used the builder pattern in Python when I wanted the mutable and immutable versions of a class to be different types. You do a bunch of construction on the mutable version, then call "freeze", which uses the final data to construct the "immutable" class.
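A rough sketch of that mutable-then-freeze idea (hypothetical names), with a frozen dataclass as the immutable side:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Config:
        name: str
        retries: int

    class ConfigDraft:
        """Mutable accumulator; freeze() produces the immutable type."""
        def __init__(self):
            self.name = None
            self.retries = 0

        def freeze(self) -> Config:
            if not self.name:
                raise ValueError("name must be set before freezing")
            return Config(name=self.name, retries=self.retries)

    draft = ConfigDraft()
    draft.name = "db"
    draft.retries = 3
    config = draft.freeze()   # immutable from here on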
Builders have some neat properties like partial evaluation, which becomes especially neat when you use stateless builders that return new instances. They can also be subclassed, allowing not only behavior that can be overridden at individual method granularity, but able to build a different subclass.
Obviously don't reach for a builder if you don't have these use cases though.
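A sketch of the stateless flavor (illustrative names): every call returns a new builder, so a partially-built builder can be shared and reused safely:

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class QueryBuilder:
        table: str = ""
        filters: tuple = ()

        def for_table(self, table):
            return replace(self, table=table)

        def where(self, clause):
            return replace(self, filters=self.filters + (clause,))

        def build(self):
            return f"SELECT * FROM {self.table} WHERE {' AND '.join(self.filters) or '1=1'}"

    base = QueryBuilder().for_table("users")       # partially evaluated, reusable
    active = base.where("active = 1").build()
    admins = base.where("role = 'admin'").build()  # base is unaffected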
On a closer read, TFA shows strong evidence of being AI-generated, at least in parts. Overall it's just super padded and not especially insightful, and it has this quirky writing style that seems... rather familiar. But I especially want to complain about:
> Okay, maybe you want to delay creating the object until it’s actually needed — lazy initialization. Still no need for Singleton patterns.
> Use a simple function with a closure and an internal variable to store the instance:
The given example does not actually defer instantiation, which would be clear to anyone who actually tried testing the code before publishing it (for example, by providing a definition for the class being instantiated and `print`ing a message from its `__init__`) or just understands Python well enough.
But also, using closures in this way actually is an attempt to implement the pattern. It just doesn't work very well, since... well, it trivially allows client code to end up with multiple separate instances. In fact, it actually expects you to create distinct ordinary instances in order to call the "setter" (instead of supplying new "construction" arguments).
So actually it's effectively useless, and will just complicate the client code for no reason.
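For comparison, a closure that actually defers creation would have to construct the instance inside the accessor, something like this sketch; it still has the problems described above, since nothing stops client code from instantiating the class directly:

    def make_get_instance(cls):
        instance = None
        def get_instance(*args, **kwargs):
            nonlocal instance
            if instance is None:
                instance = cls(*args, **kwargs)   # created on first call, not at import time
            return instance
        return get_instance

    class Expensive:
        def __init__(self):
            print("constructed once")

    get_expensive = make_get_instance(Expensive)
    assert get_expensive() is get_expensive()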
I don't think any of the examples in the article contradict the Zen of Python. Even if there's one simplest and clearest way to do it in Python, there's nothing stopping people from using a more complicated solution that they got used to while working in a different language. They might not know to look for a simpler way, because they're used to working in a language where their way is the simplest.
> than in any other programming language
After reading the article, I couldn't believe anyone designs their systems like that. His "solutions" seemed to be the obvious way to do things.
The actual text:

$ python -c 'import this' | grep way
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
There are many layers to this, but the most important thing to point out is that having only one obvious way is just a preference (or ideal). In practice, any deliberate attempt to prevent something logical from working is counter-productive, and there is really no way to control what other people think is or isn't "obvious". And we all sometimes just expect things to work very differently than they actually do, even in ways that might seem bizarre in retrospect. We can't all "be Dutch" all the time.
But let me dig into just one more layer. Pay attention to the hyphens used to simulate em-dashes, and how they're spaced, versus what you might think of as "obvious" ways to use them. I'm assured that this is a deliberate joke. And of course, now that we have reasonably widespread Unicode support (even in terminals), surely using an actual emdash character is the "obvious" way. Or is it? People still have reasons for clinging to ASCII in places where it suffices.

Then consider that this was written in 2004. What was your environment like at that point? How old was Unicode at that point? What other options did you have (and which ones did you have to worry about) for representing non-ASCII characters? (You can say that all those "code pages" and such were all really Unicode encodings, but how long did it take until people actually thought of them that way?) On the other hand, Python had a real `unicode` type since 2.0, released in 2001. But who do you know who used it?

On yet another hand, an emdash in a terminal will typically only be one column wide (just as an 'm' character is), and barely visually distinct from U+002D HYPHEN-MINUS. (And hackers will freely identify "dash" with this character, while Unicode recognizes 25 characters as dashes: https://www.compart.com/en/unicode/category/Pd) Reasonable people can disagree on exactly when it should have become sensible to use actual emdashes, or even whether it is now. Or on whether proper typography is valuable here anyway.
If I had but one design pattern I would just LOVE to see disappear from Python, it's the need for super(). Don't get me wrong, super() is a clever piece of engineering, but if your code actually needs what it's useful for (C3 linearization, MRO, etc), then you've made things too complicated. I deplore the proliferation of libraries that have embraced the seductive, but ultimately deceptive ways of the mixin, because they saw all the big boys reaching for it. The devil gave multiple inheritance a cooler name, some new outfits, and sunglasses to confuse the Pythonistas and they embraced it with open arms.
Refactor to favor composition over inheritance. But if you really must inherit, prefer single over multiple, and shallow over deep. Eventually your code will need super() less and less, and it'll become pointless to use it over the more explicit mechanism, which incidentally makes everything cognitively lighter.
> Isn't super() also commonly used in languages that have only single inheritance?

If the language is limited to only single and shallow inheritance, then super() becomes a syntactic convenience that saves everyone the burden of spelling out the inheriting class. But in Python, even if your code emulates these constraints, you lose in clarity from using super() because someone reading your source has to wonder if or why it was specifically needed, since its main purpose is to resolve the kinds of conflicts that arise in complex inheritance scenarios (diamond, problematic cycles, and such). So, to need it is to make your code complicated. To not need it while using it, is to lose in clarity.
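Concretely, the "more explicit mechanism" is just naming the base class; a sketch for the single, shallow case:

    class Base:
        def __init__(self, name):
            self.name = name

    class Child(Base):
        def __init__(self, name, extra):
            # Explicit: no MRO machinery involved, the reader sees exactly which __init__ runs.
            Base.__init__(self, name)
            self.extra = extra

        # Versus the super() spelling, equivalent here only because inheritance
        # is single and shallow:
        #
        # def __init__(self, name, extra):
        #     super().__init__(name)
        #     self.extra = extra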
The programmers that insist on using type hints in Python are usually the ones that make these mistakes.
I think the main reason that these patterns do not make sense is because python is a dynamic language.
If you turn off the part of your brain that thinks in types you realize that you can solve most of these in plain functions and dicts.
Using default args as a replacement for the builder pattern is just ridiculous. If you want to encode rules for creating data, that screams schema validation, not builder pattern.
Python type hints are hugely valuable, both as a means of correctness checking and as a means of documentation. It strikes me as incredibly shortsighted to say you can forget about types just because it’s a dynamic language. The types are absolutely still there and need to be thought about. They just aren’t defined or used in terms of allocation and management of memory.
Usually with OOP several builders are composed together to express the creation of some data. These builders have functions with types, which define the rules for the creation of the objects.
My point is that the CarBuilder is not a real type that relates to the business, but something that we had to create to encode some behaviour/rules.
Some function that validates that a dict is a valid car is much more explicit than lots of different builder classes, in my opinion.
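i.e. something in this spirit (hypothetical fields):

    def validate_car(data: dict) -> dict:
        """Return the dict if it describes a valid car, otherwise raise."""
        required = {"make", "model", "year"}
        missing = required - data.keys()
        if missing:
            raise ValueError(f"missing fields: {sorted(missing)}")
        if not isinstance(data["year"], int) or data["year"] < 1886:
            raise ValueError("year must be an integer >= 1886")
        return data

    car = validate_car({"make": "Toyota", "model": "Corolla", "year": 2020})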
I have observed these "design patterns shoehorned into Python" so many times ... Great post. When you see these things done in a code base, you know you've got people working on it who would rather be writing Java, or maybe people who don't have a feel for Python as a language.
The first thing I looked up in the article was "singleton", as a sanity check on whether the article is any good. And yes, it shows a module-level binding as the alternative, exactly what I expected, because I looked into this in the past, when someone implemented an API client singleton as an irrelevant early optimization of something that was never a bottleneck.
Articles like this are helpful in spreading the awareness, that one should not hold a Python like one holds a Java.
Best practices in software engineering seem to usually pertain to a particular language or set of languages. I've also noticed that authors usually don't notice that this is the case.
In fact, I've pissed off some people in interviews for holding this view. We aren't really that empirical as an industry about best practices.
Module-level initialization has one huge problem in Python, though.
That means that as soon as you import a module, initialization happens. Ad infinitum, and you get 0.5s or more import times for libraries like sqlalchemy, requests...
> Ad infinitum

The result of module import is cached (which is also what makes it valid to use a module as a singleton); you do not pay this price repeatedly. Imports can also be deferred; `import` is an ordinary statement in Python which takes effect at runtime rather than compile time.
Modules that are slow to import are usually slow because of speculatively importing a large tree of sub-modules. Otherwise it's because there's actual work being done in the top-level code, which is generally unavoidable.
(Requests is "only" around a .1s import on my 11-year-old hardware. But yes, that is still pretty big; several times as long as the Python interpreter plus the default modules imported automatically at startup.)
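A deferred import is just an import statement placed where it's first needed, for example (using requests purely as an illustration):

    def fetch(url):
        import requests  # resolved on first call; later calls hit the sys.modules cache
        return requests.get(url, timeout=10)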
yep, the legacy codebase I maintain does a lot of this kind of stuff and has made it difficult to write unit tests in some cases due to all the code that runs at import and all the state we end up with
I know, I'm just complaining about the mountain of code that does this at my company. And there is no fixing it using the article's approach or any other for that matter due to the sheer scale of the abuse.
I've seen a person using Singleton classes, Factory classes and other Java patterns in Perl. The app worked well but the code was probably 2x larger than it needed to be.
The singleton one is one I've attempted, and no, it doesn't work well in Python. Early in my career I worked on a number of projects where the architects had used a singleton pattern for a number of things. The pattern sort of stuck in my head, but this was in C# and I've mostly worked in Python ever since. As the article points out, it's designed for languages like Java and C++ (and C#).
In my opinion the singleton pattern does not, however, make Python code harder to test. In Python it's actually extremely handy, because it's incredibly easy to mock out a singleton in your test cases.
How does Python do mocking if it doesn't use singletons?
Or do people just not do unit testing in Python using spock-like technology? And I do use the word technology because Spock is that much better than just a bunch of test scripts
You can overwrite pretty much anything in Python at runtime, and the mock tooling in the standard library can help you with that, if you can't straight up just override a method or object.
So with a Singleton what you can do at least in Java land is have your service invoke the method to get the Singleton semi-global object.
So you can mock that method invocation to return a different object. So you basically have the local object to play with in your test and won't be affecting any actual real global state
But basically the article was just saying to use global variables. Does Python have a means of intercepting the value reads and assignments for that global variable within the local scope of the testing code? Or is it hardwired, like a global variable presumably is?
I'm tired and I'm not sure I understood your question correctly, sorry if it doesn't address your point:
In the Python test library pytest, you have something called 'monkeypatch' that allows you to intercept calls to specific functions or classes and set the response yourself (I mostly use it to mock API responses tbh, but it can do a lot more, and really complex operations). Monkeypatch only operates in the scope of the function or file it's written in (I think. I only remember using it in unit test functions).
The answer if I understand correctly is if you want to use testing frameworks in Python, you should probably not be using global variables and you should probably actually be using the Singleton pattern.
Python test frameworks generally do mocking by monkey-patching, and they take care of it for you. This does not generally require global state, either in the form of module-level variables or singletons. It works by doing things like replacing the attributes of an existing class or module object. (Python takes "everything is an object" seriously, so a module is an object whose attributes are the functions, classes and other global variables of the corresponding file's code, roughly speaking. It is not simply reflected as such through a reflection API, like in Java; it is directly represented with an object. This is the sort of benefit dynamic typing gets you, along with the object-qua-dictionary-of-attributes model.)
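For example, with the standard library's unittest.mock you can temporarily replace an attribute on a real module object; a self-contained sketch:

    import os
    from unittest import mock

    def cwd_loudly():
        return f"cwd is {os.getcwd()}"

    def test_cwd_loudly():
        # Replace the attribute on the module object itself; no singleton or DI needed.
        with mock.patch.object(os, "getcwd", return_value="/fake/dir"):
            assert cwd_loudly() == "cwd is /fake/dir"

    test_cwd_loudly()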
The value of the visitor pattern is that it lets you emulate tagged unions in languages that don't have them (e.g., Java 16 and earlier). Of course, Python has no need of this because you can check the type of anything at runtime and the optional type annotations also support union types.
There is, however, a certain elegance to multiple dispatch, which Python doesn't natively support. Visitors are indeed a common approach to emulating it. Doing the runtime checks is external rather than internal polymorphism; there are reasons for either, and aesthetics count.
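The standard library does give you single dispatch (on the first argument only, so not full multiple dispatch); a sketch of that external-dispatch style:

    from dataclasses import dataclass
    from functools import singledispatch

    @dataclass
    class Circle:
        r: float

    @dataclass
    class Square:
        side: float

    @singledispatch
    def area(shape):
        raise TypeError(f"no area rule for {type(shape).__name__}")

    @area.register
    def _(shape: Circle):
        return 3.14159 * shape.r ** 2

    @area.register
    def _(shape: Square):
        return shape.side ** 2

    print(area(Circle(1.0)), area(Square(2.0)))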
I remember the fad for dependency injection frameworks in ruby, and the eventual similar pushback pointing out you could just use the language features for most of it
When I first heard the term "dependency injection", I spent quite a bit of time trying to understand it. And then when I was pretty sure I had grasped it, I felt even more confused. "Isn't that just... passing an argument to a function instead of having the function access the information from global state? ... Isn't that normally what the function should do?"
there are at least two separate use cases I can think of;
1. there are certain dependencies that it was natural to access as global state - e.g. there is typically one stdout in a program and most code just prints to it
2. a class often hard codes constructors for its internal state, e.g. if you have a Graph class it would likely have associated Edge and Node classes that it calls internally to create new edges and nodes.
the big issue with both those is testing; if you want to mock the behaviour of the print statement, or you want a graph to hold nodes that track their own creation, or whatever, you have no way of overriding the internal constructors your class uses. hence dependency injection, the idea that e.g. a graph class would be defined with a node interface and you would pass it a concrete class when constructing it (some languages have type parameters for this, but even if they do you might not think to make something as simple as the node class used internally by the graph a type parameter, and more importantly you might want to set it at run time rather than compile time).
the issue was not that ruby didn't need dependency injection the concept, it's that it didn't need the sort of dependency injection frameworks that more rigid languages like java needed, you could use ruby's built in features to do the same thing.
> the idea that e.g. a graph class would be defined with a node interface and you would pass it a concrete class when constructing it
Right, but that's still just a generic instance of "plan ahead to use a parameter (possibly with a default value) instead of grabbing something hard-coded". I don't see a proposed fix for a class that was already designed to expect its own inner node implementation instead of an abstraction, except to rewrite it. Which is why it confused me, because the rewrite would just involve adding a parameter, so that you could pass an argument to it, and making the implementation use that parameter. I suppose in statically typed languages you might have to actually define an interface type for that parameter, whatever. None of that explained to me why the Java guys were talking about needing a "framework" to accomplish this, that apparently somehow involved a ton of XML.
> the issue was not that ruby didn't need dependency injection the concept, it's that it didn't need the sort of dependency injection frameworks that more rigid languages like java needed, you could use ruby's built in features to do the same thing.
Yes, it's much the same in Python. (And yes, I have for example used `print` as an argument to a higher-order function before.)
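To make the graph example concrete, a sketch (not from any particular framework): the injected dependency is just a constructor parameter with a sensible default:

    class Node:
        def __init__(self, value):
            self.value = value

    class Graph:
        def __init__(self, node_factory=Node):
            # The "injected dependency" is just a callable parameter with a default.
            self._node_factory = node_factory
            self.nodes = []

        def add_node(self, value):
            node = self._node_factory(value)
            self.nodes.append(node)
            return node

    # In tests, pass a different factory, e.g. one that records creations:
    created = []
    def tracking_node(value):
        created.append(value)
        return Node(value)

    g = Graph(node_factory=tracking_node)
    g.add_node(42)
    assert created == [42]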
Great post. I don't write much Python these days, but I distinctly remember things being suspiciously easy .. to the point where I started to wonder why things aren't so complicated.
Singleton is the worst example of design pattern, not sure why these kinds of posts always like to mention it. Singleton is just a hack for avoiding OOP with OOP languages. Obviously python allows non OOP code, so not surprised singleton is useless there.
Great post. Now let's do C# because if I see a repository pattern doing nothing but calling Entity Framework under the hood again I'm going to rip the planet in half
> Great post. Now let's do C# because if I see a repository pattern doing nothing but calling Entity Framework under the hood again I'm going to rip the planet in half
Your comment shows a hefty amount of ignorance. Repositories wrap Entity Framework because Entity Framework's DbContext & Co are notoriously complicated to mock and stub.
Once you wrap EF stuff with a repository that implements an interface, that problem vanishes and all your code suddenly is unit testable.
We don't really have this problem in .NET 8, we mock stuff just fine using an in-memory database provider.
But I admit my tone missed the mark. It may have been much harder to do in the past, or maybe I'm missing some nuance.
But also at this point why not just have your DbContext directly implement an interface that you can mock? Surely that must be more straightforward than adding an entire abstraction layer, that you have to extend with each new usage scenario, and without sacrificing implicit transactionality.
public class Repository<TEntity> : IRepository<TEntity> where TEntity : class
{
    private readonly DbContext _context;

    public Repository(DbContext context) => _context = context;

    public async Task Add(TEntity entity)
    {
        _context.Add(entity);
        await _context.SaveChangesAsync();
    }
}
Builder patterns are seriously useful for high-complexity state construction with the ability to encode rules to prevent degenerate state.
A good example from my experience might be connecting to a Cassandra cluster or other type of database that can have extremely complex distributed settings and behaviors: timeouts, consistency levels, failure modes, retry behavior, seed connector sets.
Javaland definitely had a problem with overuse of patterns, but the patterns are legitimate tools even outside of OOP.
I haven't done much research into testing frameworks in other languages, but the spock testing framework in groovy/javaland is a serious piece of good software engineering that needs singletons and other "non hard coded/not global" approaches to work well.
Spring gets a ton of hate outside of Java, and I get it, they tried to subsume every aspect of programming and APIs (especially web) into their framework, but the core Spring framework solved complex object graph construction in a very effective way.
Oh you hate "objects" but have thousand line struct graph construction code?
It's kind of sad that Groovy never took off. It offered all the good parts of Java, plus a lot of the good parts of Python, Ruby, and other languages, on the solid foundation of the JVM for high-speed execution.
But it's effectively dead. Kind of like Cassandra is effectively dead. The tech treadmill will eventually leave you behind.
In my experience everyone will hate on Spring, showing how much easier other frameworks are using tiny unrealistic examples, until they hit a really hard architectural challenge (imagine reimplementing @Transactional in pure Java) and that's where Spring shines.
Yeah, it's sad, I like Groovy a lot. It got relegated to a second-class citizen role on Jenkins, for the most part.
I'd say Kotlin took most good parts of Groovy syntax and put it into a decent type system, then Clojure peeled off the folks who still preferred a more dynamic language. Languages can't all live forever, otherwise there'd be no room for new growth.
I was writing a Python thing where the class was going to have like at least 20 parameters to configure it. The builder pattern was kind of feeling like a good idea to keep it cleaner for the user. But it is surprising to see in the Python world. It felt like a mess of default values though for the user to handle.
In Java (or even Go) this pattern is required to enforce type safety. In Python it seems that they ignore the typing part and just pass a bunch of dicts. Looks much cleaner, even if not entirely typesafe.
Rarely have I seen a class that truly needs 20 parameters, that's most often a design flaw. There might be cases where this isn't true, but those are edge cases, so it's probably also fine to apply a special pattern, such as the builder pattern.
> Rarely have I seen a class that truly needs 20 parameters
If a class seems like it needs that many parameters, it is very common that one or both of these is true:
1. It is doing too much, or
2. There are things-that-should-be-their-own-classes hiding in groups of the parameters.
#2 is kind of a subset of #1, but the "doing too much" tends to be concentrated in validating relations between parameters rather than what happens after the object is constructed.
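A sketch of what #2 looks like in practice (made-up example): groups of related parameters become their own small classes:

    from dataclasses import dataclass

    # Before: connect(host, port, use_tls, ca_file, retries, backoff, ...)

    @dataclass
    class Endpoint:
        host: str
        port: int

    @dataclass
    class TlsConfig:
        use_tls: bool = False
        ca_file: str = ""

    @dataclass
    class RetryPolicy:
        retries: int = 3
        backoff: float = 0.5

    def connect(endpoint: Endpoint, tls: TlsConfig, retry: RetryPolicy):
        ...  # each group of related parameters now travels (and validates) together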
I like method chaining (not the same as builder patterns) for code that needs to run chained operations on objects. It is not exactly the same because each chained operation usually does more than just setting a variable.
E.g.
signal = (Sine(freq=440.0, amp=1.0)
          .rectify()
          .gain(2.0)
          .center()
          .clip(0.5))
Each of the methods may return a Signal object that can get processed by the next function, allowing you to chain them together to get complex results quickly. The ergonomics on this are stellar and because these methods can be implemented as generators, each step can yield values lazily instead of building full intermediate arrays.
That way, you get the clean chaining style and efficient, streaming computation.
All good ones. I'll add this because A: It's common in Python, and B: There are suitable alternatives in the standard library:
Conflating key/value lookups (dicts) with structured data (classes). They are both useful tools, but are for different purposes. Many Python programmers (myself many years ago included!) misused dicts when they should have been using dataclasses.
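A trivial sketch of the distinction:

    from dataclasses import dataclass

    # Key/value lookup: the keys themselves are data, discovered at runtime.
    word_counts = {"spam": 3, "eggs": 1}

    # Structured record: the fields are known up front, so you get attribute access,
    # type hints for checkers, autocomplete, and a mistyped field name in the
    # constructor is a TypeError instead of a silent extra key.
    @dataclass
    class User:
        name: str
        email: str
        active: bool = True

    u = User(name="Ada", email="ada@example.com")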
Everything in the codebase I maintain at my job is an arbitrary dict and there is no type information anywhere. It wasn't even written that long ago (dataclasses were a thing long before this codebase was written).
There's actually a place where the original authors subclassed dict, and dynamically generate attributes of a "data class" such that it can be used with dotted attribute access syntax or dict access syntax but the `__slots__` attribute of these classes is also generated dynamically so you don't have any auto-complete when trying the dotted attribute access. It's genuinely insane lol.
> a lot of younger programmers treat these patterns like a religious dogma
First you learn what the pattern is. Then you learn when to use it. Then you learn when not to use it.
The gap between the first and third step can be many years.
> The gap between the first and third step can be many years.
I admire your optimism!
The explanations are great! The condescension, not so much.
> Simple: we just use the language like it was meant to be used.
> Use Default Arguments Like a Normal Human
etc
Yes, programmer is super human
> "There’s no clean way to say 'this is private to this file' (in C++)"

I had that thought too, and thought I must have misunderstood something. I generally assume I’m the dummy. :)
Couldn't you build up a dictionary of keyword arguments instead and do all the validation in the __init__ method? E.g.
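Something like this, presumably (names made up):

    class Server:
        def __init__(self, host, port, tls=False):
            # All validation lives here, on the complete set of arguments.
            if tls and port == 80:
                raise ValueError("TLS on port 80 is almost certainly a mistake")
            self.host, self.port, self.tls = host, port, tls

    kwargs = {"host": "example.com"}
    kwargs["port"] = 8443          # accumulate conditionally, in loops, etc.
    if kwargs["port"] != 80:
        kwargs["tls"] = True
    server = Server(**kwargs)      # validation happens once, at the end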
> Create the base object, then loop or otherwise control flow over other state to optionally add stuff to the base object.
That’s what list comprehensions are for.
> Then, additionally, the final "build" call can run validations over the set of the complete object.
The constructor function can do that too.
The constructor can not do it because the constructor does not have all the data. This is lazy evaluation.
Could you not just use dicts and some schema validation logic for this?
Peter Norvig has a well-known piece that goes into more depth on why the GoF-style patterns don't make much sense in high-level languages:
https://www.norvig.com/design-patterns/
Those slides would be a lot more useful with a transcript of the talk that went along with them. Or a video of it. Wonder if anything like that still exists.
Brandon Rhodes has a series of talks on this topic. Here's the most up to date one:
Classic Design Patterns: Where Are They Now - Brandon Rhodes (https://www.youtube.com/watch?v=pGq7Cr2ekVM)
The Zen of Python: there should be one obvious way to do things.
Python in practice: there are more ways of doing it than in any other programming language.
Oh Python, how I love and hate you.
People misunderstand the target audience and code base for the zen of python
Who's it for, then?
I've never seen anybody do that... In Python you can use a module as a singleton (mentioned in the article). Or provide some data like:
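For instance, a cached accessor along these lines (sketch):

    import functools

    class Whatever:
        """Stand-in for whatever expensive resource you want exactly one of."""
        def __init__(self):
            self.connection = ...  # expensive setup here

    @functools.lru_cache(maxsize=None)
    def get_whatever() -> Whatever:
        return Whatever()  # constructed on first call, same instance afterwards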
And use `get_whatever` as your interface to get the resource.

You can implement a singleton class that works properly quite easily. The advantage is that most people are familiar with singleton as a pattern, and it is a self contained chunk of code. The cache solution you provided works, but its functionality is not obvious and it feels very hacky to me. Somebody's going to initialize Whatever in another way down the line without using the cached function...
Another technique I've seen is to hide or overwrite the name of the class, such that client code only knows about the instance (and is expected to use it directly rather than doing any kind of access or instantiation). Of course, this doesn't give you lazy initialization unless you lazily import the module, at which point you would definitely be better off just using the module object directly.
There's also code out there which replaces the cached module object with a class instance in top-level code! This is especially used to work around the prior lack of module-level `__getattr__` (and `__dir__`), added in 3.7 (https://peps.python.org/pep-0562/). But you might still need it if for some reason you want to hook into the lower-level `__getattribute__`. And Andrew Moffat's `sh` package still does this (https://github.com/amoffat/sh/blob/develop/sh.py#L3635) even though it now only declares support for 3.8 and above. (Perhaps there was simply no clear reason to change it.)
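For reference, the PEP 562 hook is just a function defined at a module's top level; a sketch (hypothetical module):

    # mypackage/config.py  (hypothetical module)
    _settings = None

    def _load_settings():
        return {"debug": False}   # stand-in for reading a file, env vars, etc.

    def __getattr__(name):
        # Called only for attributes not found by normal lookup, so this runs lazily.
        global _settings
        if name == "settings":
            if _settings is None:
                _settings = _load_settings()
            return _settings
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")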
I do this in fast api and then pass get_whatever as a dependency to an endpoint
Alternatively you could make use of the lifespan[0] and its state[1][2].
[0]: https://fastapi.tiangolo.com/advanced/events/#lifespan-funct...
[1]: https://asgi.readthedocs.io/en/latest/specs/lifespan.html#li...
[2]: https://www.starlette.io/lifespan/#lifespan-state
> I've never seen anybody do that...
I feel the blog post is a bunch of poorly thought through strawmen. I was browsing through the singleton example and I was wondering why anyone would use buggy code to implement something it clearly was not designed to implement.
The whole article is quite subpar. I was expecting idiomatic stuff that eliminated the need to implement something, like for example implementing singletons with modules and even getter functions, but there was none of that: just strawmen.
Really disappointing.
The programmers who insist on using type hints in Python are usually the ones who make these mistakes. I think the main reason these patterns don't make sense is that Python is a dynamic language. If you turn off the part of your brain that thinks in types, you realize that you can solve most of these with plain functions and dicts. Using default args as a replacement for the builder pattern is just ridiculous. If you want to encode rules for creating data, that screams schema validation, not builder pattern.
Python type hints are hugely valuable, both as a means of correctness checking and as a form of documentation. It strikes me as incredibly shortsighted to say you can forget about types just because it's a dynamic language. The types are absolutely still there and still need to be thought about. They just aren't defined or used in terms of allocation and management of memory.
> The types are absolutely still there and need thought about
Yes, if they aren't in the code, it just means the programmer has to figure them out and carry them around mentally when reading or writing code.
Usually with OOP, several builders are composed together to express the creation of some data. These builders have functions with types, which define the rules for creating the objects.
My point is that the CarBuilder is not a real type that relates to the business, but something that we had to create to encode some behaviour/rules.
Some function that validates that a dict is a valid car is much more explicit than lots of different builder classes, in my opinion.
I have observed these "design patterns shoehorned into Python" so many times ... Great post. When you see these things done in a code base, you know the people working on it would rather be writing Java, or maybe just don't have a feel for Python as a language.
First thing I looked up in the article was "singleton", as a sanity check of whether the article is any good. And yes, it shows a module-level binding as the alternative, exactly what I expected, because I looked into this in the past when someone implemented an API client as a singleton, a needless early optimization of something that was never a bottleneck.
Articles like this are helpful in spreading the awareness that one should not hold a Python like one holds a Java.
Best practices in software engineering seem to usually pertain to a particular language or set of languages. I've also noticed that authors usually don't notice that this is the case.
In fact, I've pissed off some people in interviews for holding this view. We aren't really that empirical as an industry about best practices.
Module-level initialization has one huge problem in Python, though.
That means that as soon as you import a module, initialization happens. Ad infinitum, and you get 0.5s or more import times for libraries like sqlalchemy, requests...
> Ad infinitum
The result of module import is cached (which is also what makes it valid to use a module as a singleton); you do not pay this price repeatedly. Imports can also be deferred; `import` is an ordinary statement in Python which takes effect at runtime rather than compile time.
Modules that are slow to import are usually slow because of speculatively importing a large tree of sub-modules. Otherwise it's because there's actual work being done in the top-level code, which is generally unavoidable.
(Requests is "only" around a .1s import on my 11-year-old hardware. But yes, that is still pretty big; several times as long as the Python interpreter plus the default modules imported automatically at startup.)
Initialization only happens once, when you import the module for the first time, afaik. Unless you are running multiple Python processes, that is.
yep, the legacy codebase I maintain does a lot of this kind of stuff and has made it difficult to write unit tests in some cases due to all the code that runs at import and all the state we end up with
The article addresses this.
I know, I'm just complaining about the mountain of code that does this at my company. And there is no fixing it using the article's approach or any other for that matter due to the sheer scale of the abuse.
I like the concept of the article but I’m not sure I’ve seen these in the wild.
It's more like "if switching from Java to Python". I've also never seen anyone writing Python do this.
I've seen a person use Singleton classes, Factory classes, and other Java patterns in Perl. The app worked well, but the code was probably 2x larger than it needed to be.
The singleton one is one I've attempted, and no, it doesn't work well in Python. Early in my career I worked on a number of projects where the architects had used a singleton pattern for a number of things. The pattern sort of stuck in my head, but this was in C#, and I've mostly worked in Python ever since. As the article points out, it's designed for languages like Java and C++ (and C#).
In my opinion the singleton pattern does not, however, make Python code harder to test. In Python it's actually extremely handy, because it's incredibly easy to mock out a singleton in your test cases.
How does Python do mocking if it doesn't use singletons?
Or do people just not do unit testing in Python using Spock-like technology? And I do use the word technology, because Spock is that much better than just a bunch of test scripts.
You can overwrite pretty much anything in Python at runtime, and the mock tooling in the standard library can help you with that, if you can't straight up just override a method or object.
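A minimal sketch of that, using only the standard library's `unittest.mock`; the `report` function here is made up:

```python
from unittest import mock
import time

def report():
    # Imagine production code that reaches for global state internally.
    return f"generated at {time.time():.0f}"

def test_report_is_deterministic():
    # Replace time.time just for the duration of the test; no singleton needed.
    with mock.patch("time.time", return_value=1_000_000.0):
        assert report() == "generated at 1000000"
```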
Global vars?
So with a Singleton what you can do at least in Java land is have your service invoke the method to get the Singleton semi-global object.
So you can mock that method invocation to return a different object. So you basically have the local object to play with in your test and won't be affecting any actual real global state
But basically the article was just saying: use global variables. Does Python have a means of intercepting the value reads and assignments for that global variable within the local scope of the testing code? Or is it hardwired, like a global variable presumably is?
I'm tired and I'm not sure I understood your question correctly, sorry if it doesn't address your point:
In pytest (the Python test library), you have something called `monkeypatch` that allows you to intercept calls to specific functions or classes and set the response yourself (I mostly use it to mock API responses, tbh, but it can do a lot more, and really complex operations). Monkeypatch only operates in the scope of the function or file it's written in (I think; I only remember using it in unit test functions).
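Something like this minimal pytest sketch; all the names are invented, and `monkeypatch` is the built-in pytest fixture:

```python
class api:
    @staticmethod
    def get_price(symbol):
        raise RuntimeError("would hit the network")

def fetch_price(symbol):
    return api.get_price(symbol) * 2

def test_fetch_price(monkeypatch):
    # Replace the attribute for this test only; pytest restores it afterwards.
    monkeypatch.setattr(api, "get_price", lambda symbol: 10)
    assert fetch_price("XYZ") == 20
```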
The answer, if I understand correctly, is that if you want to use testing frameworks in Python, you should probably not be using global variables, and you should probably actually be using the Singleton pattern.
Python test frameworks generally do mocking by monkey-patching, and they take care of it for you. This does not generally require global state, either in the form of module-level variables or singletons. It works by doing things like replacing the attributes of an existing class or module object. (Python takes "everything is an object" seriously, so a module is an object whose attributes are the functions, classes and other global variables of the corresponding file's code, roughly speaking. It is not simply reflected as such through a reflection API, like in Java; it is directly represented with an object. This is the sort of benefit dynamic typing gets you, along with the object-qua-dictionary-of-attributes model.)
An example of the Builder pattern in the wild: https://github.com/bikram990/PyScep/blob/8d80bc03368ea8dc6ea...
...especially annoying if it's only used in one continuous chain: https://github.com/bikram990/PyScep/blob/8d80bc03368ea8dc6ea...
So weird that there are plenty of errors in this article and people are full of praise for it...
He didn't mention the worst pattern, the visitor pattern, which has extremely few use cases.
The value of the visitor pattern is that it lets you emulate tagged unions in languages that don't have them (e.g., Java 16 and earlier). Of course, Python has no need of this because you can check the type of anything at runtime and the optional type annotations also support union types.
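A rough sketch of that in Python 3.10+, with made-up shape types standing in for what a visitor would otherwise traverse:

```python
from dataclasses import dataclass

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

Shape = Circle | Square  # a union type, no Visitor interface required

def area(shape: Shape) -> float:
    # Dispatch on the concrete type at runtime (Python 3.10+); a type
    # checker can also verify the annotation against the union.
    match shape:
        case Circle(radius=r):
            return 3.14159 * r * r
        case Square(side=s):
            return s * s
        case _:
            raise TypeError(f"unexpected shape: {shape!r}")

print(area(Circle(radius=2.0)))  # 12.56636
```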
There is, however, a certain elegance to multiple dispatch, which Python doesn't natively support. Visitors are indeed a common approach to emulating it. Doing the runtime checks is external rather than internal polymorphism; there are reasons for either, and aesthetics count.
I remember the fad for dependency injection frameworks in ruby, and the eventual similar pushback pointing out you could just use the language features for most of it
When I first heard the term "dependency injection", I spent quite a bit of time trying to understand it. And then when I was pretty sure I had grasped it, I felt even more confused. "Isn't that just... passing an argument to a function instead of having the function access the information from global state? ... Isn't that normally what the function should do?"
there are at least two separate use cases I can think of;
1. there are certain dependencies that it was natural to access as global state - e.g. there is typically one stdout in a program and most code just prints to it
2. a class often hard codes constructors for its internal state, e.g. if you have a Graph class it would likely have associated Edge and Node classes that it calls internally to create new edges and nodes.
the big issue with both of those is testing; if you want to mock the behaviour of the print statement, or you want a graph to hold nodes that track their own creation, or whatever, you have no way of overriding the internal constructors your class uses. Hence dependency injection: the idea that e.g. a graph class would be defined against a node interface, and you would pass it a concrete class when constructing it (some languages have type parameters for this, but even then you might not think to make something as simple as the node class used internally by the graph a type parameter, and more importantly you might want to set it at run time rather than compile time).
the issue was not that ruby didn't need dependency injection the concept, it's that it didn't need the sort of dependency injection frameworks that more rigid languages like java needed, you could use ruby's built in features to do the same thing.
> the idea that e.g. a graph class would be defined with a node interface and you would pass it a concrete class when constructing it
Right, but that's still just a generic instance of "plan ahead to use a parameter (possibly with a default value) instead of grabbing something hard-coded". I don't see a proposed fix for a class that was already designed to expect its own inner node implementation instead of an abstraction, except to rewrite it. Which is why it confused me, because the rewrite would just involve adding a parameter, so that you could pass an argument to it, and making the implementation use that parameter. I suppose in statically typed languages you might have to actually define an interface type for that parameter, whatever. None of that explained to me why the Java guys were talking about needing a "framework" to accomplish this, that apparently somehow involved a ton of XML.
> the issue was not that ruby didn't need dependency injection the concept, it's that it didn't need the sort of dependency injection frameworks that more rigid languages like java needed, you could use ruby's built in features to do the same thing.
Yes, it's much the same in Python. (And yes, I have for example used `print` as an argument to a higher-order function before.)
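For instance, the graph example from earlier in the thread might just look like this in Python; a sketch with invented names, no framework required:

```python
class Node:
    def __init__(self, value):
        self.value = value

class Graph:
    # The node class is just a constructor parameter with a default:
    # dependency injection without any framework.
    def __init__(self, node_factory=Node):
        self._node_factory = node_factory
        self.nodes = []

    def add_node(self, value):
        node = self._node_factory(value)
        self.nodes.append(node)
        return node

# In a test, inject a node type that tracks its own creation:
class TracingNode:
    created = []
    def __init__(self, value):
        self.value = value
        TracingNode.created.append(self)

g = Graph(node_factory=TracingNode)
g.add_node(1)
assert len(TracingNode.created) == 1
```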
Great post. I don't write much Python these days, but I distinctly remember things being suspiciously easy... to the point where I started to wonder why things weren't more complicated.
Singletons are considered an antipattern in pretty much all PLs. C++, mentioned in the article, is not an exception.
Singleton is the worst example of design pattern, not sure why these kinds of posts always like to mention it. Singleton is just a hack for avoiding OOP with OOP languages. Obviously python allows non OOP code, so not surprised singleton is useless there.
Great post. Now let's do C# because if I see a repository pattern doing nothing but calling Entity Framework under the hood again I'm going to rip the planet in half
> Great post. Now let's do C# because if I see a repository pattern doing nothing but calling Entity Framework under the hood again I'm going to rip the planet in half
Your comment shows a hefty amount of ignorance. Repositories wrap Entity Framework because Entity Framework's DbContext & Co are notoriously complicated to mock and stub.
Once you wrap EF stuff with a repository that implements an interface, that problem vanishes and all your code suddenly is unit testable.
We don't really have this problem in .NET 8, we mock stuff just fine using an in-memory database provider.
But I admit my tone missed the mark. It may have been much harder to do in the past, or maybe I'm missing some nuance.
But also at this point why not just have your DbContext directly implement an interface that you can mock? Surely that must be more straightforward than adding an entire abstraction layer, that you have to extend with each new usage scenario, and without sacrificing implicit transactionality.
> We don't really have this problem in .NET 8, we mock stuff just fine using an in-memory database provider.
No, you don't. At best in-memory databases represent a test double that you can use in integration tests.
If you need to write unit tests, EF leaves you no better option than to add repositories to abstract out everything and anything involving DbContext.
What, you don't like
everywhere? I got a feeling the author has but a vague idea of what Java and C++ are.
Builder patterns are seriously useful for high-complexity state construction, with the ability to encode rules to prevent degenerate state.
A good example from my experience might be connecting to a Cassandra cluster or another type of database that can have extremely complex distributed settings and behaviors: timeouts, consistency levels, failure modes, retry behavior, seed connector sets.
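A hedged sketch of the kind of builder I mean; all names are invented and this is not any real driver's API:

```python
class ClusterConfig:
    def __init__(self, contact_points, timeout, consistency):
        self.contact_points = contact_points
        self.timeout = timeout
        self.consistency = consistency

class ClusterBuilder:
    def __init__(self):
        self._contact_points = []
        self._timeout = 5.0
        self._consistency = "QUORUM"

    def add_contact_point(self, host):
        self._contact_points.append(host)
        return self

    def with_timeout(self, seconds):
        if seconds <= 0:
            raise ValueError("timeout must be positive")
        self._timeout = seconds
        return self

    def build(self):
        # The rules live here, so a degenerate config can never be constructed.
        if not self._contact_points:
            raise ValueError("at least one contact point is required")
        return ClusterConfig(self._contact_points, self._timeout, self._consistency)

config = ClusterBuilder().add_contact_point("10.0.0.1").with_timeout(2.0).build()
```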
Javaland definitely had a problem with overuse of patterns, but the patterns are legitimate tools even outside of OOP.
I haven't done much research into testing frameworks in other languages, but the Spock testing framework in Groovy/Javaland is a serious piece of good software engineering that needs singletons and other "non hard coded/not global" approaches to work well.
Spring gets a ton of hate outside of Java, and I get it: they tried to subsume every aspect of programming, APIs, and especially the web into their framework. But the core Spring framework solved complex object-graph construction in a very effective way.
Oh, you hate "objects" but have thousand-line struct-graph construction code?
It's kind of sad that Groovy never took off. It offered all the good parts of Java plus a lot of the good parts of Python, Ruby, and other languages, with the solid foundation of the JVM for high-speed execution.
But it's effectively dead. Kind of like Cassandra is effectively dead. The tech treadmill will eventually leave you behind.
In my experience everyone will hate on Spring, showing how much easier other frameworks are using tiny unrealistic examples, until they hit a really hard architectural challenge (imagine reimplementing @Transactional in pure Java) and that's where Spring shines.
Yeah, it's sad, I like Groovy a lot. It got relegated to a second-class citizen role on Jenkins, for the most part.
I'd say Kotlin took most good parts of Groovy syntax and put it into a decent type system, then Clojure peeled off the folks who still preferred a more dynamic language. Languages can't all live forever, otherwise there'd be no room for new growth.
I was writing a Python thing where the class was going to have at least 20 parameters to configure it. The builder pattern was kind of feeling like a good idea to keep things cleaner for the user, but it is surprising to see in the Python world. It felt like a mess of default values for the user to handle, though.
A good example of Python sometimes not needing this pattern is Pulumi. Check out the differences between the example code in Java and Python.
https://www.pulumi.com/docs/iac/get-started/kubernetes/revie...
In Java (or even Go) this pattern is required to enforce type safety. In Python it seems that they ignore the typing part and just pass a bunch of dicts. Looks much cleaner, even if not entirely typesafe.
Rarely have I seen a class that truly needs 20 parameters; that's most often a design flaw. There might be cases where this isn't true, but those are edge cases, so it's probably also fine to apply a special pattern there, such as the builder pattern.
> Rarely have I seen a class that truly needs 20 parameters
If a class seems like it needs that many parameters, it is very common that one or both of these is true:
1. It is doing too much, or
2. There are things-that-should-be-their-own-classes hiding in groups of the parameters.
#2 is kind of a subset of #1, but the "doing too much" tends to be concentrated in validating relations between parameters rather than what happens after the object is constructed.
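As a sketch of point 2 (all names made up), groups of related parameters often want to become their own small types:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RetryPolicy:
    attempts: int = 3
    backoff_seconds: float = 0.5

@dataclass
class TlsSettings:
    cert_path: Optional[str] = None
    verify: bool = True

@dataclass
class ClientConfig:
    # Instead of a 20-parameter constructor, related settings get grouped.
    host: str
    port: int = 443
    retry: RetryPolicy = field(default_factory=RetryPolicy)
    tls: TlsSettings = field(default_factory=TlsSettings)

# Still readable at the call site, without a builder:
cfg = ClientConfig("db.example.com", retry=RetryPolicy(attempts=5))
```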
I like method chaining (not the same as the builder pattern) for code that needs to run chained operations on objects. It is not exactly the same because each chained operation usually does more than just setting a variable.
E.g.
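(a sketch with an invented `Signal` class, just to show the shape of the chaining:)

```python
class Signal:
    def __init__(self, samples):
        self._samples = samples

    def __iter__(self):
        return iter(self._samples)

    def scale(self, factor):
        # Each step returns a new Signal wrapping a lazy generator.
        return Signal(x * factor for x in self._samples)

    def clip(self, limit):
        return Signal(min(x, limit) for x in self._samples)

    def offset(self, amount):
        return Signal(x + amount for x in self._samples)

result = Signal([1, 2, 3]).scale(10).clip(25).offset(1)
print(list(result))  # [11, 21, 26]
```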
Each of the methods may return a Signal object that can get processed by the next function, allowing you to chain them together to get complex results quickly. The ergonomics of this are stellar, and because these methods can be implemented as generators, each step can yield values lazily instead of building full intermediate arrays. That way, you get the clean chaining style and efficient, streaming computation.
Just keep in mind that idiomatic Python expects https://en.wikipedia.org/wiki/Command%E2%80%93query_separati... if you return instances to support chaining, please return new instances rather than modifying the original in-place. Take the example set by the builtins; `list.append` etc. return `None` for a reason. cf. https://stackoverflow.com/questions/11205254.
This also greatly increases API discoverability, because I can just type "." and see the IDE pop up me all the options right there, no need for docs.
[dead]
All good ones. I'll add this because A: It's common in Python, and B: There are suitable alternatives in the standard library:
Conflating key/value lookups (dicts) with structured data (classes). They are both useful tools, but for different purposes. Many Python programmers (myself many years ago included!) misused dicts when they should have been using dataclasses.
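A minimal illustration of the difference, with invented fields:

```python
from dataclasses import dataclass

# Key/value lookup: fine for genuinely dynamic keys, wrong for fixed structure.
user = {"name": "Ada", "age": 36, "admin": True}

# Structured data: the fields are named, typed, and visible to tools.
@dataclass
class User:
    name: str
    age: int
    admin: bool = False

user = User(name="Ada", age=36, admin=True)
print(user.name)  # attribute access, auto-complete, and typo checking
```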
Everything in the codebase I maintain at my job is an arbitrary dict and there is no type information anywhere. It wasn't even written that long ago (dataclasses were a thing long before this codebase was written).
There's actually a place where the original authors subclassed dict, and dynamically generate attributes of a "data class" such that it can be used with dotted attribute access syntax or dict access syntax but the `__slots__` attribute of these classes is also generated dynamically so you don't have any auto-complete when trying the dotted attribute access. It's genuinely insane lol.