> If you want to avoid this issue altogether, consider using a source generator library like Mapster. That way, mapping issues can be caught at build time rather than at runtime.
The only winning move is not to play. Mapping libraries, even with source generators, produce lots of bugs and surprising behavior. Just write mappers by hand.
We've been using Mapperly (https://mapperly.riok.app/), after a migration from AutoMapper, in our production application. It's been a good experience, and we quite like the holistic approach of this library.
So far there have been no surprises, and the library warns about potential issues very explicitly; I quite like it.
Of course, if it's just a handful of fields that need mapping, then writing them manually is the way to go, especially if said fields require custom mapping that the library wouldn't facilitate anyway.
Every time I've worked on a project that used AutoMapper, I've hated it. But I'll admit that when you read why it was created, it actually makes sense: https://www.jimmybogard.com/automappers-design-philosophy/
It was meant to enforce a convention. Not to avoid the tedium of writing mapping code by hand (although that is another result).
2025 version: write mapping functions by LLM.
Agree, mapping libraries only make things more complicated and harder to debug.
Auto mappers sincerely need to go away. They work kind of fine initially, but at the first custom field mapping or nested field extraction, you have to invest hours into the mostly complete failures that are these unnecessary DSLs in order to do something that is extremely trivial in basic C#, and often it is impossible to shoehorn the necessary mapping into place at all. Then you have to deal with package upgrades that regularly force you to rewrite custom mapping logic, and to be safe you have to write additional tests just to hand-hold the library. With multi-caret editors and regex there is no need for auto mappers: you can write a mapping once and forget about it.
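For reference, a hand-written mapper is just an ordinary function; a minimal sketch, with hypothetical PersonEntity/PersonDto types:

    using System;

    // Hypothetical types, only to show the shape of a hand-written mapper.
    public record PersonEntity(string FirstName, string LastName, DateTime BornUtc);
    public record PersonDto(string FullName, int Age);

    public static class PersonMapper
    {
        // Custom and derived mappings are plain C# expressions; no DSL,
        // and the compiler flags renamed or missing members at build time.
        public static PersonDto ToDto(PersonEntity e) => new(
            FullName: $"{e.FirstName} {e.LastName}",
            Age: (int)((DateTime.UtcNow - e.BornUtc).TotalDays / 365.25));
    }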
>so preoccupied with whether or not they could, they didn't stop to think if they should
This describes more than half of .NET community packages and patterns. So much stuff is driven by chasing the "oh, that's clever" high, forgetting that clever code is miserable to support and maintain in prod. That's true even when it's your own code, but when it's third-party libs, it's just asking for weekend debugging sessions and all-nighters two months past the initial delivery date. At some point you just get too old for that shit.
All of the caveats basically boil down to "if you need to access the private backing field from anywhere other than the property getter/setter, then be aware it's going to have a funky, non-C#-compliant field name".
In the EF Core and Automapper type of cases, I consider it an anti-pattern that something outside the class is taking a dependency on a private member of the class in the first place, so the compiler is really doing you a favor by hiding away the private backing field more obscurely.
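If something outside the class genuinely must write to the field, as with EF Core hydration, the less fragile route (as far as I know) is an explicitly declared backing field registered with HasField, rather than anything depending on a compiler-generated name. A sketch with a hypothetical Blog entity:

    using System;
    using Microsoft.EntityFrameworkCore;

    public class Blog
    {
        private string _url = "";

        // Validation lives in the setter; EF Core hydrates _url directly.
        public string Url
        {
            get => _url;
            set => _url = !string.IsNullOrWhiteSpace(value)
                ? value
                : throw new ArgumentException("URL required");
        }
    }

    public class AppDbContext : DbContext
    {
        public DbSet<Blog> Blogs => Set<Blog>();

        protected override void OnModelCreating(ModelBuilder modelBuilder) =>
            modelBuilder.Entity<Blog>()
                .Property(b => b.Url)
                .HasField("_url"); // an explicit name, not "<Url>k__BackingField"
    }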
> In the EF Core and Automapper type of cases, I consider it an anti-pattern that something outside the class is taking a dependency on a private member of the class in the first place, so the compiler is really doing you a favor by hiding away the private backing field more obscurely.
It's another variation of the "parse don't validate" dance. Just because you can do model validation in property setters doesn't always mean it is the best place to do model validation. If you are trying to bypass the setter in a DB Model, then you may have data in your database that doesn't validate, you just want to "parse" it and move on.
It is similar with auto-mapping scenarios, with the complication that auto-mapping was originally meant to be the validation step in some workflows and code architectures. I think that's personally why AutoMapper and similar libraries have had a code smell to me: the places those tools get used are often "parsing boundaries" more than they should be "validation boundaries", and the coupling between validation logic and AutoMapper logic starts to feel like a big ball of spaghetti versus a dedicated validation layer that is only concerned with validation, not also doing a lot of heavy lifting in copying data around.
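To make the "parse, don't validate" point concrete, a sketch with a hypothetical Email type: construction is the validation, so anything downstream holding an Email is already known-good, and pre-existing DB data can be wrapped without fighting a setter.

    using System;

    // Hypothetical: the public way to get an Email is Parse, so holding
    // an Email means validation already happened ("parse, don't validate").
    public readonly struct Email
    {
        public string Value { get; }
        private Email(string value) => Value = value;

        public static Email Parse(string raw) =>
            raw.Contains('@')
                ? new Email(raw)
                : throw new FormatException($"Not an email address: {raw}");

        // For legacy rows that predate the rule: wrap and move on,
        // no re-validation at read time.
        public static Email FromTrusted(string raw) => new(raw);
    }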
I'm surprised there isn't something pseudorandom thrown in for good measure – like a few digits of a hash of the source file.
To prevent easy Reflection? It would make debugging harder and make writing a debugger harder, for maybe a small gain of avoiding some user code breaking an encapsulation boundary here or there. (But those serious about using reflection to break encapsulation boundaries would likely build complex workarounds anyway.)
It is the compiler's job to guard encapsulation boundaries in most situations, but it's not necessarily the compiler's job to guard them in all situations. There are a lot of good reasons code may want to marshal/serialize raw data, and a lot of good reasons cross-cutting is desirable (logging, debugging, meta-programming), which is part of why .NET has such rich runtime reflection tools.
The trick with using characters which by definition are not allowed inside variable names, "<" and ">", should be sufficient no?
I believe the reason for this is that it would break deterministic builds.
dotnet build isn't deterministic by default. Never has been.
Except deterministic builds have been the default since 2015?
Serialization is a pretty good cause.
Serialization shouldn’t be dependent on the name of the backing field.
You are conflating awkward auto-generated backing fields with plain backing fields. A proper serializer handles these cases. Yes, serialization should and must depend on names; how else would it put things back together? The onus is always on the programmer not to break serialization, or to provide a migration.
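System.Text.Json, for instance, serializes by public property name and never touches the compiler-generated backing field, so the unspeakable name can't leak into a payload:

    using System;
    using System.Text.Json;

    public class Order
    {
        // Backed by the compiler-generated "<Id>k__BackingField",
        // but the JSON contract only sees the property name.
        public int Id { get; set; }
    }

    public static class Demo
    {
        public static void Main()
        {
            var json = JsonSerializer.Serialize(new Order { Id = 42 });
            Console.WriteLine(json); // prints {"Id":42}
        }
    }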
> be aware it's going to have a funky non C# compliant field name
That's longstanding behaviour. Ever since features such as anonymous types and lambdas arrived, the compiler has had to generate classes and methods for them. And of course these need names, assigned by the compiler. But these names are deliberately not allowed in user code: the compiler permits itself a wider set of names, including the "<>" characters.
I have heard them referred to as "unspeakable names" because it's not that they're unknown, you literally can't say them in the code.
e.g. by Jon Skeet, here https://codeblog.jonskeet.uk/category/async/ from 2013.
> they’re all "unspeakable" names including angle-brackets, just like all compiler-generated names.
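You can see them with reflection; the backing field of an auto-property conventionally comes out as "<PropertyName>k__BackingField", which no legal C# identifier can collide with:

    using System;
    using System.Reflection;

    public class Sample
    {
        public string Name { get; set; } = "";
    }

    public static class Program
    {
        public static void Main()
        {
            // Enumerate the compiler-generated private instance fields.
            foreach (var f in typeof(Sample).GetFields(
                BindingFlags.NonPublic | BindingFlags.Instance))
            {
                Console.WriteLine(f.Name); // prints "<Name>k__BackingField"
            }
        }
    }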
I can appreciate the steady syntactic sugar that C# has been introducing these past years; it never feels like an abrupt departure from the consistency throughout. I often think this is what Java could have been if it hadn't been mangled by Oracle's influence. Unfortunately, as much as I like Java, it's pretty obvious that different parts were designed by disjointed committee members each focused on just one thing.
This started long before Oracle; the favouring of verbose, ritualistic boilerplate code was set back at Sun. James Gosling was staunchly against operator overloading, properties, and value types (almost out of spite at Microsoft's success in providing these in C#), and the language and runtime still struggle with the aftermath today and forever will. It's unfortunate that the original inventor, while a brilliant programmer himself, thought so little of others that such features were left out because other programmers might mess up their use.
I always shy away from syntactic sugar. If I want a private field with a getter and setter, I write it into my code. Most of that code is written by autocomplete, and if I don't like to see it I just fold it away. I have control over the naming, and I can set breakpoints in the getter/setter to trap all those cases where I somehow write rubbish. I also have the benefit of seeing the field in my debugger and can access it for hydration without the setter. I see no real use in such new keywords. Just my 2 cents.
> I can set breakpoints into the getter/setter
field doesn't stop this.
> I also have the benefit of seeing the field in my debugger
The debugger could still show it. The backing field is still there.
The article says the backing variable is hidden from the debugger by an attribute, doesn't it?
That's why I wrote could. You can also use an IL postprocessor to get rid of the attribute.
How does C# the language or C# the language standard evolution process accommodate a new keyword with such a generic name? Is it context-dependent?
It's been a while, but from memory I think C# allows you to override keywords and use them as variable names when prefixed with @
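Right, the verbatim-identifier prefix; it still works:

    using System;

    // '@' lets a reserved word serve as an identifier; the '@' itself
    // is not part of the name.
    int @class = 1;
    string @event = "launch";
    Console.WriteLine($"{@class} {@event}"); // prints "1 launch"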
The compiler knows what you're doing. A keyword like 'field' inside a function's braces just isn't valid; putting 'field' after a type name in a variable declaration makes as much sense as 'private int class;'.
Yes, it's contextual. There are more details in this section of the article: "Naming Conflicts with Existing Class Members".
Thanks, my bad. I didn’t continue reading past the sections on Entity Framework and AutoMapper.
This is the first time they've done this in a long time, FWIW. So the answer is "they usually don't worry about this, because it almost never happens".
That said, the compiler will also throw warnings in the console during build if you use an all-lowercase word with some small number of characters. I don't remember the exact trigger or warning, but it says something like "such words may be reserved for future use by the compiler" to disincentivize their use.
Yes, you have to use field as the backing variable name in a property. The article is pretty clear about its usage.
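For anyone who hasn't read it yet, the usage looks roughly like this (the keyword is only live inside the property's accessors):

    using System;

    public class Temperature
    {
        // Inside these accessors, 'field' refers to the compiler-generated
        // backing field; elsewhere, 'field' remains an ordinary identifier.
        public double Celsius
        {
            get => field;
            set => field = value >= -273.15
                ? value
                : throw new ArgumentOutOfRangeException(nameof(value));
        }
    }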
Historically every time new keywords are added, they try to make them contextual so that existing code won't break. 'await' and 'yield' are both examples where a new keyword was added without (generally) breaking existing code - they're only keywords in specific contexts and the way you use them ensures that existing code won't parse ambiguously, AFAIK.
Though, contextual keywords are a thing going back to the original design of C# 1.0 even. The nearest and most obvious example to the topic at hand is that `value` is only reserved in situations such as a property setter, and always has been. You don't need `var @value = …` in the vast majority of C# code and can just write `var value = …` just about anywhere but inside a `set { }` block.
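To make that concrete:

    public class Counter
    {
        private int _count;

        public int Count
        {
            get => _count;
            set => _count = value; // 'value' is the implicit setter parameter
        }

        public void Demo()
        {
            var value = 10; // ...and an ordinary identifier everywhere else
            _count = value;
        }
    }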
Part of why C# has been so successful in introducing new contextual keywords is that they've been there all along. I think C# 1.0 was ahead of the game on that, and it's interesting how much contextual keywords have become a bigger tool in language design since C#: all of ES3 and ES4 and some of ES5 were predicated on "keywords are always keywords", and ES6/ES2015 is where you first start to see JS shift to a broader contextual-keyword approach, which seems equal parts inspired by C# as not.
I feel like in a few more years and 2-3 major versions, C# will have all the useful features of F#. It will also keep being much more exciting, because our benevolent corporate visionaries manage to add new gotchas with every major release and some minor ones.
...except compactness, which is the feature I love most
TL;DR: nothing surprising, it's just syntactic sugar.