Your definition of "basic math" greatly differs from mine...
> abstract algebra is not a requirement.
and talks about fields and groups
They're just spooky names for simple concepts - and the article defines them on first use. If abstract algebra were a requirement, they'd skip these definitions.
Paraphrasing 'Group' from the article to see if I've understood it:
A set of elements G, and some operation ⊕, where:

    (g1 ⊕ g2) is also in G. // "Type-safety"
    Some g0 exists such that (g ⊕ g0) == (g0 ⊕ g) == g // "Zero"
    For every g, there's some inverse gi such that (g ⊕ gi) == (gi ⊕ g) == g0 // "Cancelling-out"
    a ⊕ (b ⊕ c) == (a ⊕ b) ⊕ c // "Associative"

If (a ⊕ b) == (b ⊕ a), then the group is also "abelian"/commutative.
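Those rules are easy to machine-check for a small concrete case. Here's a sketch in Python (my own example, not from the article), using the integers 0–4 with addition mod 5 as the ⊕:

```python
# Candidate group: G = {0, 1, 2, 3, 4} with a ⊕ b = (a + b) % 5.
G = range(5)

def op(a, b):
    return (a + b) % 5

# Closure ("type-safety"): the result always stays in G.
assert all(op(a, b) in G for a in G for b in G)
# Identity ("zero"): 0 works from both sides.
assert all(op(g, 0) == op(0, g) == g for g in G)
# Inverses ("cancelling-out"): every g has some gi with g ⊕ gi == 0.
assert all(any(op(g, gi) == 0 for gi in G) for g in G)
# Associativity.
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in G for b in G for c in G)
# Bonus: the operation commutes, so this group is also abelian.
assert all(op(a, b) == op(b, a) for a in G for b in G)
print("all group axioms hold")
```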
Is the aspirin symbol you're using as + figure, a special kind of +, or just a different looking +? What does the circle around the + mean?
I'm mentioning this, as other people in this thread are discussing "explaining symbols you use", and you're using a non-standard symbol for +. I can easily imagine a circle around + making + a different operation, and wonder if it is so?
Aspirin I've bought in the past has a + on it, and its trademark is a + within a circle. That's why I've latched on what a "common person" might view the symbol as:
https://www.brand.aspirin.com/sites/g/files/vrxlpx46831/file...
Interestingly, I've taken university-level math courses, though decades ago, and have never run into that symbol. I see it here:
https://en.wikipedia.org/wiki/Direct_sum
⊕ is a standard symbol for this kind of math. The symbol itself is ancient because it's so simple, so I don't see what Bayer's aspirin logo has to do with it.
It acts as a normal +, mostly. When you're dealing with modulo math, the "normal" plus becomes a bit weird as there are rules attached to a number expressed as "(a + b) mod c", so mathematicians often use symbols like ⊕ to mean something like "+, but different". The second link you posted does the same, it acts sort of like normal addition, conceptually, except it's not done on actual numbers but groups.
In definitions like these, you may as well use a peace symbol or a picture of a frog; "some operation ⊕" means "there is some operation we write down like this, and it does this and that".
Another place you may find ⊕ is when it's used to represent XOR in some cases; (a + b) mod 2 is a bitwise XOR when operating on single bits (again, it means "normal addition except with weird rules", namely the mod 2 that makes you throw out anything larger than the last bit).
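That last claim is quick to verify directly; a small Python check (mine, not the parent's):

```python
# For single bits, addition mod 2 and bitwise XOR agree everywhere:
# the "mod 2" throws away the carry, which is exactly what XOR does.
for a in (0, 1):
    for b in (0, 1):
        assert (a + b) % 2 == a ^ b
print("mod-2 addition == XOR on single bits")
```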
⊕ is variable! Just like g1 or g2.
I specifically didn't use an already-existing symbol because then you wouldn't know if I'm talking about that symbol, or any symbol in general.
Integer multiplication is associative, e.g. (2 × 3) × 4 == 2 × (3 × 4), and it has an identity element, e.g. 5 × 1 == 5.

You can see a group and similar structures as sets of rules an object needs to follow to be considered a group or whatever. Conceptually, a group is anything that behaves like a group. It could be a dog! So, the operator can be anything you want as long as the indicated properties hold. It's like a generic API that lets you use whatever concrete type you want as long as it conforms to certain rules.
edit: What I mean is that, as a consequence, the symbol used is not really important.
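To make that concrete with a deliberately non-numeric example (my own illustration, not from the article): the subsets of a set form a group under symmetric difference, even though the elements aren't numbers at all:

```python
from itertools import chain, combinations

# Elements: all subsets of {1, 2}; operation: symmetric difference (^).
universe = (1, 2)
G = [frozenset(c) for c in chain.from_iterable(
        combinations(universe, r) for r in range(len(universe) + 1))]

def op(a, b):
    return a ^ b  # symmetric difference of frozensets

assert all(op(a, b) in G for a in G for b in G)   # closure
assert all(op(a, frozenset()) == a for a in G)    # identity: the empty set
assert all(op(a, a) == frozenset() for a in G)    # each element is its own inverse
assert all(op(op(a, b), c) == op(a, op(b, c))     # associativity
           for a in G for b in G for c in G)
```

The group axioms don't care that the "elements" are sets and the "operation" is ^; only the rules matter.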
They're spooky names for simple concepts with extremely deep consequences and hard theory; don't be fooled.
I've recently understood how RSA works and thought it was a cool achievement. But this article with "basic" math... Not so enjoyable for just a dev =)
Sorry about that. I tried to introduce the necessary concepts starting from zero, or I thought I did.
We devs (take back that "just" :) ) deal with much harder stuff when we build complex APIs, so the problem must be at the syntactic level. To us devs, math may look like an antipattern, with all the short names and operator overloading.
But that's unavoidable, unfortunately. It's normal to spend hours or more on a single concept until it clicks. I'd say don't give up, but I understand one's time is valuable, and the return might not be high enough to justify the cost.
ECDSA is a horrible workaround for the patent on Schnorr signatures. Here's my talk from 2019 about the issue.
https://www.youtube.com/live/2IpZWSWUIVE?si=-LRRbU2mJgL9LiNP...
Great talk. Wish the camera focused on the slides more.
The ed25519 issues are absolutely insane. Anywhere I can read more about that?
Excellent. Really enjoyed that.
A big part of what makes maths hard for non-math-people is math notation. That is like if a well versed python programmer told you it is as simple as:
    result = [[{"x": x, "y": y, "v": (x+y if (x+y)%2==0 else None)} for (x, y) in row] for row in data]

Now, with a bit of reasoning, another programmer who hasn't used Python might be able to figure out what that means. But what if my audience is non-programmers? The moment they encounter the first unexplained square brackets and then an opening curly brace, it will essentially feel like telling them: "Here is a riddle for you", or potentially even like "I expect you to know this, dummy".

Not that this text was particularly bad in that regard, but I wish more math people had a heightened awareness of the fact that for many, the hard part is not understanding the concept (e.g. the Fourier transform), but the foreign-looking signs mathematicians have decided to use to write it down.
That is as if someone explains the way to the next train station to you in a foreign language. The hard part isn't understanding the way, it is understanding the noises that are supposed to make up the description.

And as a programmer who from time to time has to translate maths into discrete programs (or the other way around), the hard part was always parsing the notation, and when I figured it out I was usually like: "Ohh, this is just a simple algorithm doing that."
So if you want to explain a math concept to programmers, you should choose one of two routes:

(A) Stay with your notation, explain at length every character that isn't visible on a regular keyboard, and gently lead the reader into being able to read the notation; or
(B) let go of the notation and first explain what it does and how, e.g. for our FFT example: FFT slices your list of values into frequency buckets, figures out how much of each frequency is present, and returns those strengths as numbers. And then you can work backwards from that understanding towards your notation explaining which sign relates to which part of the concept (e.g. to the number of buckets).
I would prefer the latter, since it explains both the concept and gives the mathematician a chance to explain how and why math notation can be useful on top, e.g. to figure out certain properties of the method that may even have practical implications.
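For instance, the plain-words description in (B) can be turned into a naive (and deliberately slow — the "fast" part of FFT is missing) Python sketch of the underlying discrete Fourier transform; the signal and numbers here are my own illustration:

```python
import cmath
import math

def dft(samples):
    """For each frequency bucket k, measure how much of that
    frequency is present in the list of samples."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(samples))
            for k in range(n)]

# A signal completing exactly 2 cycles over 8 samples...
signal = [math.sin(2 * math.pi * 2 * i / 8) for i in range(8)]
strengths = [abs(c) for c in dft(signal)]
# ...lands (almost) entirely in bucket 2 (and its mirror, bucket 6).
```

Working backwards from this, each part of the usual ∑-notation maps onto a piece of the code: the ∑ is the inner sum, k indexes the frequency buckets, and n is the number of buckets.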
Was nodding along as I was reading this. I recently was given a paper and spoke with the engineer implementing it. The paper was incredibly dense and hard to parse. But through talking with the engineer and rewriting some terms to more common names, the math turned out to be quite simple. Echoing your sentiment, I wish more mathematicians would use simple terminology. My personal theory as to why this isn't done is the same reason why overengineering happens, that the writer is trying to cover every base but makes the hottest path a jumbled mess.
My personal theory is that (notation+terms of art) is incredibly information dense even if inscrutable to outsiders.
What you wish for is more akin to coding like this:

    declaring a function whose name is "max" and its arguments are "a" (of type number) and "b" (of type number) that returns a number:
    statement: if a is greater than b, the function returns a
    statement: the function returns b

But programmers don't bat an eye at {}[](),.!&^| (and I just realized I used the term "function", which outsiders might wish was replaced by simpler terminology!):

    // This is more readable if you're "in the know",
    // even if it looks like a jumbled mess to outsiders
    fn max(a: num, b: num): num => a > b ? a : b

Math uses terms of art like "group", "field", "modulo" and "multiplicative inverse", and notation like "∑", because they are short and communicate very specific (and common) things, many of which are implicit and we probably wouldn't even notice.

In other words: we're not the target audience.
Note that this is not only a matter of conciseness. See Ken Iverson's (of APL/J fame) "Notation as a Tool of Thought": https://www.eecg.utoronto.ca/~jzhu/csc326/readings/iverson.p...
> Math uses terms of art like "group", "field", "modulo" and "multiplicative inverse"; and notation like "∑"; because they are short and communicate very specific (and common) things, many of which are implicit and we probably wouldn't even notice.
I don't have anything against introducing new words. If your concept can be adequately described by existing language, that seems like a good way to allow people to learn and talk about it. Technically, as a person who has studied philosophy, the Greek alphabet is also no big hurdle to me. But it is to others. Try googling some weird sign you found in a formula. First, you don't know what it is called or how to write it; second, any sign might have been used in 100 different formulae, so even if you know how to search for it (there are applications people use to identify mathematical signs), good luck finding any meaningful answer.
I know for mathematicians these signs are arbitrary and they would say you could just use emojis as well. But then it turns out mathematicians ascribe meaning to which alphabet they are using and whether it is upper- or lowercase. Except sometimes they will break that convention for what appears to be mostly historical reasons.
I know mathematicians will get used to this just fine, but the mathematical notation system has incredibly bad UX, and the ideals embedded within it are more about density and opacity (only the genius mathematician knows what is going on) than about rigorous precision and understanding.
When I studied philosophy there were philosophers like Hegel who had to expand the German language to express their new thoughts. And there were philosophers who shall remain unnamed that would use nearly unparseable dense and complex language to express trivial thoughts. The latter always felt like an attempt to paper over their own deficiencies with the convoluted language they had learned to express themselves in.
Mathematicians can also have a degree of the latter at times. If your notation is more complex than the problem it describes, your notation sucks and you waste collective human potential by using it.
The article would be a lot better if it was what it said on the tin, instead of being filled with lots of unnecessary (as described in the article) digressions. If you couldn’t restrain yourself to sticking to the subject, at least put the digressions behind links or footnotes or pop-ups where they don’t detract from reading about the actual claimed intended subject.
I think that's a little unfair, as there's only a single digression about Fibonacci numbers (a very interesting one, IMO). The section is clearly indicated as skippable and can be quickly skipped by using the tree on the right.
Since my exposition is constructive in nature, the proofs and other remarks are an integral part of the article, not digressions.
In addition to the malleability attack (high-S and low-S both being valid for a given value of R), ECDSA doesn't provide a property called exclusive ownership: https://soatok.blog/2023/04/03/asymmetric-cryptographic-comm...
In contrast, EdDSA (which is based on Schnorr signatures) does, by construction: the public key is included in one of the hashes, which binds the signature to a particular public key.

I haven't investigated whether cryptocurrency's use of Schnorr satisfies this property. (Indeed, I do not care about cryptocurrency at all.) So whether it's satisfied or not is left as an exercise to the reader :3
Excellent blog, by the way. I especially love the humility: advanced concepts in cryptography, and then I see an article for new people about how to get into tech. Keeping the ladder down, so to speak.