A couple of counter-points:
1. You’re about to spend 100k+ tokens on generated code, so why add 1-2 seconds of valuable human time backtracking to fix a typo, or typing slower to avoid one? At 100 tok/s that’s 10ms/tok. Can you spell correctly at an upper bound of 20ms slower versus a keyboard mash, to save those 2 tokens at zero net cost? I haven’t done the math, but I suspect that’s well over a 100wpm delta.
2. If you spend your mental load on correct spelling, you might introduce a grammatical ambiguity that adds a round trip, costing 10-120 seconds of human time.
I’m not saying this advice is penny wise / pound foolish in 100% of circumstances, and it’s great to see the data, but there’s a bigger picture to consider. Premature optimization, and all that.
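Back-of-the-envelope, the tradeoff in point 1 can be sketched in a few lines. The 100 tok/s generation speed, the ~2 extra tokens per typo, and the 1-2 second human cost are all the rough assumptions from the comment above, not measured values:

```python
# Rough cost comparison: extra model tokens from a typo vs. extra human time.
# All numbers are the assumptions from the comment above, not measurements.

TOKENS_PER_SEC = 100         # assumed model generation speed
EXTRA_TOKENS_PER_TYPO = 2    # assumed token overhead of one misspelling

# Model-side cost of leaving the typo in:
model_cost_sec = EXTRA_TOKENS_PER_TYPO / TOKENS_PER_SEC  # 0.02 s

# Human-side cost of avoiding it (backtracking, or typing more carefully):
human_cost_sec = 1.5  # midpoint of the 1-2 s estimate above

print(f"model pays {model_cost_sec:.2f}s, human pays {human_cost_sec:.2f}s")
print(f"avoiding the typo costs ~{human_cost_sec / model_cost_sec:.0f}x more time")
```

Under these assumptions the human pays roughly 75x more wall-clock time than the model does, which is the point the comment is making.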
I can't write any prompt without typos; I never thought it would increase my LLM cost. Time to fix this habit.
I am gonna keep my "please" and "thanks" though. Trying to save a few tokens could end up changing my habits in real life.
I agree. I've switched to "pls" and "tx" in my own typing because of how common they are.
I appreciate that the post was short and to the point, but it is, like so much content submitted to HN now, heavily filtered through an LLM's voice, if not completely written by one.
Here is how the author used to write: https://pankajpipada.com/posts/2022-07-09-voraciousness/
> Writing things down generally helps me build my own clarity and I hope to get the same out of this particular write up. Hopefully I achieve this clarity sooner rather than later.
Are people really trying to cut the token counts of their manually-typed prompts? Obviously it's good to do what you can to trim skills, documentation that gets ingested, or anything which is in an automated pipeline, but trying to police your typing like this in a live chat session just seems like making yourself crazy to save a fraction of a cent.
Agreed. Saving maybe a dozen tokens is meaningless when a task can easily chew through ten thousand times that many. A single misdescribed task will use more tokens than all my spelling mistakes all year.