I use Claude Code every day; I've written plugins and skills, use MCP servers and subagent workflows, and filled out the "Find your level" quiz as such.
According to the quiz, I am a beginner!
I was a bit confused by the quiz results as well. But it's just a bug :)
Level ranges for the 10 questions (the score ranges are in the HTML): Beginner 0~3, Intermediate 4~7, Advanced 8~10
Makes sense. But:
- You get 0 points if you press A/B, 1 point if you press C, 2 points if you press D
- Scoring uses a fallback to Beginner level if your total score exceeds the expected max which is 10
`const t = Object.values(r).find(a => l >= a.min && l <= a.max) ?? r.beginner`
Pressed D 5x then A 5x, got Advanced
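The bug described above can be reproduced in a few lines. A minimal sketch, assuming the point values and score ranges quoted from the page's HTML; names like `findLevel` are illustrative, not the site's actual code:

```javascript
// Points and ranges as quoted above: A/B = 0, C = 1, D = 2; expected max = 10.
const POINTS = { A: 0, B: 0, C: 1, D: 2 };
const RANGES = [
  { name: "beginner", min: 0, max: 3 },
  { name: "intermediate", min: 4, max: 7 },
  { name: "advanced", min: 8, max: 10 },
];

function findLevel(answers) {
  const total = answers.reduce((sum, a) => sum + POINTS[a], 0);
  // The quoted fallback: a total outside every range (e.g. 20 for ten D
  // answers, since 2 × 10 > 10) matches nothing and silently becomes beginner.
  const level = RANGES.find(r => total >= r.min && total <= r.max) ?? RANGES[0];
  return level.name;
}

findLevel([..."DDDDDDDDDD"]); // total 20, no range matches → "beginner"
findLevel([..."DDDDDAAAAA"]); // total 10 → "advanced"
```

So ten D answers overflow the scale and fall through to the beginner fallback, while five D's padded with A's land exactly on 10 and read as advanced.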
And you’ll never guess who wrote it…
I think it’s just buggy; I had the same results despite knowing every single question in depth, other than building a plugin.
Did anyone not get beginner?
I got it as well.
I responded with a mix of mostly B and C answers and got “advanced.” Yet, as pointed out by another commenter, selecting all D answers (which would make you an expert!) gets you called a beginner.
I can only assume the quiz itself was vibe-coded and not tested. What an incredible time we live in.
I'm a beginner with agentic coding. I vibe code something most days, from a few lines up to refactors over a few files. I don't knowingly use skills, rarely _choose_ to call out to tools, haven't written any skills and only one or two ad hoc scripts, and have barely touched MCPs (because the few I've used seem flaky and erratic). I answered as such and got... intermediate.
A lot of these quizzes end up measuring whether you use the author's preferred workflow, not whether you're actually effective with the tool.
Those aren't the same thing.
Just ask it to fill it in for you.
Master level.
Hey! Thanks for the feedback on the quiz, and you're right - the scoring logic has a bug. It's already on my fix list.

But the quiz is just the entry point. The real value is the 11 interactive modules and terminal simulators where you practice actual Claude Code commands, config builders that generate real files, and quizzes that explain the "why" when you get it wrong.
Would love to hear what you think of the actual modules.
If the entry point is obviously broken, most people won’t continue on to the “real value” - myself included.
There seems to be a particular way that people working with LLMs start speaking - it's utterly confident, leaving no room for self-introspection, borderline arrogant, and almost always selling the last thing they output. Hm
Strongly agree with the sentiment, but I'd say if you're familiar with the terminal you may as well just install it and truly 'learn by doing'!
I could see this being great for true beginners, but for them it might be nice to have even some more basics to start (how do I open the terminal, what is a command, etc).
I feel that the tricky part now is you can “learn by doing” without ever knowing if you’re doing it right. You get something working, but your mental model can be completely off.
I’m missing something here. Isn’t the best “doing” to actually use Claude to build stuff? The barrier to entry is so low.
Why do you need to memorize slash commands? They are somewhat useful and you can just read them from the autocomplete.
People will do anything to avoid RTFM.
Many of the same people probably use LLMs to avoid having to WTFM, so I’m not surprised.
I feel there’s a lot of marketing and pure bullshit around LLMs configuration and conventions.
The law of diminishing returns applies here perfectly - you can learn prompting in 2 hours and get a 400% performance boost, or spend weeks on subagents and skills and Opus and at best it’s another 50% boost. But not really - in my case, on a good day Sonnet is a genius and on a bad one Opus is a moron. One day the same query consumes 6k tokens, the next 700k.
They want to get you hooked and need to show investors they’re super busy, but in fact it’s mostly smoke and mirrors. And prompting, once you learn to give proper context, is far from rocket science.
find your level -> answer D to everything -> you're a beginner! And I thought I had high standards...
Despite reading many articles / blog posts about Claude operation, this site had nuance about features that I hadn't encountered. Tests may not work correctly, but the value (for me) was, ironically, reading.
Is that quiz correct? I answered mostly C or D and maybe a few B's, but still got "Beginner". How?!
The quiz is super weird too. The A-C answers are knowledge questions; D is something you’ve done.
I love the pedagogical approach here and the ability to easily home in on your level before diving into content. Your approach would work really well for other subjects as well.
Thank you to OP -- this was a really easy way to look up how plugins work inside of Claude Code.
This is awesome, thanks for sharing!
Side note: I don’t know what Anthropic changed but now Claude Code consumes the quota incredibly fast. I have the Max5 plan, and it just consumed about 10% of the session quota in 10 minutes on a single prompt. For $100/month, I have higher expectations.
Relevant: https://www.reddit.com/r/ClaudeAI/comments/1s7zgj0/investiga...
https://www.reddit.com/r/ClaudeAI/comments/1s7mkn3/psa_claud...
That explains things. I'm getting this:

API Error: 400 {"error":{"message":"Budget has been exceeded! Current cost: 271.29866200000015, Max budget: 200.0","type":"budget_exceeded","param":null,"code":"400"}}
So I completely ran out of tokens and haven’t even used it at all for the past couple of days, and last week my usage was very light. Let me scratch that: all my usage has been very light since I got this plan at work. It’s an enterprise subscription, I believe - hard to tell, since it doesn’t connect directly to Anthropic; rather, it goes through a proxy on Azure.
I'm not liking this at all - so flaky and opaque. It's not possible to get a breakdown of what the usage went on, right? Do we have to contact Anthropic for a refund, or will they restore the bogus usage?
A serious problem here is that it's nearly impossible to understand what a "token" is and how to tame token use in a principled way.
It's like if cars didn't advertise MPG, but instead something that could change randomly.
Anthropic really needs to open-source Claude Code.
One of the biggest turnoffs as a Claude Code user is the CC community cargo-culting the subreddit, because community outreach is otherwise poor.
I noticed 1M context window is default and no way not to use it. If your context is at 500-900k tokens every prompt, you’re gonna hit limits fast.
I had to double check that they'd removed the non-1M option, and... WTF? This is what's in `/config` → `model`:

1. Default (recommended) Opus 4.6 with 1M context · Most capable for complex work
2. Sonnet Sonnet 4.6 · Best for everyday tasks
3. Sonnet (1M context) Sonnet 4.6 with 1M context · Billed as extra usage · $3/$15 per Mtok
4. Haiku Haiku 4.5 · Fastest for quick answers

So there's an option to use non-1M Sonnet, but not non-1M Opus?

Except wait, I guess that actually makes sense, because it says Sonnet 1M is billed as extra usage... but also WTF, why is Sonnet 1M billed as extra usage? So Opus 1M is included in Max, but if you want the worse model with that much context, you have to pay extra? Why the heck would anyone do that?
The screen does also say "For other/previous model names, specify with --model", so I assume you can use that to get 200K Opus, but I'm very confused why Anthropic wouldn't include that in the list of options.
What a strange UX decision. I'm not personally annoyed, I just think it's bizarre.
`export CLAUDE_CODE_DISABLE_1M_CONTEXT=1`
Do you pay for the full context every prompt? What happened to the idea of caching the context server-side?
I've been jumping from Claude -> Gemini -> GPT Codex. Both Claude and Gemini really reduced quotas and so I cancelled. Only subbed GPT for the special 2x quota in March and now my allocation is done as well.
I decided to give opencode a try today. It's $5 for the first month. Didn't get much success with Kimi K2 - overly chatty, built too complex solutions - burned 40% of my allocation and nothing worked. ¯\_(ツ)_/¯
But Minimax m2.7. Wow, it feels just like Claude Opus 4.6. Really has serious chops in Rust.
Tomorrow/Wednesday will try a month of their $40 plan and see how it goes.
Minimax 2.7 is great. Not close to Claude but good enough for a lot of coding tasks.
I've heard this a few times lately, but this past weekend I built a website for a friend's birthday, and it took me several hours and many queries to get through my regular paid plan. I just use default settings (Sonnet 4.6, medium effort, thinking on).
I'm guessing Opus eats up usage much, much faster. I don't know what's going on, since a lot of people are hitting limits and I don't seem to be.
Update: Maybe the difference is that I think I was just using the vscode extension at the time: https://news.ycombinator.com/item?id=47586176
I go back and forth between vscode and claude in the terminal, but that day I think I did vscode.
What they changed was peak vs. off-peak usage metering. Using it on the weekend gets you more use than during weekdays 9-5 US Eastern time.
Even with Opus I don’t usually hit limits on the standard plan. But I am not doing professional work at the moment and I actually alternate between using the LLM and reading/writing code the old fashioned way. I can see how you’d blow through the quota quickly if you try to use LLMs as universal problem solvers.
Have had similar issues with costs sometimes being all over the map. I suspect the major providers will figure this out, as it’s an important consideration in the enterprise setting.
This is a very normal thing to be the top comment on an article on how to use Claude Code.
They need to get to profitability because that sweet sweet Saudi subsidy cash is gone gone.
They won't be profitable at this point... they just don't realise they are eating their own tail.
Looks like they are falling victim to their own slop. This smells a lot like the Amazon outages caused by mandated clanker usage.
things are rough out there right now
I'm very surprised to see enshittification starting so early. I was expecting at least 3-4 years of VC-subsidized gravy train.
This has been 6 months of constant decline so at this point I am wondering when they cliff it like wework
Reminds me of when I would mess with my friends on "pay per text" plans by sending them 10 text messages instead of just 1. I should start paying attention to unattended laptops and blow up some token usage in the same manner.
It's almost like an evolution of bobby tables.
Why would anyone want to "learn" how to use some non-deterministic black box of bullshit that is frequently wrong? When you get different output for the same input, how do you learn? How is that beneficial? Why would you waste your time learning something that is frequently changing at the whims of some greedy third party? No thanks.
One of the things you can learn is how to get consistently useful results out of it despite it being a non-deterministic black box.
Because you will soon be working for it unless you learn to make it work for you.
It's fucking insane that we all have to pay rent every month to an AI company just to keep doing our jobs.
No. 100% no. Learn the art of programming. Read K&R. In 5 years we will see "new is old" again. Tokens will become prohibitively expensive and, once more, another $steve.ballmer.2.0 will be yelling "developers ... developers". And Claude Code ... will become another "pentesting" / "linting" tool.
Are people again learning a new set of tools? Just tell the AI what you want; if the AI tool doesn't allow that, then tell another AI tool to make you a translation layer that converts natural language to the commands, etc. What's the point of learning yet another tool?
I cannot decipher what you mean - did you mix up your tabs and mean to post this somewhere else?
The linked site is a pretty good interactive Claude tutorial for beginners.
I don't understand the purpose of a tutorial for a natural language ai system.
Nope, why would anybody type commands to a machine that does natural language processing? Just tell the thing what you want.
I haven't used Claude, but the problem seems to be not refusal, but cheerful failure. "Sure, I'll help you with that!" And it produces something wrong in obvious and/or subtle ways.
I think somewhere between 2016 and 2026 the market realized that programmers _love_ writing tools for themselves and others, and it went full bore into catering to the Bike Shedding economy, and now AI is accelerating this to an absurd degree.
Me too - I love writing tools for myself and end up yak shaving all the time. But why is there a tutorial for a machine that understands human language? Just type out your inner monologue and it will do it.
I continue to find the non-stop claude spam fascinating. Gemini and ChatGPT have been very good for my needs, Claude not so much. Every week, if not every day, Claude spam is all over this site. But barely a peep about Gemini or ChatGPT coding capabilities.
That’s good to know your personal preferences. Please keep us posted!
I started with Claude for a basic JS project. It failed over and over. Gemini sorted out the same problems faster. Claude was always wanting to rip out huge blocks of code and replace them. Did it fix the problem? Almost never. It was a small JS code base Claude made itself.
Claude was my first coding AI, I liked it, I wanted to use it. But when I ran out of tokens I went to Gemini and got way better results.
And now every day I see Claude spam like it's the best thing that ever happened. Real-world use tells a different story. I didn't just "one-off" try it and have a problem. This has been weeks and weeks of issues.
Claude fails basic questions when given very clear prompts - ON VANILLA JAVASCRIPT.
Tool du jour, similar to web framework of the month, etc. Gemini and ChatGPT are just as useful.