Each one has its own strengths and I use each one for different tasks:
- DeepSeek: excellent at coming up with solutions and churning out working prototypes with Reasoning mode turned on.
- Claude Code: I use this with Cursor to quickly put together overviews / READMEs for repos or new code I'm browsing, and for making quick changes to the codebase (I only use it for simple tasks and don't usually trust it to implement more advanced features).
- Qwen Coder: similar to DeepSeek but much better at working with visual / image datasets.
- ChatGPT: I usually use it for quick answers, finding bugs in code, and resolving issues.
- Google Gemini: catching up to the other models on coding and more advanced tasks, but still produces code that is a bit too verbose for my taste. Still, solid progress since its initial release, and it will most likely catch up to the other models on most coding tasks soon.
I've finally got Qwen3 Coder 30B sorted and was using it all day with Qwen Code.
Qwen3 30B Thinking is likely still better than Coder?
Free Gemini 2.5 Pro is my backup for the tough problems.
Tomorrow, though: LM Studio just released their latest version, which greatly improves tool calling with GPT 20B. I'm running it at 120k context and medium reasoning. I'm pretty confident it's about to become my go-to.
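For anyone curious what that tool-calling setup looks like in practice, here's a minimal sketch against LM Studio's OpenAI-compatible local server. The port, the model id (`openai/gpt-oss-20b`), the "Reasoning: medium" system hint, and the `read_file` tool are all assumptions / placeholders; check what your own install actually exposes.

```python
# Minimal sketch (untested): tool calling through LM Studio's local
# OpenAI-compatible server. Endpoint, model id, and the read_file tool
# definition below are placeholders, not verified values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

tools = [{
    "type": "function",
    "function": {
        "name": "read_file",  # hypothetical tool the model may request
        "description": "Read a text file from the local project.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

resp = client.chat.completions.create(
    model="openai/gpt-oss-20b",  # placeholder id; use the name LM Studio shows
    messages=[
        # Assumption: the "Reasoning: medium" system hint mirrors the
        # medium-reasoning setting mentioned above; LM Studio also exposes
        # this as a per-model setting in its UI.
        {"role": "system", "content": "Reasoning: medium"},
        {"role": "user", "content": "Open README.md and summarize it."},
    ],
    tools=tools,
)

# If the model decided to call the tool, the request shows up here.
print(resp.choices[0].message.tool_calls)
```

Note that the context length (the 120k mentioned above) is set in LM Studio when you load the model, not per request.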
My favorite LLMs ranked:
I use claude-4-sonnet, then gemini-2.5-pro as a fallback:
claude-4-sonnet: seems to be the best at tool calling and at actually changing the lines
gemini-2.5-pro: solves what sonnet can't, but you have to run it a couple of times to work the tool-calling mistakes out
Claude Code is my daily pair-programming buddy.
I tried ChatGPT, Claude, Claude Code, Cursor and ampcode. I stuck with Claude Code in the end.
Devstral + OpenHands is still my workhorse.