I've seen arguments like this for a while, and even if it's true now, I'm not sure why capability would stay at that level.
Even if you assume LLMs fully stop improving, heavy re-training of many customized models (acting as modules) attached to more traditional statistical systems should clear the "cat" level without too much trouble. But I don't see that happening, because I don't think LLMs/other massive single training runs are out of steam yet.