Humans *are* amazing, and there is a generality to our intelligence that seems meaningfully beyond what we're building in language models, but we should be under no misconceptions about the rate at which problems become doable for LLMs.
You can believe the inner quote while recognizing how wrong the outer quote is. 3-4 years ago people were amazed LLMs could answer simple questions. Now they're complaining that the critical infra those models write has too many bugs. 3-4 years from now the models will be unrecognizably good.
The core of AI psychosis, by the way, is the failure to properly align human-intelligence axes with AI-intelligence ones.