A new paper from Apple's artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.
Apple’s study proves that LLM-based AI models are flawed because they cannot reason
This really isn’t a good title, I think; it was already well understood that LLM-based models don’t reason.
A better one would say that the researchers at Apple proposed a metric that better accounts for reasoning capability, a better sort of “score” for what an AI can actually do.
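To make that concrete, here is a minimal sketch of how such a perturbation-based "score" could work, assuming the idea is something like the paper's approach of varying the surface details of a question while keeping the underlying reasoning fixed. Everything below (the template, the stub `query_model`, the 20% error rate) is an illustrative assumption, not the paper's actual benchmark or code:

```python
import random
import statistics

# One grade-school-style question as a template: the name and numbers are
# placeholders, so many surface-level variants share one reasoning step.
TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "How many apples does {name} have in total?"
)

def make_variant(rng: random.Random) -> tuple[str, int]:
    """Instantiate the template with fresh surface details; return (question, gold answer)."""
    name = rng.choice(["Sophie", "Liam", "Mei", "Omar"])
    x, y = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, x=x, y=y), x + y

def query_model(question: str) -> str:
    """Stand-in for a real LLM call. This stub sums the numbers it finds but
    errs 20% of the time, to simulate pattern-matching brittleness."""
    nums = [int(tok) for tok in question.split() if tok.isdigit()]
    answer = sum(nums)
    if random.random() < 0.2:
        answer += random.choice([-3, -1, 1, 5])
    return str(answer)

def accuracy_on_variant_set(rng: random.Random, n: int = 50) -> float:
    """Score the model on n fresh variants of the same underlying problem."""
    correct = 0
    for _ in range(n):
        question, gold = make_variant(rng)
        if query_model(question).strip() == str(gold):
            correct += 1
    return correct / n

if __name__ == "__main__":
    rng = random.Random(0)
    scores = [accuracy_on_variant_set(rng) for _ in range(10)]
    # A genuine reasoner should be insensitive to name/number swaps:
    # accuracy stays high and the spread across variant sets stays near zero.
    print(f"mean accuracy: {statistics.mean(scores):.2f}")
    print(f"std dev across variant sets: {statistics.stdev(scores):.2f}")
```

The point of a metric like this is that the spread across variant sets, not the raw accuracy on any one fixed test set, is what distinguishes reasoning from memorized pattern matching.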