Clickbait.
They tested it on a single, year-old model (Llama2-70B), which is way below the state of the art.
Lots of people seem to like them. I’ve read several AI summaries here on Lemmy too, and they almost always miss the nuance. Oftentimes they miss the point entirely, so much so that the summary sometimes ends up being outright wrong.
One evaluator highlighted this problem by calling out an AI summary for being “wordy and pointless—just repeating what was in the submission.”
That basically sums up my view of AI-generated text: it has a certain waffle to it that is very distinctive, regardless of the source.
I find AI summaries so damn tedious to read; they just keep repeating themselves over and over again :(
Yeah, and they say the same things repeatedly, too!
And then they start repeating it again, just differently, but same, again!
In conclusion, the conclusion most often concludes by concluding that we can conclude, that the answer is the answer.
To summarize, the summary most often summarizes the problem and its nuances, and concludes by summarizing the conclusion as the correct answer.
Therefore, the correct answer is correct.
Let me know if there is anything else I can help you with today.
And if it does not work, let’s try another approach.
Gives the exact same information/instructions.