Sorry, but that article is complete nonsense.
LLMs are pretty clear in what they do. It’s true that they are often superficial, but most of this superficiality comes from the creators trying to limit liability. ChatGPT is often evasive or superficial on purpose, because OpenAI is trying to strike a balance between usefulness and the risk of being sued.
LLMs do not try to be smart. They don’t do tricks. They are built to give the best answer they are capable of producing (given how they are trained and built). Sometimes these answers are good, sometimes not, sometimes mixed.
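To be concrete about what “no tricks” means: at each step the model just scores every vocabulary token and emits a continuation, nothing more. A minimal sketch of greedy decoding, assuming the Hugging Face transformers library and the small “gpt2” checkpoint (both purely illustrative choices):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
    for _ in range(5):
        logits = model(ids).logits        # scores for every vocabulary token
        next_id = logits[0, -1].argmax()  # pick the single most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    print(tokenizer.decode(ids[0]))

Whether the resulting continuation is good or bad, there is no hidden intent in it: it is just the most probable next tokens given the training data.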
Why write a whole article trying to demonstrate fraud in a tool? Is a washing machine a fraud because it tries to convince me my clothes are clean? I am satisfied with the results, given that it is a machine; my aunt complains that “washing by hand” is better.
Same situation here: some people are happy, some would like more…
Definitely the first. I work in ML, and I find, for instance, that people with a background mainly in C# are the least fit for my field, particularly if they have long experience. So I understand these kinds of requests.