Ugh, someone recently sent me LLM-generated meeting notes for a meeting that only a couple of colleagues were able to attend. They sucked, a lot. Got a number of things completely wrong, duplicated the same random note a bunch of times in bullet lists, and just didn’t seem to reflect what was actually talked about. Luckily a coworker took their own real notes, and comparing the two made it clear that LLMs are definitely causing more harm than good. It’s not exactly the same thing, but no, we’re not there yet.
You just have to love that these assholes are so lazy that they first use an LLM to write their work, but then are also too lazy to quickly proofread what the LLM spat out.
People caught doing this should be fired on the spot; you’re not doing your job.
I hosted a meeting with about a dozen attendees recently, and one attendee silently joined with an AI note taking bot and immediately went AFK.
It was in the meeting for about 5 minutes before we clocked it and kicked it out. It automatically circulated its notes. Amusingly, 95% of them were “is that a chat bot?”, “Steve, are you actually on this meeting?”, “I’m going to kick Steve out in a minute if nobody can get him to answer”, etc. But even with that level of asinine, low-impact chat, it still managed to garble them to the point of being barely legible.
Also: what a dick move.
Wait until you hear about doctors using AI to summarize visits 😎
What about it?
All the above would apply to doctor visit notes. Would you find that helpful?
Plus, they can hallucinate phrases or entire sentences
Have you seen current doctor visit note summaries? The bar is pretty low. A lot of these are made with conventional dictation software that has no sense of context when it misunderstands. I agree the consequences can be worse if the context is wrong, but I would guess a well-programmed AI could summarize better on average than most visit summaries do currently. With this sort of thing there will be errors, but let’s not forget that there already ARE errors.