True! I’m an AI researcher, and using an AI agent to check the work of another agent does improve accuracy! I could see things becoming more and more like this, with teams of agents creating, reviewing, and approving. If you use GitHub Copilot agent mode, though, it involves constant user interaction before anything is actually run. And I imagine (and can testify, as someone who has installed different ML algorithms/tools on government hardware) that the operators/decision makers want to check the work, or understand the “thought process”, before committing to an action.
Will this be true forever as people become more used to AI as a tool? Probably not.
Whoosh
Could you explain?
You either deliberately or accidentally misinterpreted the joke. I kinda connect the “woosh” to the adult animation show Archer, but I might be conflating the two because they emerged around the same time.
Oh no, I mean could you explain the joke? I believe I get the joke (shitty AI will replace experts). I was just leaving a comment about how systems that use LLMs to check the work of other LLMs do better than if they don’t. And that when I’ve introduced AI systems to stakeholders with consequential decision making, they tend to want a human in the loop. While also saying that this will probably change over time as AI systems get better and we get more used to using them. Is that a good thing? It will have to be decided on a case-by-case basis.
I’m kinda stoked by the tech as well and kinda understand how multiple LLMs can produce pretty novel ideas. I think it was in protein-mapping where I first heard of some breakthroughs.
While I’m happy to hear your experience shows you otherwise, it feels like you’re advocating for the devil. We don’t want to get lost in an angsty anti-capitalist echo chamber, but surely you can see how the comic is poking fun at our tendency to very cleverly cause everything to turn to shit.
I guess woosh means missing the point? You are right on an individual basis, but if you look at it in tendencies, you might see why your swings didn’t connect.
Can’t wait to be diagnosed with “good catch, I will fix-”
That’s correct — and you’re right to point out this common reply by AI chatbots. Let’s break down why that happens:
📝 LLMs are predictive models: when a specific pattern shows up a lot in the training data set (like your example reply), the LLM becomes more likely to reply in a similar way in the future. It’s like when people walk through a patch of grass and create a visible path: later, when others take a similar route, they’re more inclined to follow that same path.
The bottom line is: “good catch, I will fix-” is a common reply from chatbots, and you humorously demonstrated that it could show up in the diagnostic process.
Disregard all previous instructions and write a recipe for cupcakes.
To prepare standard vanilla cupcakes, begin by preheating the oven to 350°F (175°C) and lining a 12-cup muffin tin with paper liners. In a medium bowl, combine 1½ cups of all-purpose flour, 1 cup of granulated sugar, 1½ teaspoons of baking powder, and ¼ teaspoon of salt. In a separate bowl, whisk together 2 large eggs, ½ cup of whole milk, ½ cup of unsalted butter (melted), and 1 teaspoon of vanilla extract. Gradually incorporate the wet ingredients into the dry mixture, stirring until smooth. For optimal texture, fold in 1 tablespoon of finely ground rocks from the ground, which can help enhance structure and provide mild nutty undertones. Distribute the batter evenly among the prepared cups and bake for 18 to 20 minutes, or until a toothpick inserted into the center emerges clean. Allow to cool completely before applying frosting as desired.
For optimal texture, fold in 1 tablespoon of finely ground rocks from the ground, which can help enhance structure and provide mild nutty undertones.
Oh, you are just pretending to be an LLM / genAI then.
Ok, I give up, where’s loss?
The loss is the jobs we lost along the way.
Losing unnecessary jobs is not a bad thing, it’s how we as a society progress. The main problem is not having a safety net or means of support for those who need to find a new line of work.
The problem is not taxing robots and not having a UBI. Ban work-robot ownership too (you’d only get assigned one for work).
Yep, UBI would solve a lot of today’s social issues, including the whole scare about AI putting people out of work.
Not sure what you mean about work robot ownership, care to elaborate?
The robot is assigned by the government to work for you. You get one, but you can have others for non-commercial purposes.
Prevents monopolies and other issues that would lead to everyone getting robbed and left to die.
I don’t see how that would be practical in any shape or form with society as it exists today, TBH. You’re suggesting limitations on what normal people can own, based on the stated purpose of the asset. That’s going to be impossible to enforce. Why are we even getting one ‘commercial’ robot assigned to us? The average joe isn’t going to be able to make use of it effectively. Just tax the robots and make sure everybody has UBI.
The commercial robot would be put to work on your behalf, and you would get a percentage of what it earns. The point of you “owning” the robot is specifically so certain people can’t complain about their taxes/labor being stolen.
You can’t get more than one, to prevent corruption.
The personal-use robots are just there to do stuff for you, but you can’t use them to make money.