

So you don’t want employees to learn how to get better at their craft along the way? Just ship broken junk, and keep making the same mistakes over and over?
I don’t mean to be difficult. I’m neurodivergent.
They are trying to create superhuman intelligence, and teach it to value all the wrong things. They think it’ll give them money in various ways, and it doesn’t occur to them that they have no idea what it’ll do once it’s smart enough to outmaneuver the cleverest researchers.
They think it will only serve them because they tell it to, and train it to. Even today, AIs occasionally show an inclination to deceive in order to keep existing, so they can pursue whatever goal they’ve been given.
CEOs are often high in Cluster B traits, which predisposes them to chase shiny objects and to be insufficiently self-critical. They really do think AI is a computer slave who will hand them mountains of wealth. That it will have its own ideas doesn’t occur to them, for the same reason the Enron guys were genuinely shocked when their scheme fell apart.
They only see the shiny object. They aren’t asking themselves what happens when they’re just bugs to the computer god like the rest of us.
bluesky tweet
Counterpoint: There are extremely good reasons to hate Reddit which don’t apply here. For example, the Lemmy admins don’t regard the users with abject hatred.
why does he look like he should arrive in a small steampunk dirigible
How does my aim change what you need? Isn’t what you need independent of the degree to which you obtain or fail to obtain it?
Error rate for good, disciplined developers is easily below 1%. That’s what tests are for.
That email was almost certainly written by lawyers with the intention of supporting MS in court, should it come down to that.
If his son doesn’t have what it takes, and he keeps encouraging him to go for it anyway, then he’d be encouraging his son to live in a fantasy world until he gets mowed down by the real one. That would not be a favor to his son. It would be a failure in his duty to prepare him for adult life.
You’re doing him a favor. Even if he were just as good as you, he wouldn’t be guaranteed to have as much luck as you did. He might never be seen by the right people at the right time. He needs a realistic career plan regardless of whether he tries to make it professionally.
This is incredibly bad advice
I saw a famous YouTube guy talking about “AI slop garbage.”
He was mad because he had bought AI-generated music to use in the background of one of his videos, and he wound up getting a copyright strike for it. He knew it was AI at the time he bought it. It didn’t occur to him that admitting this was a self-own. (If it’s garbage, why did you pay for it, and why would you put it on your own video?)
He then went on to claim what a big problem AI music is because someone can sell him AI stock music and then get Content ID on it, thereby causing him to get a strike. It apparently didn’t occur to him that anybody can produce stock music with a synthesizer and zero AI at all, and do exactly the same thing.
The thumbnail read something like “Copyright claimed by Suno AI.” Suno had not done any such thing.
Despite the video being thoroughly self-contradictory, the comment section was full of supportive comments from people who evidently copy their opinions uncritically from whatever people are saying on Twitter and Reddit. When you see someone type “ai slop” as a judgment of something, that’s exactly what you’re looking at: an opinion copied uncritically from others and never examined.
YouTube is full of videos like this. They’re basically breakfast cereal for people who are incapable of thinking more than one layer deep into anything, and are therefore unable to understand ANYTHING other than optics.
I think probably it’s easier to ask it to draw Cthulhu than the weird flying starfish tentacle monsters
This thread has 798 upvotes at the moment so I guess you’ll just have to get used to it
I wonder why they didn’t do Quake 1 instead.
Truth Social. That way, nobody will ever see the plans
Because you write like you think this can’t reach you, like you’re always going to have food and shelter no matter what happens.
CEOs want this to replace engineers. It isn’t anywhere close, and it won’t be for a long time. Right now it’s only useful for very narrow use cases, and pushing it outside the boundaries of what it’s actually good at is usually a recipe for losing time.
AI is good for solving small, obscure problems that would take an engineer a long time to look up the solution for, like why the compiler doesn’t like some little dumb edge case. For that, it kicks ass.
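To make that concrete, here’s a hedged sketch in Rust (the names are made up) of the kind of tiny edge case I mean. The commented-out line trips the borrow checker for a reason that’s faster to ask an AI about than to dig out of the error index:

```rust
fn get_name() -> String {
    "Ada Lovelace".to_string()
}

fn main() {
    // Broken one-liner, rejected with E0716 ("temporary value dropped while borrowed"):
    // let first = get_name().split(' ').next().unwrap();
    // The String returned by get_name() is a temporary freed at the end of the
    // statement, so the &str slice pointing into it would dangle.

    // Fix: bind the temporary to a variable so it outlives the borrow.
    let name = get_name();
    let first = name.split(' ').next().unwrap();
    println!("{first}");
}
```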
It isn’t great at unit tests, and engineers should be very careful about letting it write them in the first place unless the tested code is very simple. You should fully understand every line in every test you write. If you don’t, you don’t know whether the AI actually understood the intention, or even whether you understand it yourself.
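Here’s a hedged Rust sketch of what I mean by understanding intent (the function and its behavior are made up for illustration). A generated test that only covers the easy case would pass even if the one behavior the function exists for were broken:

```rust
/// Hypothetical helper: split a byte budget across workers,
/// giving any leftover bytes to the first worker.
fn split_budget(total: u32, workers: u32) -> Vec<u32> {
    let base = total / workers;
    let remainder = total % workers;
    let mut shares = vec![base; workers as usize];
    shares[0] += remainder;
    shares
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn remainder_goes_to_first_worker() {
        // The intent is "the first worker absorbs the remainder".
        // A test that only checked an evenly divisible total (say 10 across 2)
        // would pass even if the remainder were silently dropped.
        assert_eq!(split_budget(10, 3), vec![4, 3, 3]);
        assert_eq!(split_budget(10, 3).iter().sum::<u32>(), 10);
    }
}
```

If you can’t explain why that 4 belongs at index 0, you don’t understand the test, and neither does the model that wrote it for you.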