I don’t understand why it’s so hard to sandbox an LLM’s configuration data from its training data.
What do you mean by “configuration data?”
The data used to configure it.
Because it’s all one thing. The promise of AI is that you can basically throw anything at it, and you don’t need to understand exactly how/why it makes the connections it does; you just adjust the weights until it kinda looks alright.
There are many structural hacks used to give it better results (and in this case some form of reasoning) but ultimately they’re mostly relying on connecting multiple nets together and retrying queries and such. There’s no human understandable settings. Neural networks are basically one input and one output (unless you’re training it).
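To make that concrete (a toy sketch, not anything from the thread): a “model” is really just arrays of weights. There’s no separate configuration store inside to sandbox; the behavior *is* the numbers.

```python
import random

# A tiny one-hidden-layer network: just lists of numbers (weights).
# There is no named, human-readable "setting" in here to isolate;
# the behavior IS the weights.
def make_net(n_in=2, n_hidden=3):
    return {
        "w1": [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)],
        "w2": [random.uniform(-1, 1) for _ in range(n_hidden)],
    }

def forward(net, x):
    # One input vector in, one number out; ReLU hidden layer.
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in net["w1"]]
    return sum(w * h for w, h in zip(net["w2"], hidden))

net = make_net()
print(forward(net, [1.0, 0.5]))  # some number; which weight "means" what is opaque
```

Training just nudges those floats until the outputs look right; nothing in the structure tells you which weight encodes what.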
OpenAI? CloseAI. Open, not like a library, but like a Sandworm’s mouth.
I wish
did anyone ever actually assume that “open” wasn’t a lie?
When I heard about it first, I thought it was some open source project, because of the name. :(
It was, originally. GPT-2 was eventually released after some pushback against OpenAI, and the models prior to that were fully released immediately. It’s been apparent for quite a while that OpenAI has been transitioning from a non-profit org interested in pushing technology forward to a VC-backed, monopoly-seeking company. The big Altman putsch/counter-putsch was just the solidifying of that.
Please be applicable to business accounts, Please be applicable to business accounts, Please be applicable to business accounts, Please be applicable to business accounts,
I want to get rid of this shit so bad. If another junior dev submits a shit MR they can’t explain because they had ChatGPT write it, I’m going to explode. Also, the number of AI executives we have in charge of our manufacturing company is somehow greater than the number we have in charge of manufacturing, and guess what?! They’re all MBAs who haven’t written a goddamn line of code in their lives but have become professional “prompt engineers”.
Do they not test them before submission?
I’ve met someone employed as a dev who not only didn’t know that the compiler generates an executable file, but actually spent a month trying to change the code without noticing that none of their changes were having any effect whatsoever (because they kept running an old build of mine).
They probably tested in ideal circumstances, and their stuff breaks down as soon as it comes anywhere near an edge case.
I would be really interested in learning a language. The AI assistance method actually meshes very well with my learning style. I would never submit anything to anyone that I was not certain was good working code, though. My brain wouldn’t let me do it. Now I just need to choose a language.
I applaud your ethics. But you don’t know how close you are to falling from grace.
Just yesterday I had to remove perfectly tested, sensible, non-AI code from our production system, not because it did not do what the author intended, but because what the author intended was flawed. And this is exactly what AI also cannot teach you right now: taking a step back to realize that your code might be right, but your intentions are not.
Definitely keep at it. But be aware that you will do the wrong things even with perfectly working code.
Yeah, the code can work flawlessly in test, but after a few months of production there are a lot more records or files and the code starts to have issues.
They probably don’t know how to get it to run.
Every time I hear someone talking up prompt engineering, I feel like I should say something. But I don’t.
“Prompt engineering” must be the easiest job to replace with AI. You can simply ask an LLM to generate and refine prompts.
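The joke practically writes itself as code. A toy version of that refinement loop (sketch only; `call_llm` and `score` are stubs I’m inventing here to stand in for a real model API and a real evaluation):

```python
# Sketch of "prompt engineering by LLM": repeatedly ask a model to rewrite a
# prompt and keep whichever version scores best. Both helpers are stubs;
# a real setup would call an actual model API and evaluate on a task set.
def call_llm(instruction: str, text: str) -> str:
    # Stub: a real implementation would send instruction + text to a model.
    return text + " Be concise and show your reasoning."

def score(prompt: str) -> float:
    # Stub quality metric; in practice, measure task performance instead.
    return min(len(prompt), 120) / 120

def refine_prompt(prompt: str, rounds: int = 3) -> str:
    best, best_score = prompt, score(prompt)
    for _ in range(rounds):
        candidate = call_llm("Improve this prompt:", best)
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return best

print(refine_prompt("Summarize this article."))
```

The whole “job” collapses into a loop around two function calls, which is rather the point.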
OpenAI: Here’s a new model that can think in steps and reason about things!
User: How did you conclude this is the correct answer?
OpenAI: No! Not like that! banhammer