Not all the time. I can think about abstract concepts with no language needed whatsoever. Like when I’m working on my car. I don’t need to think to myself “Ah this bolt is the 10mm one that went on the steering pump”, I just recognize it and put it on.
Programming is another area like that. I just think about a particular concept itself. How the data will flow, what a function will do to it, etc. It doesn’t need to be described in my head with language for me to know and understand it. LLMs cannot do that.
A toddler doesn’t need to understand language to build a cool house out of Lego.
Well, you’d just have to give the LLM (or, better said, a general machine learning algorithm) a body with vision and arms, as well as a way to train in that body.
I’d say that would look like AGI.
The key is more efficient training algorithms that don’t need a whole server centre to train 😇 I guess we will see in the future if this works.
Such a software construct would look nothing like an LLM. We’d need something that matches the complexity and capabilities of a human brain before it’s even been given anything to learn from.
I have already learned a lot from the human knowledge the LLM was trained on (and yes, I know about hallucinations, and of course I fact-check everything), but learning coding with an LLM teacher fucking rocks.
Thanks to Copilot, I “understand” Linux kernel modules and what is needed to backport them, for example.
Of course, the training data contains all that information, and the LLM is able to explain it in a thousand different ways until anyone can understand it.
But flip that around.
You could never explain a brand new concept to an LLM which isn’t already contained somewhere in its training data. You can’t just give it a book about a new thing, or have a conversation about it, and then have it understand it.
A single book isn’t enough. It needs terabytes of redundant examples and centuries of CPU time to model the relevant concepts.
Where a human can read a single physics book, and then write part 2 that re-explains and perhaps explores newly extrapolated phenomena, an LLM cannot.
Write a completely new OS that works in a completely new way, and there is no way you could ever get an LLM to understand it by just talking to it. To train it, you’d need to produce those several terabytes of training data about it, first.
And once you do, how do you know it isn’t just pseudo-plagiarizing the contents of that training data?
Well, the issue is that LLMs do not support real-time learning at all. If they were able to learn in real time, building on the base knowledge from their training data, I suppose they could understand a physics book even better than a normal human reading it once.
A human without pre-training is not able to understand a physics book without help. He wouldn’t even be able to read.
If someone finds a way to train an LLM in real time, and to have it decide what weight each new piece of training data should be given, I see all of the above as possible (see the sketch below).
And of course, if humanity ever creates something that behaves like AGI, humanity would not be able to tell whether it is emulated AGI or real AGI. There is no known method to differentiate the two.
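A minimal sketch of that idea, assuming a PyTorch-style setup with a toy linear model standing in for the LLM; the per-example weight here is fixed by hand rather than chosen by the model, and everything is illustrative only:

```python
# Illustrative only: not how current LLMs work, just the idea of
# "real-time" learning where each new example gets its own importance weight.
import torch
import torch.nn as nn

model = nn.Linear(16, 16)  # toy stand-in for an LLM
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def online_update(x, y, weight):
    """One weighted gradient step on a single new example."""
    optimizer.zero_grad()
    loss = weight * loss_fn(model(x), y)  # weight scales how much this example counts
    loss.backward()
    optimizer.step()

# Stream "new data" in one example at a time, each with a chosen weight.
for _ in range(3):
    x, y = torch.randn(1, 16), torch.randn(1, 16)
    online_update(x, y, weight=0.5)  # a real system would have to pick this weight itself
```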
You have no fucking idea what you’re talking about. This isn’t even a discussion; you’re presenting your personal made-up fantasies as if they’re real possibilities and ignoring anyone who points that out.
Shut the fuck up and go learn how LLMs work. I’m too fucking tired of explaining how completely delusional you are.