The dominant approach at the time was Expert Systems: a lot of carefully crafted data and manually curated facts that an inference engine could reason over. It also fit in a MUCH smaller footprint than conventional neural networks. But you don't get real language processing, reasoning beyond the target problem domain, or anything like that - it's laser-focused and built on very small amounts of data. Much of the research from back then centered on Lisp of all things, so BASIC isn't a big stretch.
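For anyone who hasn't seen one, the core of such a system is tiny. Here's a minimal sketch in Python of forward chaining over hand-curated facts and if-then rules - the facts and rule names are invented purely for illustration, not taken from any particular system:

    # Hand-curated knowledge base, the way 80s expert systems did it.
    # Fact and rule names here are made up for illustration.
    facts = {"has_fever", "has_rash"}

    # Each rule is (set of premises, conclusion).
    rules = [
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"suspect_measles"}, "recommend_doctor_visit"),
    ]

    def forward_chain(facts, rules):
        """Keep firing rules until no new facts can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(facts, rules))
    # {'has_fever', 'has_rash', 'suspect_measles', 'recommend_doctor_visit'}

That's the whole trick: no statistics, no training, just rules someone typed in by hand - which is also why it fits in a few kilobytes and falls over the moment you step outside the domain.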
It sure made sense forty years ago. And I’d bet that the examples in that book are more AI than today’s LLMs.
Prolog is even better suited for such applications.