• mashbooq@infosec.pub
    2 months ago

    I’m not trained in formal computer science, so I can’t evaluate the quality of this paper’s argument, but there’s a preprint out that claims to prove that current computing approaches will never be able to scale up to AGI. Rather than accelerating, improvements will only slow down, because each incremental advance requires an exponential increase in resources (the underlying problem is NP-hard). That doesn’t prove LLMs are the end of the line, but it does suggest that further improvements are likely to be marginal.

    Reclaiming AI as a theoretical tool for cognitive science