Today, we present AlphaProof, a new reinforcement-learning based system for formal math reasoning, and AlphaGeometry 2, an improved version of our geometry-solving system. Together, these systems solved four out of six problems from this year’s International Mathematical Olympiad (IMO), achieving the same level as a silver medalist in the competition for the first time.

  • technocrit@lemmy.dbzer0.com · 4 months ago

    Artificial general intelligence (AGI) with advanced mathematical reasoning has the potential to unlock new frontiers in science and technology.

    The first sentence is completely irrelevant grifting. Red flag.

    First, the problems were manually translated into formal mathematical language for our systems to understand… Our systems solved one problem within minutes and took up to three days to solve the others.

    LMAO. If people translate the question into symbols, then ofc a computer can solve the problem in a few minutes.

    If you translate your budget into a spreadsheet, then a computer can calculate your surplus or deficit in microseconds. But the actually hard part is making the spreadsheet.

    • morrowind@lemmy.ml (OP) · 4 months ago

      Did you actually look at the problems, or even further down the page, before making these sweeping statements? Simply transforming them into formal mathematical language does not make the problems trivial. These aren’t arithmetic problems.
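      To see why formalization alone solves nothing, here is a toy Lean 4 sketch (an invented example, not one of the actual IMO problems). Writing the statement formally only pins down *what* must be proved; the proof itself, stubbed out here with `sorry`, is the part the solver still has to search for:

      ```lean
      -- Hypothetical olympiad-style statement in Lean 4 syntax.
      -- Translating the English statement into this form is the "spreadsheet"
      -- step; the proof term replacing `sorry` is the genuinely hard part.
      theorem toy_statement (n : ℕ) (h : 2 ≤ n) :
          n ^ 2 < 2 ^ n ∨ n ≤ 4 := by
        sorry
      ```

      The formal statement is a few lines; a machine-checkable proof can require thousands of inference steps that no amount of translation provides for free.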

      Despite failing on two of the problems, it did better than the majority of the contestants, who are some of the most talented math students in the world.

      The only major catch is that it did not finish within the allotted time; it went on for days. But once the method has been established, that’s a performance problem.

      DeepMind is one of the most respected labs in the AI space, and was so well before the modern generative AI trend. They’re not some random grifters.