
FrontierMath Benchmark Exposes AI Struggles in Advanced Math

eWeek 

Artificial intelligence is proving its value for generating text, recognizing images, and automating processes, but AI systems are hitting a wall when it comes to advanced mathematical reasoning. FrontierMath, a trailblazing new benchmark from research firm Epoch AI, found that even today’s most advanced AI systems, including GPT-4o and Gemini 1.5 Pro, solved fewer than 2 percent of the math reasoning challenges they faced, even after long hours of work.

Benchmarks are needed to understand and measure AI’s progress. According to Epoch AI, FrontierMath “can assess how well AI systems engage in complex scientific reasoning” because “mathematical problems can be rigorously and automatically verified,” unlike areas where evaluation depends on subjective judgment or expensive testing.
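
As a rough illustration of why exactly specified answers allow automatic verification, here is a minimal Python sketch of a grading check; the harness, problem ID, and answer value are invented for illustration and are not Epoch AI’s actual infrastructure:

# Hypothetical grading sketch: when each problem's answer is a single
# exact value, checking a submission is a plain equality test with no
# human judgment involved. Problem IDs and answers below are invented.
REFERENCE_ANSWERS = {
    "problem_017": 367_296_043,  # invented reference answer
}

def grade(problem_id: str, submitted: int) -> bool:
    """Return True only if the submission matches the stored answer exactly."""
    return REFERENCE_ANSWERS.get(problem_id) == submitted

print(grade("problem_017", 367_296_043))  # True
print(grade("problem_017", 367_296_042))  # False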

How the Models Performed

Epoch AI provides sample problems of the kind that expert mathematicians spend hours solving, such as testing cases of Artin’s primitive root conjecture or constructing a degree-19 polynomial with prescribed properties. Current AI models were given “extensive support to maximize their performance” before undertaking the problems, including access to Python environments for testing and verification (see the sketch below). However, that support wasn’t enough.
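
As a loose illustration of the kind of numerical check a model could run in such a Python environment, the following self-contained sketch estimates how often 2 is a primitive root modulo a prime, the density question at the heart of Artin’s primitive root conjecture; it is illustrative only and is not an actual FrontierMath problem or reference solution:

# Empirically estimate the proportion of odd primes p <= 100,000 for which
# 2 is a primitive root modulo p (Artin's conjecture predicts a density of
# about 0.3740 over all primes). Standard-library Python only.

def primes_up_to(n: int) -> list[int]:
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, flag in enumerate(sieve) if flag]

def prime_factors(n: int) -> set[int]:
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def is_primitive_root(a: int, p: int) -> bool:
    # a generates (Z/pZ)* iff a^((p-1)/q) != 1 (mod p) for every prime q dividing p-1
    return all(pow(a, (p - 1) // q, p) != 1 for q in prime_factors(p - 1))

odd_primes = [p for p in primes_up_to(100_000) if p > 2]
hits = sum(is_primitive_root(2, p) for p in odd_primes)
print(f"2 is a primitive root for {hits} of {len(odd_primes)} primes (~{hits / len(odd_primes):.4f})")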

“FrontierMath has proven exceptionally challenging for today’s AI systems,” Epoch AI reported.

The AI systems scored above 90 percent on easier math benchmarks such as GSM8K and MATH, but only around 2 percent on FrontierMath’s advanced problems. All FrontierMath problems are previously unpublished, which eliminates the data contamination concerns that affect existing benchmarks.

In a blog post on the new benchmark, mathematician Evan Chen wrote that FrontierMath differs from traditional math competitions such as the International Mathematical Olympiad (IMO) or the Putnam in a few ways. IMO problems avoid specialized knowledge and complex calculations, while FrontierMath welcomes both. While all of them test for creative insight, he said, FrontierMath “outright invert(s)” two other properties prized in competition problem-setting: that a problem should not require heavy implementation and that it should be elementary.

“Because an AI system has vastly greater computational power,” Chen wrote, “it’s actually possible to design problems with easily verifiable solutions using the same idea that IOI or Project Euler does—basically, ‘write a proof’ is replaced by ‘implement an algorithm in code.’”
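
To make Chen’s point concrete, here is a toy example in that spirit: the deliverable is a single number produced by an algorithm, which a grader can check exactly, rather than a written proof. The task below is invented for illustration and is not drawn from FrontierMath:

# Toy "implement an algorithm" task: among starting values below 100,000,
# which one has the longest Collatz chain before reaching 1? The answer is
# a single integer that can be verified automatically.
from functools import lru_cache

@lru_cache(maxsize=None)
def collatz_length(n: int) -> int:
    """Number of Collatz steps from n down to 1."""
    if n == 1:
        return 0
    return 1 + collatz_length(n // 2 if n % 2 == 0 else 3 * n + 1)

answer = max(range(1, 100_000), key=collatz_length)
print(answer, collatz_length(answer))  # one exactly checkable result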

Evaluating AI Systems: What’s Next

To track whether AI systems are developing research-level mathematical reasoning capabilities, Epoch AI said it will take the following steps to make the benchmark more valuable as AI systems advance:

  • Regular evaluations of leading AI models
  • Expanding the benchmark
  • Releasing additional problems to the public
  • Strengthening quality control

Epoch AI said the FrontierMath benchmark was developed in collaboration with over 60 mathematicians from leading institutions. It spans the full spectrum of modern mathematics, from computational number theory to abstract algebraic geometry.

