Google DeepMind takes AI closer to human capacity in complex math

Google DeepMind takes AI closer to human capability in solving complex mathematics.

The post Google DeepMind takes AI closer to human capacity in complex math appeared first on ReadWrite.

Google DeepMind has taken a big step toward bringing artificial intelligence (AI) in line with human capability to solve complicated mathematics.

Researchers paired two new systems, known as AlphaProof and AlphaGeometry 2, tasking them with questions from the International Mathematical Olympiad. The global maths contest for advanced high school students has been running since 1959 and comprises six extremely difficult questions each year. Topics include algebra and geometry, and a gold medal puts winners on a pedestal alongside the best and brightest young mathematicians in the world.

While the results from the AI systems were impressive, they weren't quite at the standard of the most intelligent humans at this level, not yet anyway. The Google DeepMind 'team' racked up 28 of the 42 points available, one point short of a gold rating, and had to settle for silver.

Understandably, and unlike typical human performance, the answers submitted by DeepMind's AlphaProof and AlphaGeometry 2 were either perfect or pitiful. The AI solved four questions flawlessly, taking top marks on each, but on the other two it scored nothing: the technology could not even begin to work out an answer.
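The arithmetic behind that score is straightforward: each IMO problem is marked out of 7 points, so four perfect solutions yield 28 of the 42 available, and the "one short of gold" remark implies a gold cutoff of 29 that year. A quick sketch of that calculation:

```python
# IMO marking: 6 problems, each scored out of 7 points (42 total).
PROBLEMS = 6
POINTS_PER_PROBLEM = 7
total_available = PROBLEMS * POINTS_PER_PROBLEM

perfect_solutions = 4  # questions the AI systems solved for full marks
score = perfect_solutions * POINTS_PER_PROBLEM

# "One short of the number required for a gold rating", per the article.
gold_cutoff = score + 1

print(total_available, score, gold_cutoff)  # 42 28 29
```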

Building a bridge between spheres

Another key point to note is that the DeepMind experiment effectively had no time limits. Some questions were answered in seconds, while others took three days of round-the-clock computation. Human competitors in the Olympiad, by contrast, have a maximum of nine hours to complete the test.

The two AI systems paired by researchers are said to be very different. AlphaProof, which answered three of the questions, works by pairing a large language model (of the kind used in chatbots) with a specialist "reinforcement learning" technique. AlphaGeometry 2 pairs an LLM with a focused, mathematically inclined approach.

Thomas Hubert, lead researcher on AlphaProof, stated: "What we try to do is to build a bridge between these two spheres so that we can take advantage of the guarantees that come with formal mathematics and the data that is available in informal mathematics."
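To illustrate the "guarantees" side of that bridge: in a proof assistant such as Lean (used here purely as an illustrative example; the article does not name the formal system DeepMind worked in), every step of a proof is machine-checked, so any proof the checker accepts is guaranteed correct. A trivial formal statement looks like this:

```lean
-- A machine-checked statement: addition of natural numbers is commutative.
-- If this compiles, the proof is correct by construction.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Informal mathematics, by contrast, is the natural-language reasoning an LLM learns from; the bridge Hubert describes lets a model propose candidate steps informally while the formal system accepts only those that verify.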

Image credit: Via Ideogram
