OpenAI releases reasoning AI with eye on safety, accuracy
San Francisco, Sept 12 (AFP) Sep 12, 2024
ChatGPT creator OpenAI on Thursday released a new series of artificial intelligence models designed to spend more time thinking -- in the hope that its generative AI chatbots will provide more accurate and beneficial responses.

The new models, known as OpenAI o1-preview, are designed to tackle complex tasks and solve more challenging problems in science, coding and mathematics -- something earlier models have been criticized for failing to do consistently.

Unlike their predecessors, these models have been trained to refine their thinking processes, try different methods and recognize mistakes before they deliver a final answer.

The new release comes as OpenAI is raising funds that could see it valued around $150 billion, which would make it one of the world's most valuable private companies, according to US media.

Investors reportedly include Microsoft and Nvidia, and the round could also include a $7 billion investment from MGX, a United Arab Emirates-backed investment fund, The Information reported.

OpenAI CEO Sam Altman hailed the models as "a new paradigm: AI that can do general-purpose complex reasoning."

However, he cautioned that the technology "is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it."

OpenAI's push to improve "thinking" in its models is a response to the persistent problem of "hallucinations" in AI chatbots.

This refers to their tendency to generate persuasive but incorrect content, a problem that has somewhat cooled business customers' excitement over ChatGPT-style AI features.

"We have noticed that this model hallucinates less," OpenAI researcher Jerry Tworek told The Verge.

But "we can't say we solved hallucinations," he added.

The Microsoft-backed company said that in tests, the models performed comparably to PhD students on difficult tasks in physics, chemistry and biology.

They also excelled in mathematics and coding, achieving an 83 percent success rate on a qualifying exam for the International Mathematical Olympiad, compared to 13 percent for GPT-4o, its most advanced general-use model.

OpenAI said that the new reasoning capabilities could be used by healthcare researchers to annotate cell sequencing data, by physicists to generate complex formulas, or by software developers to build and execute multistep designs.

The company also said that the models survived rigorous jailbreaking tests and could better withstand attempts to circumvent their guardrails.

OpenAI said its strengthened safety measures also included recent agreements with the US and UK AI Safety Institutes, which were granted early access to the models for evaluation and testing.


