Model IQ Over 120! Writes NASA PhD Code in 1 Hour, Top 0.2% in Coding Competition

A recent demonstration of the OpenAI model designated o1 has captured the attention of the academic community and tech enthusiasts alike. Dr. Kyle Kabasares, a physics PhD from the University of California, Irvine (UCI), shared his astonishment at the AI model's performance: o1 produced, in about one hour, a running version of Python code that had taken him roughly a year to develop for his PhD thesis.

He made the result public in a video in which he exclaimed, visibly shocked, "oh my god" as the AI generated approximately 200 lines of code from the methodology section of his research paper within six prompts. Although the code ran on synthetic rather than real astronomical data, the speed and capability of the o1 model were remarkable.

The video quickly garnered widespread attention on social media, showcasing Kabasares' disbelief and excitement at the AI's performance. Alongside the code-generation feat, recent IQ testing indicated that the o1 model scored above 120 on a 35-item test, outperforming many other models.

Interestingly, o1 is currently available only as a preview version, and OpenAI researchers, including David Dohan, have hinted that a more advanced version will be released within the next month. This prospect raises questions about the potential capabilities of future iterations of the model.

In a separate test, Kabasares posed to o1 a set of astrophysics questions designed by his professors. The AI reportedly answered several of them correctly in remarkably short order, demonstrating advanced problem-solving ability in a domain typically reserved for graduate-level study.

The programming community has responded enthusiastically as well. In a recent live coding competition, a contestant known as AryanDLuffy used the o1-mini model and placed in the top 0.17% of more than 160,000 participants. This result suggests the model is nearing "master-level" performance, as noted by Mark Chen, OpenAI's research director.

Despite these advances, concerns have arisen about the implications of using AI in competitive programming. Mike Mirzayanov, the founder of Codeforces, has introduced new rules forbidding the use of AI models to solve competitive programming problems: participants may use AI for help with translation and syntax, but the core logic and debugging must remain the responsibility of the human competitor.

In light of these developments, experts and researchers in the AI field are closely analyzing the efficacy and underlying mechanisms of o1. Preliminary discussions point to self-improvement and learning capabilities that may redefine human-computer interaction in problem-solving contexts.

The ongoing investigation of such groundbreaking AI models continues to intrigue the technology sector, raising the question of how AI like o1 will affect future academic and competitive environments. As updates and research around o1 unfold, the implications for education, coding competitions, and beyond remain a topic of significant interest.
