Model IQ Over 120! Writes NASA PhD Code in 1 Hour, Top 0.2% in Coding Competition

By Rebecca Fraser
September 16, 2024
in News

A recent demonstration of the OpenAI model designated o1 has captured the attention of the academic community and tech enthusiasts alike. Dr. Kyle Kabasares, who earned his physics PhD at the University of California, Irvine (UCI), shared his astonishing experience with the model. After spending about a year developing the Python code behind his doctoral thesis, Kabasares found that o1 could produce a running version of it in just one hour.


This revelation was made public in a video where Kabasares, visibly shocked, exclaimed “oh my god,” as the AI generated approximately 200 lines of code from the methodology section of his research paper within six prompts. Although the code was based on synthesized data rather than actual astronomical data, the speed and capability of the o1 model were remarkable.
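
To make the kind of task concrete, the sketch below is a hypothetical, minimal Python example of the general pattern described: synthesize observations from a simple physical model, then fit the model back to the data. The specific model (a Keplerian rotation curve around a point mass), the parameter values, and the units are illustrative assumptions, not Kabasares' actual methodology or code.

```python
# Hypothetical sketch only: synthesize "observations" from a known model,
# then fit the model to recover its parameters. This is NOT Kabasares'
# thesis code; the physical model and all numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun


def keplerian_velocity(r_kpc, m_bh):
    """Circular velocity (km/s) at radius r_kpc (kpc) around a point mass m_bh (Msun)."""
    return np.sqrt(G * m_bh / r_kpc)


# Synthesize noisy velocity measurements from a known "true" mass.
rng = np.random.default_rng(seed=0)
true_mass = 2.5e9                      # Msun, chosen arbitrarily
radii = np.linspace(0.05, 0.5, 40)     # kpc
v_obs = keplerian_velocity(radii, true_mass) + rng.normal(scale=10.0, size=radii.size)

# Fit the same model back to the synthetic data to recover the mass.
popt, pcov = curve_fit(keplerian_velocity, radii, v_obs, p0=[1e9])
fitted_mass, mass_err = popt[0], float(np.sqrt(pcov[0, 0]))
print(f"Recovered mass: {fitted_mass:.2e} +/- {mass_err:.2e} Msun (true: {true_mass:.2e})")
```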

The video, which quickly garnered widespread attention on social media, showcased Kabasares’ disbelief and excitement at the AI’s performance. Alongside the impressive code-generation feat, recent IQ testing indicated that the o1 model scored above 120 on a 35-question test, outperforming many other models.

Interestingly, o1 is currently only in its preview version, and OpenAI researchers, including David Dohan, hinted at a more advanced version set to be released within the next month. This prospect raises questions about the potential capabilities of future iterations of the model.


In a separate test, Kabasares posed to o1 a set of astrophysics questions designed by his professors. The AI reportedly answered several of them correctly in remarkably short periods, demonstrating advanced problem-solving ability in a domain typically reserved for graduate-level academic challenges.

Furthermore, the response from the programming community has been enthusiastic. In a recent live coding competition, a contestant known as AryanDLuffy leveraged the o1-mini model and achieved a score ranking in the top 0.17% among over 160,000 participants. This achievement suggests that the model is nearing “master-level” performance, as noted by Mark Chen, OpenAI’s research director.
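
The article does not describe exactly how AryanDLuffy used the model during the contest. Purely as an illustration, and assuming access through OpenAI's public Python SDK and Chat Completions endpoint, prompting o1-mini with a problem statement might look roughly like the following sketch; the single-prompt workflow and the placeholder problem text are assumptions.

```python
# Hypothetical sketch: sending a competitive-programming problem statement
# to o1-mini via OpenAI's Python SDK. The single-prompt workflow and the
# placeholder problem text are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

problem_statement = """\
You are given an array of n integers. ...
(full problem statement pasted here)
Write a complete C++ solution that reads from standard input."""

response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": problem_statement}],
)

print(response.choices[0].message.content)  # the model's proposed solution
```

As the next paragraph notes, Codeforces' new rules would forbid delegating the core solution logic to a model in this way during rated contests.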

Despite these advancements, concerns have arisen about the implications of using AI in competitive programming. Mike Mirzayanov, Codeforces’ founder, has initiated new regulations forbidding the use of AI models to solve competitive programming problems, emphasizing that while participants may use AI for assistance with translations and syntax, the core logic and debugging must remain the responsibility of human competitors.

In light of these developments, experts and researchers from the AI field are keenly analyzing the efficacy and underlying mechanisms of o1. Preliminary discussions point to its potential for self-improvement and learning capabilities that may redefine human-computer interaction in problem-solving contexts.

The ongoing investigation of such groundbreaking AI models continues to intrigue the technology sector, raising the question of how AI like o1 will impact future academic and competitive environments. As updates and research around o1 unfold, the implications for education, coding competitions, and beyond remain a topic of significant interest.

Rebecca Fraser

Rebecca covers all aspects of Mac and PC technology, including PC gaming and peripherals, at Digital Phablet. Over the past ten years she has built multiple desktop PCs for gaming and content production, despite her educational background in prosthetics and model-making. She plays video and tabletop games, occasionally broadcasting to everyone's dismay, and enjoys dabbling in digital art and 3D printing.
