/r/OpenAI
OpenAI is an AI research and deployment company. OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity. We are an unofficial community. OpenAI makes ChatGPT, Sora, and DALL·E 3.
Any topic, any question. I just want to see a bunch more real outputs to judge what kind of functionality we are working with here.
I'm talking in-depth groundbreaking interpretations. Give us quantum-level analysis.
Also, I have questions,
What is up with the pouches thing? Who might "the board" be? Do I have a shot with Britt Lower?
Normal stuff like that; add your own if it helps.
Please and thank you.
Hey folks,
I ran a script processing GPT requests for about four hours without issues, but now every request instantly hits a rate limit error.
I checked my usage on the developer platform: I haven't hit my spending cap, and there's no explicit daily token limit that I can see. The script was running fine before, so I know it's not a general rate-limit issue.
Is there a hidden rolling limit or cooldown I might be missing? Any way to resolve this besides waiting?
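If it helps, here's the kind of exponential-backoff wrapper I'm considering adding around the calls; a minimal sketch assuming the openai>=1.0 Python SDK, with the model name and prompt as placeholders:

```python
import time
import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

def complete_with_backoff(prompt: str, max_retries: int = 6):
    """Retry on rate-limit errors with exponential backoff."""
    delay = 1.0
    for _ in range(max_retries):
        try:
            return client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder; use whatever model the script hits
                messages=[{"role": "user", "content": prompt}],
            )
        except openai.RateLimitError:
            time.sleep(delay)  # rolling per-minute limits often clear quickly
            delay *= 2
    raise RuntimeError("Still rate limited after retries")
```

My understanding is that the per-minute request/token limits are rolling windows, so a burst of calls can trip them even when the daily spend looks fine, which would explain why backoff helps.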
I want to enable both reasoning with o3-mini and search in ChatGPT (like DeepSeek does). But if I enable both, I only see reasoning results, not web results.
If they add this feature, it's game over for Perplexity.
Don't get me wrong, when it actually does work it's fantastic. But I am finding it so difficult to get it to actually start a task. It keeps asking me the same questions over and over ("just to be sure", "to confirm"), and it always promises it will get back to me yet never does.
If they need to ration this because of bandwidth, that is understandable, but a little transparency about its internal queue/throttle system would be nice.
I go back and forth on whether I actually enjoy custom instructions in any of the models, but I kind of feel like they disrupt the response patterns the model would otherwise give.
Just to quell the "AGI incoming" and "AI will soon make huge physics/math discoveries" hype a little bit. This problem is certainly not THAT easy, but it is a standard QM problem with a well-known result that many QM textbooks cover. It was part of my homework, and I sat down and proved it fairly quickly, in about an hour. Keep in mind it is a lot easier to just reprove something if you already know how; that hour includes time spent mentally "wandering around in the dark" trying different paths, and it also took a little while to do the brute-force calculation while keeping track of all the terms.
o3-mini got the wrong answer over and over, despite my attempts to tell it that its answer was not correct. I will point out that DeepSeek R1 also failed in all my attempts (5+ on both models) to get it to solve the problem. The only model that got the correct answer was Gemini 2.0 Flash Thinking Experimental 01-21 (at temperature 0), and it took 40 seconds to solve it.
The prompt is the following: "Calculate the second order energy correction for a perturbation c*x^3 to a quantum harmonic oscillator (the first order correction vanishes)."
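For reference, so you can check a model's output: writing x in ladder operators, x = sqrt(hbar/(2m omega)) (a + a†), only the |n±1> and |n±3> intermediate states contribute, and the standard textbook closed form is:

```latex
% Second-order correction for the perturbation H' = c x^3
% on a harmonic oscillator of mass m and frequency \omega.
E_n^{(2)} = \sum_{k \neq n} \frac{\left|\langle k \,|\, c\,x^3 \,|\, n \rangle\right|^2}{E_n^{(0)} - E_k^{(0)}}
          = -\frac{c^2}{\hbar\omega}\left(\frac{\hbar}{2m\omega}\right)^{3}\left(30n^2 + 30n + 11\right)
```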
I'd be interested to see if any of you can get a correct solution out of o3 or another model I haven't mentioned (Sonnet is horrendous at physics in my experience).
(That last part of the prompt in parentheses is a tip that might help it reach the solution faster, but it's certainly not difficult to show, so it's definitely not necessary.)
I'd be shocked if Deep Research with o3 couldn't figure it out (given that Flash Thinking could).
(All of this obviously points to the hallucination problem and the lack of a fundamental, unalterable ground-truth base of knowledge for LLMs: at the end of the day they are statistical, even if some bias toward truth has been trained into the model.)
Sam Altman gave a lecture at the University of Tokyo, and here is a brief summary of the Q&A.
Q. What skills will be important for humans in the future?
A. It is impossible for humans to beat AI at mathematics, programming, physics, and so on, just as a human can never beat a calculator. In the future, everyone will have access to the highest level of knowledge. Leadership will be more important: how to set a vision and motivate people.
Q. What is the direction of future development?
A. GPT-3 and GPT-4 came from the pre-training paradigm. GPT-5 and GPT-6, which will be developed in the future, will use reinforcement learning to discover new algorithms and new science in physics, biology, and other fields.
Q. Do you intend to release an open-source model, as OpenAI, in light of DeepSeek, etc.?
A. The world is moving in the direction of open models. Society is also approaching a stage where it can accept the trade-offs of an open model. We are thinking of contributing in some way.
I have seen the Deep Research demo: you write a question, get a clarifying question from the agent, and then the research commences in the sidebar, with the results appearing after several minutes, something like 10.
For example here
https://youtu.be/xkFPpza_edo?t=214
My Chat is here https://chatgpt.com/share/67a12b62-5a0c-8011-a3da-6e9fb17e2c4d
The problem is that it does not ask a follow-up question and does not say that it will begin research. After I ask the question, it immediately generates a report that has no numbers, no references, and is generally just a placeholder.
Is this some kind of bug or maybe OpenAI is out of resources somehow?
I am on the Pro plan.
the title
With anything larger than a few pages, my projects and canvases slowly start to fail until they become unusable or problematic. Would Pro help with this issue instead of just using Plus?
I know there are websites with publications I can read for free on the internet, but what I want is a list of book recommendations from the smartest AI specialists in this community, and if possible, tell me why you recommend the books on your list.
I was just on ChatGPT today and noticed that 4o was "reasoning." Does that mean it works like o3?
DeepSeek recently (last week) dropped a new multi-modal model, Janus-Pro-7B. It outperforms or is competitive with Stable Diffusion and OpenAI's DALL·E 3 across multiple benchmarks.
Benchmarks are especially iffy for image generation models, so I've copied a few examples below. For more examples, check out our rundown here.
Which model do you use to work on blog posts, etc.? Are reasoning models the right choice, or 4o? 🫣 Thanks in advance.
I am a PhD student debating paying for the GPT Plus subscription. I do not make a lot of money which is why I have to give so much thought to spending $20/month.
My primary use cases are the following:
I have been using the free tier but am worried that I am flying too close to the sun and will hit a file upload limit. I am also very intrigued by the advanced voice mode in the Plus tier. Specifically, I want to use that feature to give me detailed voice summaries of academic articles while I am on the go.
I have two specific questions:
Thank you so much for reading.
P.S. I tried using the DeepSeek-R1 model locally for all of these features (except voice), but my machine is not capable enough to run it well; I end up having to wait on the model for a long time.