/r/OpenAI

OpenAI is an AI research and deployment company. OpenAI's mission is to ensure that artificial general intelligence benefits all of humanity. We are an unofficial community. OpenAI makes ChatGPT, Sora, and DALL·E 3.

Welcome to /r/OpenAI!

Please view the subreddit rules before posting.


Official OpenAI Links

Sora

ChatGPT

DALL·E 3

Blog

Discord

YouTube

GitHub

Careers

Help Center

Docs


Related Subreddits

r/artificial

r/ChatGPT

r/Singularity

r/MachineLearning

r/GPTStore

r/dalle2

2,220,265 Subscribers

1

Anybody have Deep Research sample outputs they are willing to share?

Any topic, any question. I just want to see a bunch more real outputs to judge what kind of functionality we are working with here.

1 Comment
2025/02/03
22:05 UTC

0

Deep Research Request: Cutting-Edge "Severance" Theories (as of 2/3/25) [SPOILERS!!]

I'm talking in-depth groundbreaking interpretations. Give us quantum-level analysis.

Also, I have questions:

What is up with the pouches thing? Who might "the board" be? Do I have a shot with Britt Lower?

Normal stuff like that; add your own if it helps.

Please and thank you.

0 Comments
2025/02/03
21:56 UTC

2

Is there a daily token limit with the API or something?

Hey folks,

I ran a script processing GPT requests for about four hours without issues, but now every request instantly hits a rate limit error.

I checked my usage on the developer platform: I haven't exceeded the spending threshold, and there's no explicit daily token limit that I can see. The script was running fine before, so I know it's not a general rate-limit issue.

Is there a hidden rolling limit or cooldown I might be missing? Any way to resolve this besides waiting?
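Sudden 429s after hours of successful requests are usually best handled with client-side exponential backoff rather than a fixed wait, since per-minute token and request windows refill on a rolling basis. A minimal sketch of that retry pattern (the function and parameter names here are hypothetical, not part of any SDK):

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(); on a retryable error, wait exponentially longer and try again."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retry_on:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # 1x, 2x, 4x, ... the base delay, with jitter so parallel
            # workers don't all retry at the same instant
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Wrapping each request this way (catching the client's rate-limit exception in `retry_on`) usually rides out rolling-window limits without any manual waiting.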

4 Comments
2025/02/03
21:39 UTC

1

How to enable both reasoning and search in ChatGPT?

I want to enable both reasoning with o3-mini and search in ChatGPT (like DeepSeek does). But if I enable both, I see only reasoning results, not web results.

If they add this feature, it's game over for Perplexity.

1 Comment
2025/02/03
21:29 UTC

1

Is Deep Research a stalling machine?

Don't get me wrong: when it actually works, it's fantastic. But I am finding it so difficult to get it to actually start a task. It keeps asking me the same questions over and over ("just to be sure", "to confirm"), and it always promises it will get back to me yet never does.

If they need to ration this because of capacity, that is understandable, but a little transparency about the internal queue/throttle system would be nice.

1 Comment
2025/02/03
21:25 UTC

4

Do you guys use custom instructions? If so, what are they?

I go back and forth on whether I actually enjoy custom instructions in any of the models; I kind of feel like they disrupt the model's usual response patterns.

2 Comments
2025/02/03
21:23 UTC

0

o3-mini still struggling with "standard" Quantum Mechanics problem

Just to quell the "AGI incoming" and "AI will soon make huge Physics/Math discoveries" hype a little. This problem is certainly not THAT easy, but it is a standard QM problem with a well-known result, and I think many QM textbooks cover it. It was part of my homework, and I sat down and proved it fairly quickly: about an hour, including time spent mentally wandering around in the dark trying different paths, plus a while doing the brute-force calculation while keeping track of all the terms (and keep in mind it is a lot easier to reprove something when you already know how).

o3-mini got the wrong answer over and over, despite my attempts to tell it that its answer was not correct. DeepSeek R1 also failed in all my attempts (5+ on both models). The only model that got the correct answer was Gemini 2.0 Flash Thinking Experimental 01-21 (at temperature 0), which took 40 seconds to solve it.

The prompt is the following: "Calculate the second order energy correction for a perturbation c*x^3 to a quantum harmonic oscillator (the first order correction vanishes)."

I'd be interested if any of you can get a correct solution with o3 or another model I haven't mentioned (Sonnet is horrendous at physics in my experience).

(The parenthetical at the end of the prompt is a hint that may help it reach the solution faster, but that fact is not difficult to show, so it's definitely not necessary.)

I'd be shocked if Deep Research with o3 couldn't figure it out, given that Flash Thinking could.

(All of this obviously points to the hallucination problem and the lack of a fundamental, unalterable ground-truth base of knowledge for LLMs; they are fundamentally statistical at the end of the day, even if some bias toward truth has been trained into the model.)
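For reference, the standard closed-form answer for this problem is E_n^(2) = -(c^2/ħω)(ħ/2mω)^3 (30n^2 + 30n + 11). A quick numerical cross-check in natural units (ħ = m = ω = 1, an assumption made only to simplify the sketch), summing the second-order perturbation series over the nonzero matrix elements of x^3:

```python
def x3_elements(n):
    """Nonzero matrix elements <m|x^3|n>, from x = alpha*(a + a_dag) with
    alpha = sqrt(hbar/(2*m*omega)) = sqrt(1/2) in natural units."""
    a3 = 0.5 ** 1.5  # alpha cubed
    elems = {
        n + 3: a3 * ((n + 1) * (n + 2) * (n + 3)) ** 0.5,
        n + 1: a3 * 3 * (n + 1) ** 1.5,
    }
    if n >= 1:
        elems[n - 1] = a3 * 3 * n ** 1.5
    if n >= 3:
        elems[n - 3] = a3 * (n * (n - 1) * (n - 2)) ** 0.5
    return elems

def e2_sum(n, c=1.0):
    """Second-order correction: sum over m != n of |<m|c*x^3|n>|^2 / (E_n - E_m),
    with E_k = k + 1/2, so the denominator is simply n - m."""
    return sum((c * v) ** 2 / (n - m) for m, v in x3_elements(n).items())

def e2_closed_form(n, c=1.0):
    """Textbook result with hbar = m = omega = 1: -(c^2/8)(30 n^2 + 30 n + 11)."""
    return -c * c * (30 * n * n + 30 * n + 11) / 8.0
```

The two agree for every level n, which is a handy ground truth to score model attempts against.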

1 Comment
2025/02/03
21:07 UTC

1

Operator against bots on Chess.com

1 Comment
2025/02/03
20:54 UTC

23

Sam Altman's Lecture About The Future of AI

Sam Altman gave a lecture at the University of Tokyo; here is a brief summary of the Q&A.

Q. What skills will be important for humans in the future?

A. It is impossible for humans to beat AI in mathematics, programming, physics, etc., just as a human can never beat a calculator. In the future, all people will have access to the highest level of knowledge. Leadership will become more important: how to set a vision and motivate people.

Q. What is the direction of future development?

A. GPT-3 and GPT-4 belong to the pre-training paradigm. GPT-5 and GPT-6, which will be developed in the future, will use reinforcement learning to discover new algorithms and new science in physics, biology, and other fields.

Q. Do you intend to release an open-source model as OpenAI, in light of DeepSeek, etc.?

A. The world is moving in the direction of open AI. Society is also approaching a stage where it can accept the trade-offs of an open model. We are thinking of contributing in some way.

Source (Japanese)

16 Comments
2025/02/03
20:49 UTC

5

Deep Research refusing to do research

I have seen the Deep Research demo: you write a question, get a clarifying question from the agent, and then the research runs in the sidebar, with results appearing after several minutes (something like 10).

For example, here:

https://youtu.be/xkFPpza_edo?t=214

My chat is here: https://chatgpt.com/share/67a12b62-5a0c-8011-a3da-6e9fb17e2c4d

The problem is that it does not ask a follow-up question and does not say it will begin research. After I ask the question, it immediately generates a report with no numbers and no references; it is generally just a placeholder.

Is this some kind of bug, or is OpenAI out of resources somehow?

I am on the Pro plan.

4 Comments
2025/02/03
20:48 UTC

70

Exponential progress - AI now surpasses human PhD experts in their own field

47 Comments
2025/02/03
20:13 UTC

0

Word on the street in SF: Anthropic has better models than OpenAI (o3), and probably has for many months now, but they're scared to release them

7 Comments
2025/02/03
18:13 UTC

0

Why doesn't Netflix attempt to build an AI like Sora and Veo?

the title

11 Comments
2025/02/03
17:44 UTC

2

With Pro, would I get more stable Projects/Canvas?

With anything larger than a few pages, my Projects and Canvas slowly start to fail until they become unusable or problematic. Would Pro help with this issue, compared to just using Plus?

0 Comments
2025/02/03
17:39 UTC

0

o3-mini is completely in denial about its chain-of-thought being visible

8 Comments
2025/02/03
17:22 UTC

4

Recommended Books for a software engineer who wants to learn AI, and what should I start learning?

I know there are websites with publications I can read freely on the internet, but what I want is a list of book recommendations from the smartest AI specialists in this community and, if possible, why you recommend the books on your list.

2 Comments
2025/02/03
16:42 UTC

29

Today I experimented with o3-mini-high in Python. I got this galaxy🌀 after three iterations and a little arty tweaking of the parameters in the resulting script. o3-mini - so cool! I can't wait for full o3 (ノ◕ヮ◕)ノ*:・゚✧

4 Comments
2025/02/03
16:40 UTC

38

o3-mini ties DeepSeek R1 for second place (behind o1) on the Multi-Agent Step Game benchmark which tests LLM strategic thinking, collaboration, and deception

3 Comments
2025/02/03
16:38 UTC

5

Is 4o a reasoning model now?

I was just on today and noticed that 4o was "reasoning," so does that mean it works like o3?

5 Comments
2025/02/03
16:21 UTC

0

Janus Pro 7B vs DALL-E 3

DeepSeek recently (last week) dropped a new multimodal model, Janus-Pro-7B. It outperforms or is competitive with Stable Diffusion and OpenAI's DALL·E 3 across multiple benchmarks.

Benchmarks are especially iffy for image-generation models, so I've copied a few examples below. For more examples, check out our rundown here.

https://preview.redd.it/c33nfi7i7yge1.png?width=675&format=png&auto=webp&s=28edc7fffd10b1d20876ff446731adba905b8ec3

https://preview.redd.it/08cplk7i7yge1.png?width=675&format=png&auto=webp&s=a21483188925938f59bd5d39ef7ae5addc9d76e7

https://preview.redd.it/ep08li7i7yge1.png?width=675&format=png&auto=webp&s=c665a48e979221cd18f6bda9fdd3e17a13d68ac3

0 Comments
2025/02/03
16:14 UTC

0

Content writing

Which model do you use to work on blog posts, etc.? Are reasoning models the right choice, or 4o? 🫣 Thanks in advance.

1 Comment
2025/02/03
15:45 UTC

108

Stability AI founder: "We are clearly in an intelligence takeoff scenario"

110 Comments
2025/02/03
15:34 UTC

572

Deep Research Replicated Within 12 Hours

80 Comments
2025/02/03
14:54 UTC

2

GPT-Plus subscription for a PhD student

I am a PhD student debating whether to pay for the GPT Plus subscription. I do not make a lot of money, which is why I have to give so much thought to spending $20/month.

My primary use cases are the following:

  • Reading academic papers
  • Coding in Python and R (occasionally in C#)
  • Thinking through research design ideas
  • Statistical analyses
  • Thinking through academic paper structure
  • Writing academic papers
  • (additional research adjacent stuff that I might be forgetting)

I have been using the free tier but am worried that I am flying close to the sun and will hit a file upload limit. I am also very intrigued by the idea of the advanced voice mode in the Plus tier. Specifically, I am trying to use that feature to give me detailed voice summaries on academic articles while I am on-the-go.

I have two specific questions:

  • In your experience, how restrictive is the advanced voice mode in terms of usage caps? In other words, how much can I use it in a day before it kicks me to the standard mode?
  • What other things have people used GPT Plus for in educational settings? This information would help me flesh out my use cases and justify the purchase to myself.

Thank you so much for reading.

P.S. I tried running the DeepSeek-R1 model locally for all of these tasks (except voice), but my machine is not capable enough to run it well; I end up waiting on the model for a long time.

4 Comments
2025/02/03
14:53 UTC

Back To Top