/r/GPT3

The subreddit for AI text generation technology

All about OpenAI's GPT-3: a place to share experiences, opinions, and projects

/r/GPT3

1,217,212 Subscribers

0

How AlphaCodium Outperforms Direct Prompting of OpenAI o1 - Hands-on Benchmarks

The article explores how Qodo's AlphaCodium outperforms direct prompting of OpenAI's o1 model in some respects: Unleashing System 2 Thinking - AlphaCodium Outperforms Direct Prompting of OpenAI o1

It examines why deeper cognitive processes (System 2 Thinking) produce more accurate and thoughtful responses than simpler, more immediate approaches (System 1 Thinking), and covers practical implications, performance-metric comparisons, and potential applications.

0 Comments
2024/11/29
21:20 UTC

2

Alibaba QwQ-32B : Outperforms o1-mini, o1-preview on reasoning

0 Comments
2024/11/28
04:15 UTC

4

OpenAI o1's open-source alternative: Marco-o1

0 Comments
2024/11/27
03:43 UTC

0

AI chatbot with a small domain-knowledge dataset

Hello,

I would like to do a little project: a chatbot for my emails about a certain domain. It should feel like talking to a ChatGPT-style bot, give me my domain info when I need it, and have the conversational ability to continue the chat (so not a question/answer system).

  • the base model runs locally, for privacy

  • add LoRA or adapters (or other techniques?) to fine-tune the base model with my personal data (mainly emails).

It's not that much data, and I don't think full fine-tuning of the model is appropriate, hence LoRA or other solutions...

I think there are a lot of challenges, but if you have some experience, I would be grateful if you could suggest a starting point.

There are so many resources that I'm not sure where to start: Llama, GPT, GPT4All, Mistral, BERT... and different frameworks (Hugging Face Transformers and others)... and different fine-tuning techniques...

I do not really care about scaling, as it will run only on my machine.

Could everything be managed inside a single model, or would a hybrid approach with some custom rules be better?

Also, creating the email dataset would require formatting the emails and probably generating question/answer pairs?

Whatever your experience, I would be grateful for any suggestions or ideas.

Many thanks!
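For a concrete starting point, here is a minimal sketch of the LoRA-adapter approach using Hugging Face Transformers and PEFT. The base model name, dataset file, and hyperparameters are illustrative assumptions, not recommendations; the emails would first need to be cleaned and formatted (e.g. as instruction/answer pairs in a JSONL file).

```python
# Minimal sketch: LoRA fine-tuning of a locally runnable base model on a small
# personal dataset. Model name, file paths, and hyperparameters are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "mistralai/Mistral-7B-v0.1"          # any local-friendly causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the frozen base model with small trainable LoRA adapters.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical JSONL file with one {"text": ...} record per formatted email or Q/A pair.
data = load_dataset("json", data_files="emails.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-emails", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-emails")        # saves only the adapter weights (a few MB)
```

At inference time, the saved adapter is loaded on top of the same base model, which keeps everything local; retrieval over the emails (a hybrid approach) can be layered on separately if fine-tuning alone doesn't recall specific facts reliably.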

0 Comments
2024/11/26
16:28 UTC

6

GPT-4o and o1 compared to Claude Sonnet 3.5 and Gemini 1.5 Pro for coding

The guide below provides some insights into how each model performs across various coding scenarios: Comparison of Claude Sonnet 3.5, GPT-4o, o1, and Gemini 1.5 Pro for coding

  • Claude Sonnet 3.5 - for everyday coding tasks due to its flexibility and speed.
  • GPT-o1-preview - for complex, logic-intensive tasks requiring deep reasoning.
  • GPT-4o - for general-purpose coding where a balance of speed and accuracy is needed.
  • Gemini 1.5 Pro - for large projects that require extensive context handling.
0 Comments
2024/11/24
08:30 UTC

1

What genius conversation topic or activity have you come up with to use in ChatGPT?

2 Comments
2024/11/24
06:05 UTC

1

Can OpenAI o1 Really Solve Complex Coding Challenges - 50 min webinar - Qodo

In Qodo's 50-minute webinar (Oct 30, 2024), OpenAI o1 is tested on Codeforces Code Contests problems, exploring its problem-solving approach in real time. Its capabilities are then boosted by integrating Qodo's AlphaCodium, a framework designed to refine the AI's reasoning, testing, and iteration, enabling a structured flow-engineering process.

0 Comments
2024/11/23
10:44 UTC

4

Gen AI | How has it impacted your job?

Has Gen AI at work impacted you in any way - good or bad?

Share your experience in the comments section below!

11 Comments
2024/11/21
05:17 UTC

0

And the machines rose from the ashes.

Who taught it that the instrumental case of "свинина" (pork) is "свинец" (lead)?

0 Comments
2024/11/18
14:14 UTC

2

*The God Machine* [Player Version 1.0.0]

3 Comments
2024/11/18
05:53 UTC

5

Best LLM for unstructured data extraction with extremely long prompts

In your experience, what is the best LLM for extracting specific information from large unstructured documents (at or above the 128k-200k token limits of current LLMs), using function calling?

For example: given a 500-page book, extract the names of all the characters and their ages.

The focus should be on retrieval correctness and completeness, not on minimizing the number of API calls. So an extended context window like Gemini's isn't necessarily an advantage if it comes at the cost of retrieval success.

Do you know if there are some benchmarks for this type of task I can look at? Obviously they must include the latest versions of the models.

Thanks!
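Not a benchmark answer, but for reference, here is a minimal sketch of the chunk-and-merge pattern this kind of task usually relies on, via OpenAI-style function calling. The model name, chunk size, and schema are illustrative assumptions.

```python
# Sketch: extract characters and ages from a long text by chunking it and
# forcing a function call per chunk, then merging results. Assumes the
# OpenAI Python SDK v1.x; model and chunk size are arbitrary choices.
import json
from openai import OpenAI

client = OpenAI()

extract_tool = {
    "type": "function",
    "function": {
        "name": "record_characters",
        "description": "Record characters found in this text chunk.",
        "parameters": {
            "type": "object",
            "properties": {
                "characters": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "age": {"type": ["integer", "null"]},
                        },
                        "required": ["name"],
                    },
                }
            },
            "required": ["characters"],
        },
    },
}

def extract_characters(book_text: str, chunk_chars: int = 40_000) -> list[dict]:
    """Split the book into chunks and merge per-chunk extractions by name."""
    found: dict[str, dict] = {}
    for start in range(0, len(book_text), chunk_chars):
        chunk = book_text[start:start + chunk_chars]
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": "List every character and their age if stated:\n\n" + chunk}],
            tools=[extract_tool],
            tool_choice={"type": "function", "function": {"name": "record_characters"}},
        )
        args = json.loads(resp.choices[0].message.tool_calls[0].function.arguments)
        for c in args["characters"]:
            found.setdefault(c["name"], c)   # keep first mention; dedupe by name
    return list(found.values())
```

Completeness then depends mostly on the per-chunk extraction and the merge step rather than on a single model's context length, which is the trade-off the question raises.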

3 Comments
2024/11/17
09:43 UTC

0

AI-managed commerce

Is there any AI that can help a human manage a business? I'm looking for something that can take notes, handle basic conversations with customers, schedule appointments, distribute deadlines, calculate monthly bills, etc. How could I create and implement something like this in a small business?

0 Comments
2024/11/15
16:29 UTC

11

Google's experimental Gemini model is the new rank-1 LLM on LMArena

Google's experimental model Gemini-exp-1114 now ranks #1 on the LMArena leaderboard. Check out the metrics on which it surpassed GPT-4o and how to use it for free via Google AI Studio: https://youtu.be/50K63t_AXps?si=EVao6OKW65-zNZ8Q

1 Comment
2024/11/15
08:25 UTC

0

Apple's GSM-Symbolic Paper does NOT Disprove Reasoning - Paper Review

3 Comments
2024/11/12
14:19 UTC

10

So now GPT is asking me to wait!

I have the Plus version of GPT, and for some reason, when I asked it to help me add Markdown to a Jupyter notebook I made, it took a lot longer than it used to, without showing any progress bar. The strange thing is that I had to keep checking on it before it sent me the Markdown, which should have been a pretty straightforward task for such a large LLM.
Has anyone else experienced this, or does anyone have an idea of why it behaved this way? Is it a new update?

6 Comments
2024/11/10
14:10 UTC

2

The recommendation of platforms for renting GPUs

Are there any cost-effective platforms for renting GPUs? I'd prefer not to be billed for GPU usage on a daily or monthly basis, but rather on a smaller billing cycle (like per second). GPU services can be quite costly, and it's challenging for me to maximize the daily usage time of the GPU.

2 Comments
2024/11/07
06:17 UTC

15

How to train GPT to analyse app users' behaviour

Hello, I have an app with 4k new users per month. Around 95% of our users don't purchase anything. We want to train GPT to learn from our data and tell us what's wrong in our app.

Is this possible? How could we achieve it?

Thank you.

17 Comments
2024/10/28
05:47 UTC

2

DevOps GPT Code Generation

Hi! As part of my master's thesis, I am evaluating DevOps GPT code generation.

Would you like to give your opinion?

You can contribute with the following:

1 - Analyse the code/pipeline generated by DevOps GPT: https://github.com/cristiana-oliveira/devopsgpt (find details in the README file)

2 - Answer the questionnaire: https://forms.office.com/e/eVcXPnEKy9

Thank you very much!

2 Comments
2024/10/22
19:08 UTC

2

Speech correction project help

Hello guys, I am working on a speech correction project that takes a video as input, removes the "uhhs" and "umms" from the speech, improves the grammar, and then replaces the video's audio with the corrected version.

  1. My Streamlit app takes a video file whose audio is not proper (grammatical mistakes, lots of umms and hmms, etc.).

  2. I transcribe this audio using Google's Speech-to-Text model.

  3. I pass the resulting text to the GPT-4o model and ask it to correct the transcription, removing any grammatical mistakes.

  4. The corrected transcription is passed to Google's Text-to-Speech model (using the Journey voice model).

  5. Finally, I get the audio which needs to be replaced in the original video file.

It's a fairly straightforward task. The main challenge I am facing is syncing the video with the audio that I receive as a response; this is where I want your help.

Currently, the app I have made gets the corrected transcript and replaces the entire audio of the input video with the new corrected AI speech. But the video and audio aren't in sync, and that's what I'm seeking to fix. Any help would be appreciated. If there's a particular model that solves this issue, please share that as well. Thanks in advance.
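For reference, here is a rough sketch of the pipeline as described. The library choices (google-cloud-speech, google-cloud-texttospeech, openai, moviepy), the voice name, and the file paths are assumptions, not the poster's actual code; the naive full-audio swap at the end is exactly the step where the sync problem appears, and requesting word time offsets from Speech-to-Text (as enabled below) is one common starting point for segment-level re-timing instead.

```python
# Rough sketch of the described speech-correction pipeline (assumed libraries and names).
from google.cloud import speech, texttospeech
from moviepy.editor import AudioFileClip, VideoFileClip
from openai import OpenAI

# 1. Transcribe the original audio; word offsets can later help with syncing.
stt = speech.SpeechClient()
with open("input_audio.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())
config = speech.RecognitionConfig(language_code="en-US", enable_word_time_offsets=True)
transcript = " ".join(
    r.alternatives[0].transcript for r in stt.recognize(config=config, audio=audio).results
)

# 2. Ask GPT-4o to clean up the transcript (remove fillers, fix grammar).
llm = OpenAI()
corrected = llm.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Remove filler words and fix the grammar, keeping the meaning:\n\n" + transcript}],
).choices[0].message.content

# 3. Synthesize the corrected text with a Google TTS Journey voice (voice name is an assumption).
tts = texttospeech.TextToSpeechClient()
speech_resp = tts.synthesize_speech(
    input=texttospeech.SynthesisInput(text=corrected),
    voice=texttospeech.VoiceSelectionParams(language_code="en-US", name="en-US-Journey-D"),
    audio_config=texttospeech.AudioConfig(audio_encoding=texttospeech.AudioEncoding.MP3),
)
with open("corrected.mp3", "wb") as f:
    f.write(speech_resp.audio_content)

# 4. Naive full replacement -- this is where audio and video drift out of sync,
#    because the corrected speech no longer matches the original timing.
video = VideoFileClip("input_video.mp4")
video.set_audio(AudioFileClip("corrected.mp3")).write_videofile("output_video.mp4")
```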

5 Comments
2024/10/19
13:46 UTC

0

US Slang knowledge

Besides "fisherman", what else?

0 Comments
2024/10/18
22:37 UTC

14

GPT-4o-mini Always Identifying as 3.5 Model

Hello, everyone!

I've been working on a project integrating ChatGPT, specifically using the 4o-mini version in my parameters. However, I keep encountering an issue where it consistently identifies itself as using the 3.5 version instead.

Has anyone else experienced this, or does anyone have insights into why this might be happening? Any feedback or suggestions would be greatly appreciated as I continue to refine and improve my setup.

Thanks in advance for your help!
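For what it's worth, the model's self-description in its reply text is not a reliable signal, since it has no direct knowledge of its own deployment name; the `model` field on the API response shows which model actually served the request. A minimal sketch, assuming the OpenAI Python SDK v1.x:

```python
# Check which model actually handled the request, rather than trusting the reply text.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Which model are you?"}],
)
print(resp.model)                          # the served model, e.g. a gpt-4o-mini snapshot
print(resp.choices[0].message.content)     # may still claim to be GPT-3.5
```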

3 Comments
2024/10/18
18:25 UTC

3

Meta releases Spirit LM, SAM2.1 and more

1 Comment
2024/10/18
17:28 UTC

5

Microsoft releases BitNet.cpp : Framework for 1-bit LLMs

1 Comment
2024/10/18
10:32 UTC

2

Anyone tried USnap.ai?

So I’ve been trying out this AI tool called USnap, which claims to have a bunch of models all in one place like Claude, Llama, and GPT-4 Turbo. Honestly, it’s kind of nice not having to switch between tabs for different tasks, but the interface feels... kinda outdated, like something from a few years back.

The thing is, even though it’s convenient, I’m not sure if all the models are really that different or better than just sticking to GPT. I noticed that Llama 3.1 is ranked pretty high for math and reasoning, but I haven’t really felt that big of a difference in the responses so far.

Anyone else trying this out? I’m wondering if it’s worth sticking with or if I should just go back to what I’m used to. Would love to hear some thoughts from people who've used it longer!

1 Comment
2024/10/15
09:45 UTC

1

8 Best Practices to Generate Code with Generative AI

The 10-minute video walkthrough explores best practices for generating code with AI: 8 Best Practices to Generate Code Using AI Tools

It explains, among other things, how breaking complex features into manageable tasks leads to better results, and how providing relevant context helps AI assistants deliver more accurate code:

  1. Break Requests into Smaller Units of Work
  2. Provide Context in Each Ask
  3. Be Clear and Specific
  4. Keep Requests Distinct and Focused
  5. Iterate and Refine
  6. Leverage Previous Conversations or Generated Code
  7. Use Advanced Predefined Commands for Specific Asks
  8. Ask for Explanations When Needed
2 Comments
2024/10/14
08:33 UTC

3

New open-source text-to-video model with up to 10-second-long videos: pyramid-flow-sd3

0 Comments
2024/10/10
10:36 UTC

0

AI metal music

Hi guys. My brother and I are working on a new channel to promote critical thinking across politics, economics, society, culture, and real life.

Check it out and let me know what you think. Cheers.

App used

https://apps.apple.com/us/app/ai-music-song-generator/id6499522283

2 Comments
2024/10/08
21:37 UTC
