/r/redditdev


A subreddit for discussion of Reddit's API and Reddit API clients.


Please confine discussion to Reddit's API instead of using this as a soapbox to talk to the admins. In particular, use /r/ideasfortheadmins for feature ideas and /r/bugs for bugs. If you have general reddit questions, try /r/help.

To see an explanation of recent user-facing changes to reddit (and the code behind them), check out /r/changelog.


To report a security issue with reddit, please send an email to whitehats@reddit.com.

This is an admin-sponsored subreddit.

/r/redditdev

76,493 Subscribers

0

Trying to automate video posting on Reddit

Hey— I’m trying to make it so that I can post a video on multiple subreddits natively on Reddit (not a Link post) with a single click of a button.

The endpoint for posting seems to be /api/submit, for text and media alike, but I'm not sure what the request body should look like, or whether it's even possible to post a native video on Reddit without actually posting it manually.

Do I have to upload it to Reddit’s server before submitting a post? If so, how would I do that?

Can anyone familiar with this help out? I'd appreciate it.
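
In case it helps, PRAW's Subreddit.submit_video reportedly handles the upload lease and the /api/submit call for you, so one approach is to loop over target subreddits and submit the same file. A rough, untested sketch; the credentials, file path, title, and subreddit names below are all placeholders:

    import praw

    reddit = praw.Reddit(
        client_id="your_client_id",
        client_secret="your_client_secret",
        username="your_username",
        password="your_password",
        user_agent="script:video-crossposter:v1.0 (by u/your_username)",  # placeholder, descriptive UA
    )

    video_path = "clip.mp4"                           # local video file to upload
    target_subreddits = ["test", "u_your_username"]   # placeholder targets

    for name in target_subreddits:
        # submit_video uploads the file to Reddit's media servers and then
        # creates a native video post in one call.
        submission = reddit.subreddit(name).submit_video(
            title="My video post",
            video_path=video_path,
            # thumbnail_path="poster.png",  # optional poster frame
        )
        print(submission.permalink)

If you want raw HTTP instead, the flow PRAW wraps is roughly: request an upload lease from /api/media/asset.json, POST the file to the returned S3 URL, then call /api/submit with kind="video" and the uploaded asset URL.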

4 Comments
2025/01/31
18:41 UTC

1

Trying to fetch data from about.json on an AWS Lambda server

Hello, I am trying to retrieve the accounts_active information from the endpoint: "https://oauth.reddit.com/r/javascript/about.json".

That said, I can successfully fetch the data on my local machine, but when I try to do the same via AWS Lambda, I get a "Forbidden" error.

Should I authenticate? Should I send a User-Agent? I've tried everything, but nothing works, and every source seems to say something different...

What should I do?

Thanks.
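
For what it's worth, the two things most often cited for this kind of "Forbidden" response are missing OAuth authentication and a missing or generic User-Agent (requests from some cloud providers seem to get blocked without one). A rough sketch using the application-only (client_credentials) grant; the app name in the User-Agent is made up:

    import requests
    from requests.auth import HTTPBasicAuth

    CLIENT_ID = "your_client_id"
    CLIENT_SECRET = "your_client_secret"
    USER_AGENT = "aws-lambda:subreddit-stats:v1.0 (by u/your_username)"  # placeholder, but keep it descriptive

    # 1. Get an application-only token.
    token_resp = requests.post(
        "https://www.reddit.com/api/v1/access_token",
        auth=HTTPBasicAuth(CLIENT_ID, CLIENT_SECRET),
        data={"grant_type": "client_credentials"},
        headers={"User-Agent": USER_AGENT},
    )
    token = token_resp.json()["access_token"]

    # 2. Call the oauth.reddit.com endpoint with the token and the same User-Agent.
    about = requests.get(
        "https://oauth.reddit.com/r/javascript/about.json",
        headers={"Authorization": f"Bearer {token}", "User-Agent": USER_AGENT},
    ).json()
    print(about["data"]["accounts_active"])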

2 Comments
2025/01/30
19:15 UTC

2

What is easiest way to track keywords by subreddit over time?

I am working on a project where I need to track daily counts of keywords for different subreddits. Is there an easy way to do this aside from downloading all the dumps?
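
If PRAW is an option, one lightweight approach is to page through a subreddit's newest posts and bucket keyword hits by day; note that Reddit's listing endpoints only go back roughly 1,000 items, so the dumps are still needed for deep history. A minimal sketch, where the subreddit, keyword, and credentials are placeholders:

    from collections import Counter
    from datetime import datetime, timezone

    import praw

    reddit = praw.Reddit(
        client_id="your_client_id",
        client_secret="your_client_secret",
        user_agent="script:keyword-tracker:v1.0 (by u/your_username)",  # placeholder
    )

    KEYWORD = "python"   # case-insensitive match against title + selftext
    daily_counts = Counter()

    for submission in reddit.subreddit("learnprogramming").new(limit=1000):
        text = f"{submission.title} {submission.selftext}".lower()
        if KEYWORD in text:
            day = datetime.fromtimestamp(submission.created_utc, tz=timezone.utc).date()
            daily_counts[day] += 1

    for day in sorted(daily_counts):
        print(day, daily_counts[day])

Run on a daily schedule, this gives you per-day counts going forward without touching the dumps.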

3 Comments
2025/01/30
18:15 UTC

0

API and bots

Please explain: if Reddit is meant to be live communication between people, how can it offer an API for automated communication?

11 Comments
2025/01/29
17:52 UTC

0

Reddit scraper that counts how many posts a user has made in a subreddit

Hello! I created a Reddit scraper with ChatGPT that counts how many posts a user has made in a specific subreddit over a given time frame. The results are saved to a CSV file (which opens in Excel), making it easy to analyze user activity in any subreddit you're interested in. The code works on Python 3.7+.

How to use it:

  1. To set up Reddit API access go to https://www.reddit.com/prefs/apps to register your application on Reddit’s developer platform. Click on 'Create App', select 'script', then choose a name for your app. The description can be something simple like 'A script to scrape and analyze user activity in specific subreddits.' You can set the redirect URL to http://localhost as it is the default. Once your app is created, note down the client_id and client_secret, as you’ll use these in the script.

client_id is located right under the app name, client_secret is at the same page noted with 'secret'. Your user_agent is a string you define in your code to identify your app, formatted like this: "platform:AppName:version (by u/YourRedditUsername)". For example, if your app is called "RedditScraper" and your Reddit username is JohnDoe, you would set it like this: "windows:RedditScraper:v1.0 (by u/JohnDoe)".

  2. Install Python 3.7 or later, then install the required libraries. Open Command Prompt as administrator on Windows, or Terminal on Mac and Linux, and type:

pip install pandas praw

If you encounter a permissions error use sudo:

sudo pip install pandas praw

After that verify their installation:

python -m pip show praw pandas OR python3 -m pip show praw pandas

  3. Copy and paste the code:

    import praw
    import pandas as pd
    from datetime import datetime, timedelta

    # Your Reddit API credentials (replace with your actual credentials)
    client_id = 'your_client_id'          # Your client_id from Reddit
    client_secret = 'your_client_secret'  # Your client_secret from Reddit
    user_agent = 'your_user_agent'        # Make sure it's unique and clearly describes your app,
                                          # e.g. 'windows:YourAppName:v1.0 (by u/YourRedditUsername)'

    # Initialize the Reddit instance
    reddit = praw.Reddit(
        client_id=client_id,
        client_secret=client_secret,
        user_agent=user_agent
    )

    # Choose the subreddit you want to scrape (e.g., 'learnpython')
    subreddit_name = 'subreddit'  # Change to the subreddit of your choice

    # Define the time window (30 days ago)
    time_window = datetime.utcnow() - timedelta(days=30)

    # Initialize a dictionary to keep track of post counts per user
    user_post_count = {}

    # Fetch the newest posts from the subreddit (100 posts)
    for submission in reddit.subreddit(subreddit_name).new(limit=100):
        # Check if the post was created within the last 30 days
        post_time = datetime.utcfromtimestamp(submission.created_utc)
        if post_time > time_window:
            user = submission.author.name if submission.author else None
            if user:
                # Count the posts per user
                if user not in user_post_count:
                    user_post_count[user] = 1
                else:
                    user_post_count[user] += 1

    # Convert the dictionary to a list of tuples for creating a DataFrame
    user_data = [(user, count) for user, count in user_post_count.items()]

    # Create a DataFrame
    df = pd.DataFrame(user_data, columns=["Username", "Post Count"])

    # Save the data to a CSV file
    df.to_csv(f"{subreddit_name}_user_post_counts.csv", index=False)

    # Print the DataFrame to the console
    print(df)

  4. Replace the placeholders with your actual credentials:

client_id = 'your_client_id'

client_secret = 'your_client_secret'

user_agent = 'your_user_agent'

Set the subreddit name you want to scrape. For example, if you want to scrape posts from r/learnpython, replace 'subreddit' with 'learnpython'.

The script will fetch the latest 100 posts from the chosen subreddit. To adjust that, you can change the 'limit=100' in the following line to fetch more or fewer posts:

for submission in reddit.subreddit(subreddit_name).new(limit=100): # Fetching 100 posts

You can modify the time by changing 'timedelta(days=30)' to a different number of days, depending on how far back you want to get user posts:

time_window = datetime.utcnow() - timedelta(days=30) # Set the time range

  5. The script goes through the posts, counts how many times each user has posted in the last 30 days (or however many days you set), and saves the data to a CSV file named after the subreddit. For example, if you're scraping learnpython, the file will be named learnpython_user_post_counts.csv.

Keep in mind that scraping too many posts in a short period of time could result in your account being flagged or banned by Reddit; ideally, stay at no more than 100–200 posts per request. It's important to set reasonable limits to avoid any issues with Reddit's API or community guidelines. [Github](https://github.com/InterestingHome889/Reddit-scraper-that-counts-how-many-posts-a-user-has-made-in-a-subreddit./tree/main)

I don't want to learn Python at this moment; that's why I used ChatGPT.

2 Comments
2025/01/28
14:03 UTC

1

Exporting reddit comments to Excel

Hi! I want to download all comments from a Reddit post for some research, but I have no idea how API/coding works and can't make sense of any of the tools people are providing on here. Does anyone have any advice on how an absolute beginner to coding could download all comments (including nested) into an excel file?
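
If someone can help you run one small script, a minimal PRAW sketch that writes every comment (including nested replies) to a CSV file Excel can open might look like this; the credentials and post URL are placeholders:

    import csv

    import praw

    reddit = praw.Reddit(
        client_id="your_client_id",
        client_secret="your_client_secret",
        user_agent="script:comment-exporter:v1.0 (by u/your_username)",  # placeholder
    )

    submission = reddit.submission(url="https://www.reddit.com/r/AskReddit/comments/abc123/example/")  # placeholder URL
    submission.comments.replace_more(limit=None)  # expand every "load more comments" link

    with open("comments.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["author", "score", "created_utc", "body"])
        for comment in submission.comments.list():  # flat list that includes nested replies
            writer.writerow([str(comment.author), comment.score, comment.created_utc, comment.body])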

3 Comments
2025/01/28
13:00 UTC

1

Only 404s from GET /api/v1/me/friends/username

I'm receiving only 404 errors from the GET /api/v1/me/friends/username endpoint. Maybe the docs haven't caught up to it being sacked?

Thoughts? Ideas?

import logging, random, sys, praw
from icecream import ic

lsh = logging.StreamHandler()
lsh.setLevel(logging.DEBUG)
lsh.setFormatter(logging.Formatter("%(asctime)s: %(name)s: %(levelname)s: %(message)s"))

for module in ("praw", "prawcore"):
    logger = logging.getLogger(module)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(lsh)

reddit = ic( praw.Reddit("script-a") )
redditor = ic(random.choice( reddit.user.friends()))
if not redditor:
    sys.exit(1)
info = ic(redditor.friend_info())
2 Comments
2025/01/28
02:34 UTC

1

Why does AsyncPRAW use such an old version of aiosqlite?

AsyncPRAW is using aiosqlite version v.0.17.0, which is over 3 years old. Any ideas why this may be?

0 Comments
2025/01/25
07:59 UTC

6

Did server-side rate limit handling change sometime within the last day?

We just received a bug report that PRAW is emitting 429 exceptions. These exceptions shouldn't occur, as PRAW preemptively sleeps to avoid going over the rate limit. In addition to this report, I've heard of other people experiencing the same issue.

Could this newly observed behavior be due to a bug in how rate limits are handled on Reddit's end? If so, is this something that might be rolled back?

Thanks!

3 Comments
2025/01/24
20:31 UTC

2

Question about bot account activity

Hello,

I created an account to post automated updates in my own subreddit page. I used "bot" in the username to make clear that it's a bot, used the API for posting, and didn't post anywhere outside of my own subreddit.

Unfortunately, the account was blocked. I contacted help several times. Eventually, after a couple of months, I tried creating a new bot account in case the previous block was an accident. The new account was blocked right away after posting one message with the API.

Did I do anything wrong? I understand that it's not the place to ask to unblock an account, and I tried to contact help, but didn't hear back. I'm just trying to understand whether I violated any rules, to understand what my options are and to avoid doing any similar violations in the future.

Thank you.

16 Comments
2025/01/24
12:47 UTC

3

Using PRAW (or alternative) to send Google Ads Conversion Events

Trying to work around the limitations of my web host.

I have code that is triggered externally to send a conversion event for an ad, however I can't figure out how to use PRAW or the standard Reddit API to do so in Python.

I think I'm past authentication but looking for any examples. Thanks in advance.

4 Comments
2025/01/24
06:56 UTC

1

401 Unauthorized Error When Authenticating Script App

Hi everyone,
I’m trying to set up a Reddit bot using a Script app with the "password" grant type, but I keep getting a 401 Unauthorized error when requesting an access token from /api/v1/access_token.

Here’s a summary of my setup:

  • App type is Script.
  • I’ve double-checked my client_id, client_secret, username, and password.
  • I’m using Python to send a POST request with proper headers and payload.

Despite this, every attempt fails with the following response:

401 Unauthorized  
{"message": "Unauthorized", "error": 401}

Is the "password" grant still supported for Script apps in 2025? Are there specific restrictions or known issues I might be missing?
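
For comparison, here is a minimal password-grant token request in Python; everything below is a placeholder. One commonly cited gotcha: accounts with two-factor authentication enabled get 401s from this grant unless the one-time code is appended to the password, so that's worth ruling out.

    import requests
    from requests.auth import HTTPBasicAuth

    CLIENT_ID = "your_client_id"          # the string under the app name at https://www.reddit.com/prefs/apps
    CLIENT_SECRET = "your_client_secret"
    USERNAME = "your_bot_username"
    PASSWORD = "your_bot_password"

    resp = requests.post(
        "https://www.reddit.com/api/v1/access_token",
        auth=HTTPBasicAuth(CLIENT_ID, CLIENT_SECRET),   # HTTP Basic auth with the app credentials
        data={"grant_type": "password", "username": USERNAME, "password": PASSWORD},
        headers={"User-Agent": "script:my-bot:v1.0 (by u/your_bot_username)"},  # placeholder UA
    )
    print(resp.status_code, resp.json())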

1 Comment
2025/01/24
03:09 UTC

1

How to retrieve a reddit submissions information to use in embed

I've been trying to figure out how to create post previews like what's created on Discord.

I found this post: https://www.reddit.com/r/redditdev/comments/1ervz8l/fetching_basic_data_about_a_post_from_a_url/, which appears to be from someone looking to do the same thing, but I'm unsure if they were able to get it working.

Like that OP, when I try to simply make a request to the submission link via Python, I'm getting a 403 forbidden. Based on my exploration, there isn't a way to get this information from PRAW, but is there some other way I can retrieve it using the same authentication information I do for my PRAW instance?
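
For what it's worth, PRAW can usually hydrate a submission from its URL with the same credentials used elsewhere, which gives you the fields a preview embed typically needs. A small sketch; the URL is hypothetical:

    import praw

    reddit = praw.Reddit(
        client_id="your_client_id",
        client_secret="your_client_secret",
        user_agent="script:embed-preview:v1.0 (by u/your_username)",  # placeholder
    )

    submission = reddit.submission(url="https://www.reddit.com/r/redditdev/comments/abc123/example/")  # placeholder URL
    print(submission.title)
    print(submission.selftext[:200])          # empty string for link posts
    print(submission.score, submission.num_comments)
    print(submission.thumbnail)               # a URL, or "self"/"default" for posts without one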

1 Comment
2025/01/23
18:57 UTC

18

Removing obsolete endpoints from the Data API

Hi devs,

Over the coming days, we will be removing a number of obsolete endpoints from the Data API as part of an effort to clean up legacy code.

The endpoints being removed have been inactive and unused for over six months, and are no longer returning Reddit data. Many of these endpoints are tied to deprecated features and surfaces and are already effectively dead.

Which endpoints are being removed?

These endpoints will be completely removed from the Data API on February 15, 2025.

Note that these changes are not indicative of plans to remove actively used endpoints from our Data API.

Edit: our post previously stated GET_friends would be removed; we've updated the post to reflect the accurate list.

9 Comments
2025/01/23
16:07 UTC

1

How often can I summon a bot in a comment in 1 thread?

This is my scenario:

I plan to create a bot that can be summoned (either via name or triggered by a specific phrase), and this bot will only be tracking comments made by users in one particular post that I will make (like a megathread type of post).

My question is, what is the rate limit that I should be prepared for in this scenario? For example what happens if 20 different users summon the same bot in the same thread in 1 minute? Will that cause some rate limit issues? Does anyone know what the actual documented rate limit is?

3 Comments
2025/01/23
11:11 UTC

1

How to create an automated posting Reddit bot that doesn't get banned or have its posts removed

Are there any specific requirements for a bot to be able to post without its posts being removed? If I make my bot a mod in my own subreddit, will that help? I made the bot an approved user in my subreddit, but the subreddit got banned for spam. I got this as a task for an internship, and I don't know how to do this safely without violating Reddit's rules.

9 Comments
2025/01/20
11:44 UTC

0

My automated bot posts keep getting auto-removed.

I am using PRAW to create a Reddit bot that posts to a chosen set of subreddits at random, but as soon as I post, my post is removed by AutoModerator. So I tried it in my own subreddit, and it got removed again by the reputation filter. I didn't spam enough to get blocked; I was blocked the first time I tried to post. The only subreddit where my post wasn't removed was r/learnpython. Please help, I urgently need to submit this task by tomorrow.

12 Comments
2025/01/19
18:13 UTC

1

Is it possible to extract all posts from 2024?

Hello everyone,

I was extracting posts using PRAW to build a dataset for fine-tuning an open-source model into a diabetes-focused chatbot for my master's degree final project. I only managed to extract almost 2,000 posts from r/diabetes, but I think I need more. How can I extract more than 1,000 posts? Can I use subreddit.search() to get all posts from 2024, maybe one month at a time (January, then February, and so on)? Is there a solution for this?

5 Comments
2025/01/18
12:11 UTC

3

NSFW Status query

I am making API calls to get the NSFW status of threads and subreddits. Can anyone help me with how to do that?
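
Assuming PRAW is acceptable, the NSFW flags are exposed as attributes on the models (note the slightly different attribute names). A short sketch; the subreddit, URL, and credentials are placeholders:

    import praw

    reddit = praw.Reddit(
        client_id="your_client_id",
        client_secret="your_client_secret",
        user_agent="script:nsfw-check:v1.0 (by u/your_username)",  # placeholder
    )

    subreddit = reddit.subreddit("videos")      # placeholder subreddit
    print(subreddit.over18)                     # True if the whole subreddit is marked NSFW

    submission = reddit.submission(url="https://www.reddit.com/r/videos/comments/abc123/example/")  # placeholder URL
    print(submission.over_18)                   # True if the individual post is marked NSFW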

0 Comments
2025/01/17
16:28 UTC

5

Is there a known user that is suspended for testing purposes?

Writing a script that needs to detect if a user is active, suspended, has deleted their account or if no account exists.

I can test against active accounts and non-existent accounts, and I know an account that was deleted, but is there a known user account that is suspended?

Also, for a deleted account (this happened recently), the API returns the same response as for a non-existent account, but the UI shows "This user has deleted their account."
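
Short of finding a known suspended account to test against, one hedged approach with PRAW is to branch on the is_suspended field that /user/<name>/about returns for suspended users, and treat a 404 as non-existent or deleted (the API doesn't distinguish the two, as noted above). A sketch, not tested against every edge case:

    import praw
    import prawcore

    reddit = praw.Reddit(
        client_id="your_client_id",
        client_secret="your_client_secret",
        user_agent="script:account-state:v1.0 (by u/your_username)",  # placeholder
    )

    def account_state(name):
        redditor = reddit.redditor(name)   # lazy object; no request yet
        try:
            # Accessing an attribute triggers the /user/<name>/about fetch.
            if getattr(redditor, "is_suspended", False):
                return "suspended"
            return "active"
        except prawcore.exceptions.NotFound:
            return "nonexistent or deleted"

    print(account_state("spez"))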

2 Comments
2025/01/17
00:56 UTC

4

Non Ad Post Views/Impressions API Endpoint

Hello. I am using the Reddit API (https://www.reddit.com/dev/api) for reporting purposes. I created an app in my Reddit account and am using its key and secret to download data about my account's posts, like post date, upvotes, and number of comments. They are regular, non-ad posts. I have been trying to get the post impressions/views from the Insights tab (https://imgur.com/a/F6rmfW7) through the API, but it seems this data point is not available. So my question is: how do I get post views/impressions through the Reddit API? Thank you!

7 Comments
2025/01/15
17:29 UTC

0

Does Reddit have SSO for other websites, like we have for Gmail, Microsoft, and Apple?

As the title says.

I am developing an app and wanted to see if I can use Reddit as an SSO provider in addition to Gmail/MS/Apple.

I am OK even if it requires some custom code.
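
Yes: Reddit exposes a standard OAuth2 authorization-code flow, so it can back a "Log in with Reddit" button much like Google or Apple sign-in. A rough sketch of building the authorize URL; the client_id and redirect URI are placeholders, and the subsequent token exchange is only outlined in the comment:

    import secrets
    import urllib.parse

    CLIENT_ID = "your_web_app_client_id"   # create a "web app" at https://www.reddit.com/prefs/apps
    REDIRECT_URI = "https://example.com/auth/reddit/callback"  # placeholder callback route

    state = secrets.token_urlsafe(16)  # persist server-side and verify it on the callback
    params = {
        "client_id": CLIENT_ID,
        "response_type": "code",
        "state": state,
        "redirect_uri": REDIRECT_URI,
        "duration": "temporary",
        "scope": "identity",           # enough to read the user's username via /api/v1/me
    }
    authorize_url = "https://www.reddit.com/api/v1/authorize?" + urllib.parse.urlencode(params)
    print(authorize_url)

    # After the user approves, exchange the returned ?code=... at
    # https://www.reddit.com/api/v1/access_token (grant_type=authorization_code),
    # then call https://oauth.reddit.com/api/v1/me to identify the user.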

4 Comments
2025/01/13
20:38 UTC

3

Is there any tool or script that can automatically show post and comment karma in Airtable? I have multiple accounts.

If anyone knows how to make such a script, I will pay if it works. Thanks for the help.

2 Comments
2025/01/12
01:16 UTC

1

How can I find the number of comments for a list of Reddit URLs?

Hi everyone,

I have a list of Reddit post URLs (around 100 URLs) and I'd like to know the number of comments on each of them. Is there a way to do this easily without needing to know Python or programming?

I'm looking for a solution that would allow me to input the URLs, and then get the number of comments for each post. Any help or advice would be greatly appreciated!

Thanks in advance!
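
If someone can run one short script on your behalf, a minimal PRAW sketch that reads num_comments for each URL and writes a CSV would look roughly like this; the credentials and URLs are placeholders:

    import csv

    import praw

    reddit = praw.Reddit(
        client_id="your_client_id",
        client_secret="your_client_secret",
        user_agent="script:comment-counter:v1.0 (by u/your_username)",  # placeholder
    )

    urls = [
        "https://www.reddit.com/r/redditdev/comments/abc123/example_post/",   # placeholder URLs
        "https://www.reddit.com/r/redditdev/comments/def456/another_post/",
    ]

    with open("comment_counts.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["url", "num_comments"])
        for url in urls:
            submission = reddit.submission(url=url)
            writer.writerow([url, submission.num_comments])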

14 Comments
2025/01/11
20:36 UTC

2

Is there any tool that can help crawl or pull comments from specific reddit subs?

I'm building a SaaS and I'm looking to get insights from various people on certain subreddits. Are there any tools out there that can do this? TIA!

1 Comment
2025/01/11
17:31 UTC

2

Unable to access a private Reddit RSS feed through a cloud platform

Has anyone had issues accessing private Reddit feeds through RSS readers or cloud automation platforms? I'm attempting to fetch data from my bot's modqueue feed through Pipedream. The feed works completely fine when opening it in a browser (even when I'm not logged in, as the authentication data is included in the URL itself). However, when attempting to access it through Pipedream, the request isn't able to go through. I've also double-checked the URL to make sure it's correct and up-to-date. (I've experienced similar issues when looking into MonotoRSS as a temporary replacement, though I haven't tested that platform with this feed specifically.) Is there anything I need to know or do when it comes to working with these feeds? Has anyone else experienced similar issues?

If it helps, here's the error I'm receiving:

ConfigurationError: Error fetching URL https://old.reddit.com/r/mod/about/modqueue/.rss?feed=*******************************************&user= 1*************. Please load the URL directly in your browser and try again.

at Object.fetchFeed (file:///var/task/user/app/rss.app.mjs:40:23)

at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

at async Object.fetchAndParseFeed (file:///var/task/user/app/rss.app.mjs:81:26)

at async Object.activate (file:///var/task/user/sources/new-item-in-feed/new-item-in-feed.mjs:29:13)

at async /var/task/index.js:95:13

at async captureObservations (/var/task/node_modules/@lambda-v2/component-runtime/src/captureObservations.js:28:5)

at async exports.main [as handler] (/var/task/index.js:60:20)
0 Comments
2025/01/10
18:58 UTC

0

Message sent to myself is showing up as "read" instead of a notification

I made a bot that sends a private message (NOT a chat) every time a scheduled script runs, to serve as a reminder. The problem is that the message shows up as sent from myself, so it appears as "read" and I don't get a notification for it. How can I fix this?

4 Comments
2025/01/10
16:36 UTC

4

403 error when attempting to access a JSON feed with bearer authorization?

I'm attempting to make the following get request to a private moderator feed, the URL for which I obtained through https://old.reddit.com/prefs/feeds.

get_request:
    URL: "https://old.reddit.com/r/mod/about/modqueue/.json?feed=*********&user=*********"
    headers:
        User-Agent: "pipedream/1"
        Authorization: "Bearer {{bearer_token}}}"

My authorization for this request is a bearer token that the code obtains from https://www.reddit.com/api/v1/access_token in a previous step. A new bearer token is requested every time the code runs, so the token expiring isn't a concern.

However, the request continuously fails with a status code 403. This code worked perfectly fine up until about 3 months ago, after which this error began occurring. The bearer token I'm using is also the same token that's being output by my POST request to https://www.reddit.com/api/v1/access_token, which returns successfully with the bearer token every time.

Did something change with Reddit's API in the past few months? Does anyone know any troubleshooting steps I could take to try and fix this?

Note: I'm not currently working with Python. This is a raw GET request that I'm making through a Pipedream workflow.

Here's the error response body, if it helps:

<!doctype html>
     <html>
  <head>
    <title>Blocked</title>
    <style>
      body {
          font: small verdana, arial, helvetica, sans-serif;
          width: 600px;
          margin: 0 auto;
      }

      h1 {
          height: 40px;
          background: transparent url(//www.redditstatic.com/reddit.com.header.png) no-repeat scroll top right;
      }
    </style>
  </head>
  <body>
    <h1>whoa there, pardner!</h1>

<p>Your request has been blocked due to a network policy.</p>

<p>Try logging in or creating an account <a href=https://www.reddit.com/login/>here</a> to get back to browsing.</p>

<p>If you're running a script or application, please register or sign in with your developer credentials <a href=https://www.reddit.com/wiki/api/>here</a>. Additionally make sure your User-Agent is not empty and is something unique and descriptive and try again. if you're supplying an alternate User-Agent string,
try changing back to default as that can sometimes result in a block.</p>

<p>You can read Reddit's Terms of Service <a href=https://www.reddit.com/wiki/api/>here</a>.</p>

<p>if you think that we've incorrectly blocked you or you would like to discuss
easier ways to get the data you want, please file a ticket <a href=https://support.reddithelp.com/hc/en-us/requests/new?ticket_form_id=21879292693140>here</a>.</p>

<p>when contacting us, please include your ip address which is: <strong>3.84.50.106</strong> and reddit account</p>
  </body>
</html>
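
For comparison, bearer tokens are normally honored only on oauth.reddit.com rather than old.reddit.com, so one thing worth testing is pointing the same request there. A minimal Python sketch; the User-Agent string is just an example:

    import requests

    token = "..."  # the bearer token from https://www.reddit.com/api/v1/access_token

    resp = requests.get(
        "https://oauth.reddit.com/r/mod/about/modqueue.json",  # oauth.reddit.com, not old.reddit.com
        headers={
            "Authorization": f"Bearer {token}",
            "User-Agent": "pipedream:modqueue-watcher:v1.0 (by u/your_username)",  # placeholder UA
        },
    )
    print(resp.status_code)
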
12 Comments
2025/01/09
16:26 UTC

2

How does ratelimit_seconds work?

I'd like to clarify the effect of configuring ratelimit_seconds

According to the docs, my understanding is that if I hit the rate limit, async praw will wait for max ratelimit_seconds + 1 second before raising an APIException.

So assuming that the rate limit resets every 600 seconds (which is what the current rate limit seems to be), if I set ratelimit_seconds to 600, does that mean that async praw will never raise an APIException and always automatically retry?

Docs for reference: https://asyncpraw.readthedocs.io/en/stable/getting_started/configuration/options.html#miscellaneous-configuration-options
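
For reference, here is how that option is typically passed when constructing the client (it can also live in praw.ini); the credentials are placeholders, and per the docs the default is 100 seconds:

    import asyncpraw

    # ratelimit_seconds is the longest sleep Async PRAW will accept from a
    # rate-limit response before raising the exception instead of waiting.
    reddit = asyncpraw.Reddit(
        client_id="your_client_id",
        client_secret="your_client_secret",
        username="your_username",
        password="your_password",
        user_agent="script:example:v1.0 (by u/your_username)",  # placeholder
        ratelimit_seconds=600,   # accept sleeps of up to ~10 minutes
    )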

7 Comments
2025/01/09
10:33 UTC

3

"restrict_posting": true

Can anyone tell me what the JSON object in the title means? It appears in the JSON response when you look up a Reddit user and add /about.json at the end. I was just looking to see whether these JSON responses have any good info on whether an account is shadowbanned, restricted, botted, spammy, or has a low trust score; you get the gist. Not much there, to be honest, but this piece caught my eye because it's the same with every account.

Any tips on how to check this and filter out potentially spammy/botted accounts are appreciated, but I'm mostly just curious about what this part means.

Here's an example response from a random old Reddit account:

{"kind": "t2", "data": {"is_employee": false, "is_friend": false, "subreddit": {"default_set": true, "user_is_contributor": false, "banner_img": "", "allowed_media_in_comments": [], "user_is_banned": false, "free_form_reports": true, "community_icon": null, "show_media": true, "icon_color": "#FFB470", "user_is_muted": null, "display_name": "u_account2", "header_img": null, "title": "", "previous_names": [], "over_18": false, "icon_size": [256, 256], "primary_color": "", "icon_img": "https://www.redditstatic.com/avatars/defaults/v2/avatar_default_1.png", "description": "", "submit_link_label": "", "header_size": null, "restrict_posting": true, "restrict_commenting": false, "subscribers": 0, "submit_text_label": "", "is_default_icon": true, "link_flair_position": "", "display_name_prefixed": "u/account2", "key_color": "", "name": "t5_473c7", "is_default_banner": true, "url": "/user/account2/", "quarantine": false, "banner_size": null, "user_is_moderator": false, "accept_followers": true, "public_description": "", "link_flair_enabled": false, "disable_contributor_requests": false, "subreddit_type": "user", "user_is_subscriber": false}, "snoovatar_size": null, "awardee_karma": 0, "id": "3pxxt", "verified": true, "is_gold": false, "is_mod": false, "awarder_karma": 0, "has_verified_email": false, "icon_img": "https://www.redditstatic.com/avatars/defaults/v2/avatar_default_1.png", "hide_from_robots": false, "link_karma": 1, "pref_show_snoovatar": false, "is_blocked": false, "total_karma": 2, "accept_chats": true, "name": "account2", "created": 1258079681.0, "created_utc": 1258079681.0, "snoovatar_img": "", "comment_karma": 1, "accept_followers": true, "has_subscribed": false, "accept_pms": true}}
3 Comments
2025/01/08
23:31 UTC
