/r/redditdev
A subreddit for discussion of Reddit's API and Reddit API clients.
Please confine discussion to Reddit's API instead of using this as a soapbox to talk to the admins. In particular, use /r/ideasfortheadmins for feature ideas and /r/bugs for bugs. If you have general reddit questions, try /r/help.
To see an explanation of recent user-facing changes to reddit (and the code behind them), check out /r/changelog.
To report a security issue with reddit, please send an email to whitehats@reddit.com.
This is an admin-sponsored subreddit.
Hey— I’m trying to make it so that I can post a video on multiple subreddits natively on Reddit (not a Link post) with a single click of a button.
The endpoint for posting seems to be /api/submit, for text and media alike, but I'm not sure what the request body should look like, or whether it's even possible to post a native video on Reddit without manually posting it.
Do I have to upload it to Reddit’s server before submitting a post? If so, how would I do that?
If anyone familiar with this can help out, I'd appreciate it.
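For reference, a minimal sketch of how this can work with PRAW, which handles the media-upload lease and the S3 upload for you before calling /api/submit with the video kind. All credential values are placeholders, and the build_targets helper is my own illustration:

```python
def build_targets(subreddits):
    # Hypothetical helper: normalize subreddit names ("r/test" -> "test").
    return [s.strip().removeprefix("r/") for s in subreddits if s.strip()]

def main():  # not invoked here; needs real credentials and network access
    import praw

    reddit = praw.Reddit(
        client_id="...",
        client_secret="...",
        username="...",
        password="...",
        user_agent="script:multi-sub-video:v0.1 (by u/YourUsername)",
    )
    for name in build_targets(["r/test", "r/videos"]):
        # submit_video uploads the file to Reddit's media servers first,
        # then submits a native video post (not a link post).
        reddit.subreddit(name).submit_video(
            title="My video title",
            video_path="clip.mp4",
        )
```

Note that each subreddit's rules (and spam filters) still apply independently, so identical posts across many subreddits may get removed even when the upload itself succeeds.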
Hello, I am trying to retrieve the accounts_active information from the endpoint: "https://oauth.reddit.com/r/javascript/about.json".
I can successfully fetch the data on my local machine, but when I try to do the same via AWS Lambda, I get a "Forbidden" error.
Should I authenticate? Should I send a User-Agent? I've tried everything, but nothing works, and every source seems to say something different...
What should I do?
Thanks.
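For what it's worth, the usual fix is to send both an OAuth token and a unique, descriptive User-Agent; Reddit blocks requests from cloud-provider IP ranges far more aggressively when either is missing. A stdlib-only sketch, where the token value and User-Agent string are placeholders:

```python
import json
import urllib.request

def build_headers(token, user_agent):
    # Reddit rejects empty or generic User-Agents; identify your app clearly.
    return {"Authorization": f"bearer {token}", "User-Agent": user_agent}

def main():  # not invoked here; needs a valid OAuth token and network access
    headers = build_headers(
        "YOUR_ACCESS_TOKEN",
        "lambda:active-users-checker:v0.1 (by u/YourUsername)",
    )
    req = urllib.request.Request(
        "https://oauth.reddit.com/r/javascript/about.json", headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["data"].get("accounts_active"))
```

Since the URL is on oauth.reddit.com, authentication is required; anonymous requests to that host will fail regardless of the User-Agent.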
I am working on a project where I need to track daily counts of keywords for different subreddits. Is there an easy way to do this aside from downloading all the dumps?
Please explain: if Reddit is meant for live communication between people, how can it offer an API for automated communication?
Hello! I created a Reddit scraper with ChatGPT that counts how many posts a user has made in a specific subreddit over a given time frame. The results are saved to a CSV file (which opens in Excel), making it easy to analyze user activity in any subreddit you're interested in. This code works on Python 3.7+.
How to use it:
client_id is located right under the app name; client_secret is on the same page, labeled 'secret'. Your user_agent is a string you define in your code to identify your app, formatted like this: "platform:AppName:version (by u/YourRedditUsername)". For example, if your app is called "RedditScraper" and your Reddit username is JohnDoe, you would set it like this: "windows:RedditScraper:v1.0 (by u/JohnDoe)".
pip install pandas praw
If you encounter a permissions error, use sudo:
sudo pip install pandas praw
After that, verify the installation:
python -m pip show praw pandas
OR python3 -m pip show praw pandas
Copy and paste the code:
import praw
import pandas as pd
from datetime import datetime, timedelta

client_id = 'your_client_id'  # Your client_id from Reddit
client_secret = 'your_client_secret'  # Your client_secret from Reddit
user_agent = 'your_user_agent'  # Make sure it is unique and clearly describes your application (e.g., 'windows:YourAppName:v1.0 (by u/YourRedditUsername)')

reddit = praw.Reddit(
    client_id=client_id,
    client_secret=client_secret,
    user_agent=user_agent
)

subreddit_name = 'subreddit'  # Change to the subreddit of your choice

time_window = datetime.utcnow() - timedelta(days=30)  # Look back 30 days

user_post_count = {}

for submission in reddit.subreddit(subreddit_name).new(limit=100):  # Fetching 100 posts
    # Check if the post was created within the last 30 days
    post_time = datetime.utcfromtimestamp(submission.created_utc)
    if post_time > time_window:
        user = submission.author.name if submission.author else None
        if user:
            # Count the posts per user
            if user not in user_post_count:
                user_post_count[user] = 1
            else:
                user_post_count[user] += 1
user_data = [(user, count) for user, count in user_post_count.items()]
df = pd.DataFrame(user_data, columns=["Username", "Post Count"])
df.to_csv(f"{subreddit_name}_user_post_counts.csv", index=False)
print(df)
Replace the placeholders with your actual credentials:
client_id = 'your_client_id'
client_secret = 'your_client_secret'
user_agent = 'your_user_agent'
Set the subreddit name you want to scrape. For example, if you want to scrape posts from r/learnpython, replace 'subreddit' with 'learnpython'.
The script will fetch the latest 100 posts from the chosen subreddit. To adjust that, you can change the 'limit=100' in the following line to fetch more or fewer posts:
for submission in reddit.subreddit(subreddit_name).new(limit=100): # Fetching 100 posts
You can modify the time by changing 'timedelta(days=30)' to a different number of days, depending on how far back you want to get user posts:
time_window = datetime.utcnow() - timedelta(days=30) # Set the time range
Keep in mind that scraping too many posts in a short period of time could result in your account being flagged or banned by Reddit; ideally stay at no more than 100–200 posts per request. It's important to set reasonable limits to avoid any issues with Reddit's API or community guidelines. [Github](https://github.com/InterestingHome889/Reddit-scraper-that-counts-how-many-posts-a-user-has-made-in-a-subreddit./tree/main)
I don't want to learn Python at this moment; that's why I used ChatGPT.
Hi! I want to download all comments from a Reddit post for some research, but I have no idea how API/coding works and can't make sense of any of the tools people are providing on here. Does anyone have any advice on how an absolute beginner to coding could download all comments (including nested) into an excel file?
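Not quite a no-code answer, but for anyone landing here, the usual PRAW approach is only a few lines: replace_more(limit=None) expands every "load more comments" stub (so nested comments are included), and the csv module writes a file Excel opens directly. Credentials and the example URL are placeholders; post_id_from_url is my own helper:

```python
import re

def post_id_from_url(url):
    # Pull the base-36 post id out of a full Reddit permalink.
    m = re.search(r"/comments/([a-z0-9]+)", url)
    return m.group(1) if m else None

def main():  # not invoked here; needs real credentials and network access
    import csv
    import praw

    reddit = praw.Reddit(
        client_id="...",
        client_secret="...",
        user_agent="script:comment-export:v0.1 (by u/YourUsername)",
    )
    post_id = post_id_from_url(
        "https://www.reddit.com/r/test/comments/abc123/example/"
    )
    submission = reddit.submission(id=post_id)
    submission.comments.replace_more(limit=None)  # expand all nested stubs
    with open("comments.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["author", "body", "score"])
        for c in submission.comments.list():  # flat list, nested included
            writer.writerow([str(c.author), c.body, c.score])
```

Large threads can take a while, since each replace_more expansion is an extra API request.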
I'm receiving only 404 errors from the GET /api/v1/me/friends/username endpoint. Maybe the docs haven't caught up to it being sacked?
Thoughts? Ideas?
import logging, random, sys, praw
from icecream import ic
lsh = logging.StreamHandler()
lsh.setLevel(logging.DEBUG)
lsh.setFormatter(logging.Formatter("%(asctime)s: %(name)s: %(levelname)s: %(message)s"))
for module in ("praw", "prawcore"):
logger = logging.getLogger(module)
logger.setLevel(logging.DEBUG)
logger.addHandler(lsh)
reddit = ic(praw.Reddit("script-a"))
redditor = ic(random.choice(reddit.user.friends()))
if not redditor:
sys.exit(1)
info = ic(redditor.friend_info())
Async PRAW is pinned to aiosqlite v0.17.0, which is over three years old. Any ideas why this may be?
We just received a bug report that PRAW is emitting 429 exceptions. These exceptions shouldn't occur, as PRAW preemptively sleeps to avoid going over the rate limit. In addition to this report, I've heard of other people experiencing the same issue.
Could this newly observed behavior be due to a bug in how rate limits are handled on Reddit's end? If so, is this something that might be rolled back?
Thanks!
Hello,
I created an account to post automated updates in my own subreddit page. I used "bot" in the username to make clear that it's a bot, used the API for posting, and didn't post anywhere outside of my own subreddit.
Unfortunately, the account was blocked. I contacted help several times. Eventually, after a couple of months, I tried creating a new bot account in case the previous block was an accident. The new account was blocked right away after posting one message with the API.
Did I do anything wrong? I understand that it's not the place to ask to unblock an account, and I tried to contact help, but didn't hear back. I'm just trying to understand whether I violated any rules, to understand what my options are and to avoid doing any similar violations in the future.
Thank you.
Trying to work around the limitations of my web host.
I have code that is triggered externally to send a conversion event for an ad, however I can't figure out how to use PRAW or the standard Reddit API to do so in Python.
I think I'm past authentication but looking for any examples. Thanks in advance.
Hi everyone,
I'm trying to set up a Reddit bot using a Script app with the "password" grant type, but I keep getting a 401 Unauthorized error when requesting an access token from /api/v1/access_token.

Here's a summary of my setup: I'm sending client_id, client_secret, username, and password. Despite this, every attempt fails with the following response:

401 Unauthorized
{"message": "Unauthorized", "error": 401}
Is the "password" grant still supported for Script apps in 2025? Are there specific restrictions or known issues I might be missing?
I've been trying to figure out how to create post previews like what's created on Discord.
I found this post: https://www.reddit.com/r/redditdev/comments/1ervz8l/fetching_basic_data_about_a_post_from_a_url/, which appears to be from someone looking to do the same thing, but I'm unsure if they were able to get it working.
Like that OP, when I try to simply make a request to the submission link via Python, I'm getting a 403 forbidden. Based on my exploration, there isn't a way to get this information from PRAW, but is there some other way I can retrieve it using the same authentication information I do for my PRAW instance?
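One note: PRAW can load a submission directly from its URL via reddit.submission(url=...), using the same credentials as the rest of your PRAW code, which sidesteps the unauthenticated 403. A sketch, where the URL is a placeholder and embed_fields is my own illustration of the data a preview might need:

```python
def embed_fields(title, permalink, thumbnail):
    # Hypothetical helper: the handful of fields a link preview typically uses.
    return {
        "title": title,
        "url": "https://www.reddit.com" + permalink,
        "thumbnail": thumbnail,
    }

def main():  # not invoked here; needs real credentials and network access
    import praw

    reddit = praw.Reddit(
        client_id="...",
        client_secret="...",
        user_agent="script:preview-builder:v0.1 (by u/YourUsername)",
    )
    s = reddit.submission(
        url="https://www.reddit.com/r/redditdev/comments/abc123/example/"
    )
    print(embed_fields(s.title, s.permalink, s.thumbnail))
```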
Hi devs,
Over the coming days, we will be removing a number of obsolete endpoints from the Data API as part of an effort to clean up legacy code.
The endpoints being removed have been inactive and unused for over six months, and are no longer returning Reddit data. Many of these endpoints are tied to deprecated features and surfaces and are already effectively dead.
These endpoints will be completely removed from the Data API on February 15, 2025.
Note that these changes are not indicative of plans to remove actively used endpoints from our Data API.
Edit: our post previously stated GET_friends would be removed, we've updated the post to reflect the accurate list.
This is my scenario:
I plan to create a bot that can be summoned (either via name or triggered by a specific phrase), and this bot will only be tracking comments made by users in one particular post that I will make (like a megathread type of post).
My question is, what is the rate limit that I should be prepared for in this scenario? For example what happens if 20 different users summon the same bot in the same thread in 1 minute? Will that cause some rate limit issues? Does anyone know what the actual documented rate limit is?
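One way to answer this empirically: Reddit reports your remaining budget in X-Ratelimit-* response headers, and PRAW exposes the parsed values as reddit.auth.limits after any request. A sketch (credentials are placeholders; seconds_until_reset is my own helper):

```python
import time

def seconds_until_reset(reset_timestamp, now=None):
    # How long until the rate-limit window rolls over (never negative).
    now = time.time() if now is None else now
    return max(0.0, reset_timestamp - now)

def main():  # not invoked here; needs real credentials and network access
    import praw

    reddit = praw.Reddit(
        client_id="...",
        client_secret="...",
        user_agent="script:ratelimit-probe:v0.1 (by u/YourUsername)",
    )
    reddit.subreddit("redditdev").id  # any API call populates the limits
    limits = reddit.auth.limits  # keys: 'remaining', 'reset_timestamp', 'used'
    print(limits, seconds_until_reset(limits["reset_timestamp"]))
```

For a bot replying to 20 summons a minute this budget is usually fine, though Reddit separately throttles how often a young, low-karma account may comment.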
Are there any specific requirements for a bot to be able to post without its posts being removed? If I make my bot a mod in my own subreddit, will that help? I made the bot an approved user in my subreddit, but the subreddit got banned for spam. I got this as a task for an internship, and I don't know how to do it safely without violating Reddit's rules.
I am using PRAW to create a Reddit bot that posts to a chosen set of subreddits at random, but as soon as I post, my post is removed by AutoModerator. So I tried it in my own subreddit, and it got removed again by the reputation filter. I didn't spam much before getting blocked; I got blocked the first time I tried to post. The only subreddit where my post wasn't removed was r/learnpython. Please help, I need urgent help. I need to submit this task by tomorrow.
Hello everyone,
I was extracting posts using PRAW to build a dataset for fine-tuning an open-source model into a diabetes-specialized chatbot for my master's degree final project. I only managed to extract almost 2000 posts from r/diabetes, but I think I need more. How can I extract more than 1000 posts? Could I use subreddit.search() to get all posts from 2024, maybe month by month (January, then February, and so on)? Is there a solution to this?
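One partial workaround, since the ~1000-item cap applies to each listing independently: pull several listings (new, top, plus search queries) and deduplicate by post id. A sketch; credentials and the search query are placeholders, merge_unique is my own helper, and this still won't guarantee complete coverage of a full year:

```python
def merge_unique(*listings):
    # Concatenate listings of post ids, keeping the first occurrence of each.
    seen, merged = set(), []
    for listing in listings:
        for post_id in listing:
            if post_id not in seen:
                seen.add(post_id)
                merged.append(post_id)
    return merged

def main():  # not invoked here; needs real credentials and network access
    import praw

    reddit = praw.Reddit(
        client_id="...",
        client_secret="...",
        user_agent="script:dataset-builder:v0.1 (by u/YourUsername)",
    )
    sub = reddit.subreddit("diabetes")
    ids = merge_unique(
        [s.id for s in sub.new(limit=None)],
        [s.id for s in sub.top(time_filter="year", limit=None)],
        [s.id for s in sub.search("insulin", limit=None)],  # example query
    )
    print(len(ids))
```

Varying the search query (different keywords, sorts, and time_filter values) lets each query contribute its own up-to-1000 results to the merged set.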
I am making API calls for the NSFW status of threads and subreddits; can anyone help me with how to do that?
Writing a script that needs to detect if a user is active, suspended, has deleted their account or if no account exists.
I can test against active accounts, non-existent accounts, I know an account that was deleted, but is there a known user account that is suspended?
Also, for a deleted account (this happened recently) the API returns the same as a non-existent account but gives "This user has deleted their account." in the UI.
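A sketch of the decision logic based on that observed behavior: /user/<name>/about returns 404 for both non-existent and deleted accounts (so the API alone can't tell them apart), while suspended accounts return a small body with is_suspended set to true. The classify helper and NotFoundError wrapper are my own; the PRAW-backed fetch below is one possible adapter:

```python
class NotFoundError(Exception):
    """Stand-in for the 404 your HTTP layer raises (e.g. prawcore's NotFound)."""

def classify(fetch_about):
    # fetch_about: callable returning the user's about-data dict, or raising
    # NotFoundError when the account doesn't exist (or was deleted).
    try:
        data = fetch_about()
    except NotFoundError:
        return "nonexistent-or-deleted"
    return "suspended" if data.get("is_suspended") else "active"

def main():  # not invoked here; needs real credentials and network access
    import praw
    import prawcore

    reddit = praw.Reddit(
        client_id="...",
        client_secret="...",
        user_agent="script:account-checker:v0.1 (by u/YourUsername)",
    )

    def fetch_about():
        try:
            redditor = reddit.redditor("spez")
            # Suspended profiles expose is_suspended; active ones may omit it.
            return {"is_suspended": getattr(redditor, "is_suspended", False)}
        except prawcore.exceptions.NotFound as exc:
            raise NotFoundError from exc

    print(classify(fetch_about))
```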
Hello. I am using the Reddit API (https://www.reddit.com/dev/api) for reporting purposes. I created an app in my Reddit account and am using its key and secret to download data about my account's posts, like post date, upvotes, and number of comments. They are regular, non-ad posts. I have been trying to get the post impressions/views shown in the Insights tab (https://imgur.com/a/F6rmfW7) through the API, but it seems this data point is not available. So my question is: how do I get post views/impressions through the Reddit API? Thank you!
As the title says.
I am developing an app, and wanted to see if I can use reddit as SSO in addition to gmail/ms/apple
I am OK even if it requires some custom code
Or if anyone knows how to make such a script, I will pay if it works. Thanks for the help.
Hi everyone,
I have a list of Reddit post URLs (around 100 URLs) and I'd like to know the number of comments on each of them. Is there a way to do this easily without needing to know Python or programming?
I'm looking for a solution that would allow me to input the URLs, and then get the number of comments for each post. Any help or advice would be greatly appreciated!
Thanks in advance!
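It does take a small script, but about the simplest possible one: Reddit serves public JSON if you append .json to a post URL, and the post's num_comments field is in the first listing element. A sketch; the URL is a placeholder and extract_num_comments assumes the standard listing shape:

```python
import json
import urllib.request

def extract_num_comments(listing):
    # The first listing element holds the post itself; comments follow in the second.
    return listing[0]["data"]["children"][0]["data"]["num_comments"]

def main():  # not invoked here; needs network access
    urls = [
        "https://www.reddit.com/r/redditdev/comments/abc123/example/",
        # ... your ~100 URLs ...
    ]
    for url in urls:
        req = urllib.request.Request(
            url.rstrip("/") + "/.json?limit=1",
            headers={"User-Agent": "script:comment-counter:v0.1 (by u/YourUsername)"},
        )
        with urllib.request.urlopen(req) as resp:
            print(url, extract_num_comments(json.load(resp)))
```

Unauthenticated JSON requests are heavily rate limited, so pace the 100 URLs with a short sleep between requests.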
I'm building a SaaS and I'm looking to get insights from various people on certain subreddits. Are there any tools out there that can do this? TIA!
Has anyone had issues accessing private Reddit feeds through RSS readers or cloud automation platforms? I'm attempting to fetch data from my bot's modqueue feed through Pipedream. The feed works completely fine when opening it in a browser (even when I'm not logged in, as the authentication data is included in the URL itself). However, when attempting to access it through Pipedream, the request isn't able to go through. I've also double-checked the URL to make sure it's correct and up-to-date. (I've experienced similar issues when looking into MonotoRSS as a temporary replacement, though I haven't tested that platform with this feed specifically.) Is there anything I need to know or do when it comes to working with these feeds? Has anyone else experienced similar issues?
If it helps, here's the error I'm receiving:
ConfigurationError: Error fetching URL https://old.reddit.com/r/mod/about/modqueue/.rss?feed=*******************************************&user= 1*************. Please load the URL directly in your browser and try again.
at Object.fetchFeed (file:///var/task/user/app/rss.app.mjs:40:23)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.fetchAndParseFeed (file:///var/task/user/app/rss.app.mjs:81:26)
at async Object.activate (file:///var/task/user/sources/new-item-in-feed/new-item-in-feed.mjs:29:13)
at async /var/task/index.js:95:13
at async captureObservations (/var/task/node_modules/@lambda-v2/component-runtime/src/captureObservations.js:28:5)
at async exports.main [as handler] (/var/task/index.js:60:20)
I made a bot that sends a private message (NOT a chat) every time a scheduled script runs (to serve as a reminder). The problem is the message is showing up as sent from myself so therefore it appears as "read" and I don't get a notification for it. How can I fix this?
I'm attempting to make the following get request to a private moderator feed, the URL for which I obtained through https://old.reddit.com/prefs/feeds.
get_request:
URL: "https://old.reddit.com/r/mod/about/modqueue/.json?feed=*********&user=*********"
headers:
User-Agent: "pipedream/1"
Authorization: "Bearer {{bearer_token}}}"
My authorization for this request is a bearer token that the code obtains from https://www.reddit.com/api/v1/access_token in a previous step. A new bearer token is requested every time the code runs, so the token expiring isn't a concern.
However, the request continuously fails with a status code 403. This code worked perfectly fine up until about 3 months ago, after which this error began occurring. The bearer token I'm using is also the same token that's output by my POST request to https://www.reddit.com/api/v1/access_token, which returns successfully with a bearer token every time.
Did something change with Reddit's API in the past few months? Does anyone know any troubleshooting steps I could take to try and fix this?
Note: I'm not currently working with Python. This is a raw GET request that I'm making through a Pipedream workflow.
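One thing worth checking: bearer tokens are only honored on oauth.reddit.com; sending an Authorization header to old.reddit.com or www.reddit.com tends to get blocked by Reddit's network policy, which would match a 403 with an HTML "Blocked" page. A sketch of the host rewrite (the token is a placeholder, and oauth_url is my own helper):

```python
def oauth_url(url):
    # Rewrite a www/old Reddit URL onto the OAuth API host. With a bearer
    # token, the feed/user query parameters from /prefs/feeds aren't needed.
    for host in ("https://old.reddit.com", "https://www.reddit.com"):
        if url.startswith(host):
            return "https://oauth.reddit.com" + url[len(host):]
    return url

def main():  # not invoked here; needs a valid bearer token and network access
    import json
    import urllib.request

    req = urllib.request.Request(
        oauth_url("https://old.reddit.com/r/mod/about/modqueue/.json"),
        headers={
            "Authorization": "Bearer YOUR_BEARER_TOKEN",
            "User-Agent": "pipedream:modqueue-fetcher:v0.1 (by u/YourUsername)",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```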
Here's the error response body, if it helps:
<!doctype html>
<html>
<head>
<title>Blocked</title>
<style>
body {
font: small verdana, arial, helvetica, sans-serif;
width: 600px;
margin: 0 auto;
}
h1 {
height: 40px;
background: transparent url(//www.redditstatic.com/reddit.com.header.png) no-repeat scroll top right;
}
</style>
</head>
<body>
<h1>whoa there, pardner!</h1>
<p>Your request has been blocked due to a network policy.</p>
<p>Try logging in or creating an account <a href=https://www.reddit.com/login/>here</a> to get back to browsing.</p>
<p>If you're running a script or application, please register or sign in with your developer credentials <a href=https://www.reddit.com/wiki/api/>here</a>. Additionally make sure your User-Agent is not empty and is something unique and descriptive and try again. if you're supplying an alternate User-Agent string,
try changing back to default as that can sometimes result in a block.</p>
<p>You can read Reddit's Terms of Service <a href=https://www.reddit.com/wiki/api/>here</a>.</p>
<p>if you think that we've incorrectly blocked you or you would like to discuss
easier ways to get the data you want, please file a ticket <a href=https://support.reddithelp.com/hc/en-us/requests/new?ticket_form_id=21879292693140>here</a>.</p>
<p>when contacting us, please include your ip address which is: <strong>3.84.50.106</strong> and reddit account</p>
</body>
</html>
I'd like to clarify the effect of configuring ratelimit_seconds.

According to the docs, my understanding is that if I hit the rate limit, Async PRAW will wait for at most ratelimit_seconds + 1 second before raising an APIException.

So assuming the rate limit resets every 600 seconds (which is what the current rate limit seems to be), if I set ratelimit_seconds to 600, does that mean Async PRAW will never raise an APIException and will always automatically retry?
Docs for reference: https://asyncpraw.readthedocs.io/en/stable/getting_started/configuration/options.html#miscellaneous-configuration-options
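That matches my reading of the documented rule, sketched below as a predicate (an illustration of the behavior, not Async PRAW's actual internals): when the API asks for a delay, PRAW sleeps if the delay fits within ratelimit_seconds plus one second of slack, and raises otherwise. With ratelimit_seconds=600 you would only see the exception if the server ever requested a wait longer than 601 seconds:

```python
def praw_sleeps(requested_delay, ratelimit_seconds=100):
    # Documented rule: sleep when the server-requested delay is at most
    # ratelimit_seconds + 1; otherwise raise. 100 is the documented default.
    return requested_delay <= ratelimit_seconds + 1
```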
Can anyone tell me what the JSON object in the caption means? It appears in the JSON response when you look up a Reddit user and add /about.json at the end. I was just checking whether these JSON responses have any good info on whether an account is shadowbanned, restricted, botted, spammy, low trust score; you get the gist. Not much there tbh, but this piece caught my eye because it's the same with every account.
Any tips on how to check this stuff and filter out potentially spammy/botted accounts are appreciated, but I'm mostly just curious what this part means.
Here's an example response from a random old Reddit account:
{"kind": "t2", "data": {"is_employee": false, "is_friend": false, "subreddit": {"default_set": true, "user_is_contributor": false, "banner_img": "", "allowed_media_in_comments": [], "user_is_banned": false, "free_form_reports": true, "community_icon": null, "show_media": true, "icon_color": "#FFB470", "user_is_muted": null, "display_name": "u_account2", "header_img": null, "title": "", "previous_names": [], "over_18": false, "icon_size": [256, 256], "primary_color": "", "icon_img": "https://www.redditstatic.com/avatars/defaults/v2/avatar_default_1.png", "description": "", "submit_link_label": "", "header_size": null, "restrict_posting": true, "restrict_commenting": false, "subscribers": 0, "submit_text_label": "", "is_default_icon": true, "link_flair_position": "", "display_name_prefixed": "u/account2", "key_color": "", "name": "t5_473c7", "is_default_banner": true, "url": "/user/account2/", "quarantine": false, "banner_size": null, "user_is_moderator": false, "accept_followers": true, "public_description": "", "link_flair_enabled": false, "disable_contributor_requests": false, "subreddit_type": "user", "user_is_subscriber": false}, "snoovatar_size": null, "awardee_karma": 0, "id": "3pxxt", "verified": true, "is_gold": false, "is_mod": false, "awarder_karma": 0, "has_verified_email": false, "icon_img": "https://www.redditstatic.com/avatars/defaults/v2/avatar_default_1.png", "hide_from_robots": false, "link_karma": 1, "pref_show_snoovatar": false, "is_blocked": false, "total_karma": 2, "accept_chats": true, "name": "account2", "created": 1258079681.0, "created_utc": 1258079681.0, "snoovatar_img": "", "comment_karma": 1, "accept_followers": true, "has_subscribed": false, "accept_pms": true}}