/r/redditdev
A subreddit for discussion of Reddit's API and Reddit API clients.
Please confine discussion to Reddit's API instead of using this as a soapbox to talk to the admins. In particular, use /r/ideasfortheadmins for feature ideas and /r/bugs for bugs. If you have general reddit questions, try /r/help.
To see an explanation of recent user-facing changes to reddit (and the code behind them), check out /r/changelog.
To report a security issue with reddit, please send an email to whitehats@reddit.com.
This is an admin-sponsored subreddit.
Hi,
I used to code a little in the past and want to dabble some more today. Currently I can't stand the fact that I can't easily search or back up my Reddit chats and messages, where I have lots of useful information.
Are there any existing 3rd party apps today that do this easily already?
How difficult would it be to build something like this? I'm imagining a small service that regularly hits the messages/chat APIs (if both exist) to sync messages into a lightweight database like Postgres, with a really simple search-and-browse interface. It would probably need something open-source like Elasticsearch eventually, but even simple SQL queries could work to start.
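As a rough sketch of the shape this could take (assuming a script-type OAuth app with the privatemessages scope; this covers legacy private messages via the documented /message/inbox listing, since as far as I know the newer chat system has no comparable public API):

import sqlite3
import requests

TOKEN = "..."  # OAuth access token with the privatemessages scope
HEADERS = {"Authorization": f"bearer {TOKEN}",
           "User-Agent": "message-backup/0.1 by u/yourname"}

db = sqlite3.connect("messages.db")
db.execute("CREATE TABLE IF NOT EXISTS messages "
           "(id TEXT PRIMARY KEY, author TEXT, created REAL, body TEXT)")

# Page through the inbox listing and upsert each message.
after = None
while True:
    listing = requests.get("https://oauth.reddit.com/message/inbox",
                           headers=HEADERS,
                           params={"limit": 100, "after": after}).json()
    for child in listing["data"]["children"]:
        m = child["data"]
        db.execute("INSERT OR REPLACE INTO messages VALUES (?, ?, ?, ?)",
                   (m["name"], m.get("author"), m["created_utc"], m["body"]))
    db.commit()
    after = listing["data"].get("after")
    if not after:
        break

# Search is then plain SQL:
# SELECT author, body FROM messages WHERE body LIKE '%search term%';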
Revisiting an old bug: we have a bot that posts daily threads, and it should be able to sticky them. However, when I tried to implement that, Reddit would throw a 500, so I gave up and used AutoMod rules. That's kind of a pain, though, so I decided to revisit it.
Here are the API docs from Reddit:
https://www.reddit.com/dev/api/#POST_api_set_subreddit_sticky
Here is what I'm sending and receiving:
headers: Object [AxiosHeaders] {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/x-www-form-urlencoded',
Authorization: 'bearer ey<truncated>',
'User-Agent': 'axios/1.7.7',
'Content-Length': '35',
'Accept-Encoding': 'gzip, compress, deflate, br'
},
baseURL: 'https://oauth.reddit.com/api/',
method: 'post',
url: 'set_subreddit_sticky',
data: 'api_type=json&id=1h41h5v&state=true',
__isRetryRequest: true
},
code: 'ERR_BAD_RESPONSE',
status: 500
I tried to fetch and attach the modhash as a header, but the API returns null for the modhash, so I don't think that's it. The bot is authenticated over OAuth and can do other mod actions without issue.
Any ideas?
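For reference, a minimal reproduction of that call in Python (parameters taken from the linked docs; one thing worth double-checking is that the docs describe id as the fullname of a thing, i.e. with the t3_ prefix, while the request above sends the bare post id):

import requests

headers = {
    "Authorization": "bearer <token>",
    "User-Agent": "dailybot/1.0 by u/yourname",  # placeholder
}
data = {
    "api_type": "json",
    "id": "t3_1h41h5v",  # fullname, not the bare id
    "state": "true",
}
resp = requests.post("https://oauth.reddit.com/api/set_subreddit_sticky",
                     headers=headers, data=data)
print(resp.status_code, resp.text)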
EDIT: Side note, if anyone thinks there would be enthusiasm for a TypeScript wrapper for the Reddit API, do let me know.
Hello devs, I'd like to propose a feature that I think would greatly improve our search experience: time-specific search filters. This feature would allow users to filter search results by specific dates, months, or years.
Here's a simple example of how this could work:
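For instance (purely hypothetical syntax, just to illustrate the idea; nothing like this exists today):

"api changes" after:2023-01-01 before:2023-06-30
"oauth" year:2022
"praw" month:2024-05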
I'm reading through this: https://github.com/reddit-archive/reddit/wiki/OAuth2 and figuring out the application only oauth for my web app.
If I interpreted the docs correctly, I ended up with this post request to retrieve my token, which would allow for api calls:
POST https://www.reddit.com/api/v1/access_token
BODY of post: grant_type=client_credentials & user="the 'web app' number" & password="the_secret" given to me when I created the app.
Running that post request gave me an access token, but the token expires in 24 hours. Normally I'd put it in an ENV var, but now I'm not sure what to do since there's no refresh token.
Am I doing something wrong? If not, what's the best strategy? Put it in the DB and make a call to the DB to get the token, and if it expires create a new one and update the database?
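If it helps, one common pattern is to cache the token together with its expiry and mint a new one only when it has (nearly) expired. A minimal sketch under the assumptions above; the client id/secret go in HTTP basic auth for the client_credentials grant, per the OAuth2 wiki, and the in-memory cache here could just as well be a DB row:

import time
import requests

_cached = {"token": None, "expires_at": 0}

def get_app_token(client_id, client_secret):
    # Reuse the cached token until shortly before it expires.
    if _cached["token"] and time.time() < _cached["expires_at"] - 60:
        return _cached["token"]
    resp = requests.post(
        "https://www.reddit.com/api/v1/access_token",
        auth=(client_id, client_secret),
        data={"grant_type": "client_credentials"},
        headers={"User-Agent": "myapp/0.1 by u/yourname"},
    )
    payload = resp.json()
    _cached["token"] = payload["access_token"]
    _cached["expires_at"] = time.time() + payload["expires_in"]
    return _cached["token"]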
Hi, I'm new here...
It's possible someone has asked the same thing here before, but I haven't found a solution to my problem:
I need to retrieve ALL the posts from a specific subreddit (I'm not a moderator) and also ALL the comments for each post, so I tried PRAW, without luck: even though I easily established a connection with Reddit, I couldn't get all the posts (only up to 1000).
Some people mention Pushshift, but as far as I know I can only use it if I'm a moderator, which I am not. Does anyone know a solution? Sorry, but the official Reddit docs aren't clear enough for me.
I tried adding an API key and that didn't work. Changed different user-agents, that didn't work. I'm sending requests from a DigitalOcean server. I tried a different DO server, that didn't work. Sending the request through Tor works, for whatever reason. What's the best way of handling this? Should I contact them?
I get this error:
Your request has been blocked due to a network policy.
Try logging in or creating an account here to get back to browsing.
If you're running a script or application, please register or sign in with your developer credentials here. Additionally, make sure your User-Agent is not empty and is something unique and descriptive, and try again. If you're supplying an alternate User-Agent string, try changing back to default, as that can sometimes result in a block.
You can read Reddit's Terms of Service here.
If you think that we've incorrectly blocked you or you would like to discuss easier ways to get the data you want, please file a ticket here. When contacting us, please include your IP address, which is: x.x.x.x, and your Reddit account.
Hi,
Apologies if the following questions are dumb (they probably are), but I can't find specific answers and don't understand the following regarding the Reddit API. Could someone please help out?
When I call it with HTTP basic auth and when I call it without auth, I get the same response. How is this working without auth?
I am meant to be pulling posts from four subreddits (r/Austin, r/chicago, r/philadelphia, r/sanfrancisco), and I cannot seem to get my code to pull ALL the posts into four separate CSVs. Is there something about Reddit's API that I should know about? Can I not pull that many posts? Can I not pull from that far back?
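For what it's worth, a minimal PRAW sketch of that loop (placeholder credentials). The likely catch isn't the code: Reddit listings only go back roughly 1,000 items, so "all posts, all the way back" isn't reachable through the live API alone:

import csv
import praw

reddit = praw.Reddit(client_id="...", client_secret="...",
                     user_agent="city-posts/0.1 by u/yourname")

for sub in ["Austin", "chicago", "philadelphia", "sanfrancisco"]:
    with open(f"{sub}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "created_utc", "title"])
        # limit=None pages as far as the API allows (about 1,000 posts)
        for post in reddit.subreddit(sub).new(limit=None):
            writer.writerow([post.id, post.created_utc, post.title])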
Hello! I'm a little bit of a newbie in system design. I was just studying the system architecture for Reddit, and I'm wondering why they use PostgreSQL. My understanding of the Thing table is this: there are IDs and metadata, plus a relationship table linking two things' IDs. Then there is a key-value table for the actual data, for example JSON as the value. My understanding is that they even use Cassandra, which is column-based and might be faster for indexing. If they want to store post data or anything like that, throwing all the data into Cassandra sounds reasonable to me.
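To make the "thing table plus key-value table" idea concrete, here is a toy version of that layout (my own simplification in Python/SQLite; the real schema differs in the details):

import sqlite3

db = sqlite3.connect(":memory:")
# A "thing" row holds only an id plus common metadata.
db.execute("CREATE TABLE thing (id INTEGER PRIMARY KEY, type TEXT, created REAL)")
# Every other attribute lives in a generic key-value table.
db.execute("CREATE TABLE data (thing_id INTEGER, key TEXT, value TEXT)")

db.execute("INSERT INTO thing VALUES (1, 'link', 1201285253)")
db.execute("INSERT INTO data VALUES (1, 'title', 'Hello world')")
db.execute("INSERT INTO data VALUES (1, 'author', 'someuser')")

# Reassembling a thing means collecting its key-value rows.
rows = db.execute("SELECT key, value FROM data WHERE thing_id = 1").fetchall()
print(dict(rows))  # {'title': 'Hello world', 'author': 'someuser'}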
Then I came up with a few questions.
I know I might miss lots of details and not even understand, but I looked through lots of posts but couldn't understand so help is really appreciated. Thanks!
So with the recent changes, Power Delete Suite misses many old things, so I updated PRAW to 7.8.1 on Python, and it seems user.comments.new(limit=None) doesn't actually see them.
I'm guessing it will take some time for reddit to pass this to praw?
Edit: just tried the Reddit API directly; it doesn't show them either, lol, neither for comments nor submitted.
edit for reference this is what I'm talking about
Has anyone managed to get around this x-ratelimit-remaining limit on old.reddit? I've researched it a lot, but there's never been a fix anywhere.
What happens is, when using old.reddit, I can only browse for a few minutes before hitting an API rate limit that then locks me out from using reddit until the rate resets - which seems to be every 10 minutes. Anytime I try to open any reddit links, I just get a reddit header and blank pages until the rate resets.
You can see the API rate, remaining and reset, if you open up dev tools on your browser (usually Ctrl + Shift + I), swap to the Network tab, refresh the page and browse the response headers on a GET request. It will look like this:
x-ratelimit-remaining: 93.0
x-ratelimit-reset: 361
x-ratelimit-used: 7
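You can read the same headers from a script, too; a small sketch (unauthenticated, so the numbers reflect whichever bucket your IP/session falls into, and the headers may be missing on some responses):

import requests

resp = requests.get("https://old.reddit.com/",
                    headers={"User-Agent": "ratelimit-check/0.1 by u/yourname"})
for name in ("x-ratelimit-remaining", "x-ratelimit-reset", "x-ratelimit-used"):
    print(name, resp.headers.get(name))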
The rate limit is 100 on old Reddit, which is stupidly low. You can easily hit that in just 2-3 minutes, and then you've got to wait 7 minutes for a reset. It's a native Reddit service, so it shouldn't be relying on API calls at all, but even if it does, 1,000 is what Reddit says the limit should be. And yet old Reddit only gets 100.
I've tried using a new account. Clearing cache/cookies. Using a different browser. Using a VPN. A combination of all these. Nothing seems to change it. New reddit continues working fine, third-party apps on iOS that rely on the API also have zero issues, it's JUST old reddit. With or without RES. It drives me insane as old with RES is the only way I can browse reddit on desktop.
It's really challenging to find any info on the Internet.
I want to map a post's JSON to a Java class.
There are some fields I cannot find proper datatype for:
user_reports
all_awardings
awarders
treatment_tags
mod_reports
I can assume that all these fields are arrays of strings or objects, but I don't want to use Java's generic types like Object, JsonNode, or Map<String, Object>.
Does anybody know exactly what datatypes/structures are used in these fields?
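For what it's worth, in responses I've seen (an observation, not official documentation): user_reports and mod_reports are arrays of two-element arrays pairing a report reason with a count (user reports) or a moderator name (mod reports); awarders and treatment_tags are arrays of strings; and all_awardings is an array of award objects with many fields. Roughly:

"user_reports": [["This is spam", 2]],
"mod_reports": [["Breaks rule 1", "some_moderator"]],
"awarders": ["some_username"],
"treatment_tags": [],
"all_awardings": [{"id": "gid_1", "name": "Silver", "count": 1}]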
EDIT3: As a workaround I created a new app and put in the client id/secret into my web app. Working for now 🤞
EDIT2: Happening again as of 11/23/24 13:00 UTC
EDIT: Looks like this fixed itself as of 11/22/24 19:44 UTC
Must have been a reddit bug
I have an app that has been working for years and as of yesterday I started getting a 403 error when hitting https://oauth.reddit.com/api/v1/me. This is affecting every user of my app. Exported as cURL from chrome:
curl 'https://oauth.reddit.com/api/v1/me' \
-H 'accept: application/json, text/plain, */*' \
-H 'accept-language: en-US,en;q=0.9' \
-H 'authorization: Bearer myToken' \
-H 'cache-control: no-cache' \
-H 'origin: https://myApp.firebaseapp.com' \
-H 'pragma: no-cache' \
-H 'priority: u=1, i' \
-H 'referer: https://myApp.firebaseapp.com/' \
-H 'sec-ch-ua: "Chromium";v="130", "Google Chrome";v="130", "Not?A_Brand";v="99"' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'sec-ch-ua-platform: "macOS"' \
-H 'sec-fetch-dest: empty' \
-H 'sec-fetch-mode: cors' \
-H 'sec-fetch-site: cross-site' \
-H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36'
On the www.reddit.com site the flair just ends up saying :emojiname: instead of showing the actual emoji. It renders correctly on new.reddit.com.
Any clues, or hints on how to fix it?
I'm not sure if this is because of the type of subreddit, but my search API call works for the subreddit r/bisexual, yet it doesn't for r/BisexualMen.
Is this because the BisexualMen subreddit also contains NSFW posts? (It's not porn, by the way; it's discussion.)
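If the 18+ flag is the cause, search does have a switch for it: the search listing accepts an include_over_18 parameter. A quick sketch to test the theory (throwaway User-Agent; treat the parameter as something to verify rather than gospel):

import requests

params = {"q": "test", "restrict_sr": 1, "include_over_18": "on", "limit": 5}
resp = requests.get("https://www.reddit.com/r/BisexualMen/search.json",
                    headers={"User-Agent": "search-test/0.1 by u/yourname"},
                    params=params)
for child in resp.json()["data"]["children"]:
    print(child["data"]["title"])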
My house bot, active just in my sub, created a sticky, which it updates every now and then using
post.edit(post_text)
On executing that statement, the bot gets the reply:
[script_name:line no.:] DeprecationWarning: Reddit will
check for validation on all posts around May-June 2020.
It is recommended to check for validation by setting
reddit.validate_on_submit to True.
post.edit(post_text)
What does this even mean?
And where/when/at what point should I place reddit.validate_on_submit = True? On each new submission/edit? For anybody, or just the bot?
The post in question is 2 days "old". The first post in my sub was on 2020-07-22; do I even need to do anything, given the date range they mention?
---
Edit: after including a global reddit.validate_on_submit = True just after login, the warning disappeared. Was it always there and I just didn't notice? No idea. To me it came out of the blue.
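For anyone hitting the same warning: the attribute is set once on the Reddit instance and applies to every subsequent submit/edit made through that client, regardless of who authored the post. A minimal sketch (placeholder credentials and post id):

import praw

reddit = praw.Reddit(client_id="...", client_secret="...",
                     username="...", password="...",
                     user_agent="housebot/1.0 by u/yourname")
reddit.validate_on_submit = True  # set once, right after login

post = reddit.submission("abc123")  # hypothetical post id
post.edit("updated sticky text")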
Hey,
When I use the API with the format reddit.com/user/[username]/.json I don't seem to get every picture. I think it's because, when I open the page without .json, there are like 2 pics and the rest are comments. Is there a way to get only the posts, so I can load more content and not get bombed with comments?
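One thing to try (a sketch with a placeholder username; the usual User-Agent caveats apply): the /submitted listing returns only the user's posts, with no comments mixed in:

import requests

resp = requests.get(
    "https://www.reddit.com/user/USERNAME/submitted/.json?limit=100",
    headers={"User-Agent": "post-fetch/0.1 by u/yourname"},
)
for child in resp.json()["data"]["children"]:
    print(child["data"].get("url"))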
Hi, so I want to retrieve every single comment from a sub; however, it's only giving me, in my case, 970 comments, which is about 5 months of comments from the specified sub. Relevant code provided below.
# relevant prerequisites for working code...
subreddit = reddit.subreddit(subreddit_name)
comments = subreddit.comments(limit=None)  # None retrieves as many as possible
for comment in comments:
    # relevant processing and saving
I was trying to integrate the Reddit API, but after authentication I ran into an error, which is pretty unexpected. When I hit the /me endpoint, I don't get any error; however, as soon as I change it to /me/karma, I start getting a 401 Unauthorized error. Is there something that I am missing?
const GetUser = useCallback(async () => {
  if (access) {
    try {
      const response = await axios.get(`https://oauth.reddit.com/api/v1/me/karma/`, {
        headers: {
          'Authorization': access
        }
      })
      console.log(response.data)
    } catch (error) {
      console.error(error)
    }
  }
}, [access]) // include access in the deps so the callback sees the current token
The access variable is the access token for the current user. Any help will be appreciated. Thanks!
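One thing worth checking against the API docs: /api/v1/me is listed under the identity scope, while /api/v1/me/karma is listed under the mysubreddits scope, so a token requested with only identity would 401 on the karma endpoint but not on /me. The scope is chosen when building the authorize URL (placeholders below):

https://www.reddit.com/api/v1/authorize?client_id=CLIENT_ID&response_type=code&state=STATE&redirect_uri=REDIRECT_URI&duration=permanent&scope=identity%20mysubreddits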
I am working on an app to submit content to Reddit. Reddit returns this information for subreddits a user has joined:
{
"user_flair_background_color": null,
"submit_text_html": "<!-- SC_OFF --><div class=\"md\"><p>Please keep in mind our basic rules:<\/p>\n\n<p>Rule 1: Be Nice<\/p>\n\n<p>Rule 2: Film-related posts only<\/p>\n\n<p>Rule 3: No Self-Promotion or external links to websites that are not relevant to the specific film being discussed. Approved sites include: YouTube, IMDB, Wikipedia, etc.<\/p>\n<\/div><!-- SC_ON -->",
"restrict_posting": true,
"user_is_banned": false,
"free_form_reports": true,
"wiki_enabled": null,
"user_is_muted": false,
"user_can_flair_in_sr": null,
"display_name": "FIlm",
"header_img": null,
"title": "r\/film - The Official Reddit Film Community",
"allow_galleries": true,
"icon_size": null,
"primary_color": "#373c3f",
"active_user_count": null,
"icon_img": "",
"display_name_prefixed": "r\/FIlm",
"accounts_active": null,
"public_traffic": false,
"subscribers": 119311,
"user_flair_richtext": [],
"videostream_links_count": 0,
"name": "t5_2qh7m",
"quarantine": false,
"hide_ads": false,
"prediction_leaderboard_entry_type": 2,
"emojis_enabled": false,
"advertiser_category": "",
"public_description": "Welcome to r\/film, the official film community of Reddit. Film lovers and movie fans - talk about your favorite movies, upcoming ones, and the lates releases!",
"comment_score_hide_mins": 0,
"allow_predictions": false,
"user_has_favorited": false,
"user_flair_template_id": null,
"community_icon": "https:\/\/styles.redditmedia.com\/t5_2qh7m\/styles\/communityIcon_v4otrun2a70c1.jpg?width=256&s=d531e53627699aa6337e60575b34ba6f76f19c36",
"banner_background_image": "https:\/\/styles.redditmedia.com\/t5_2qh7m\/styles\/bannerBackgroundImage_8ltswhri970c1.jpg?width=4000&s=02b804762da0c6cf9d3efab0ef0a06ddd42a5adf",
"original_content_tag_enabled": false,
"community_reviewed": true,
"submit_text": "Please keep in mind our basic rules:\n\nRule 1: Be Nice\n\nRule 2: Film-related posts only\n\nRule 3: No Self-Promotion or external links to websites that are not relevant to the specific film being discussed. Approved sites include: YouTube, IMDB, Wikipedia, etc.",
"description_html": "<!-- SC_OFF --><div class=\"md\"><p>All things film related.<\/p>\n\n<p>Rule 1: Be Nice<\/p>\n\n<p>Rule 2: Film-related posts only<\/p>\n\n<p>Rule 3: No Self-Promotion or external links to websites that are not relevant to the specific film being discussed. Approved sites include: YouTube, IMDB, Wikipedia, etc.<\/p>\n<\/div><!-- SC_ON -->",
"spoilers_enabled": true,
"comment_contribution_settings": {
"allowed_media_types": null
},
"allow_talks": false,
"header_size": null,
"user_flair_position": "right",
"all_original_content": false,
"has_menu_widget": false,
"is_enrolled_in_new_modmail": null,
"key_color": "#222222",
"can_assign_user_flair": true,
"created": 1201285253,
"wls": 6,
"show_media_preview": true,
"submission_type": "any",
"user_is_subscriber": true,
"allowed_media_in_comments": [],
"allow_videogifs": true,
"should_archive_posts": false,
"user_flair_type": "text",
"allow_polls": true,
"collapse_deleted_comments": false,
"emojis_custom_size": null,
"public_description_html": "<!-- SC_OFF --><div class=\"md\"><p>Welcome to <a href=\"\/r\/film\">r\/film<\/a>, the official film community of Reddit. Film lovers and movie fans - talk about your favorite movies, upcoming ones, and the lates releases!<\/p>\n<\/div><!-- SC_ON -->",
"allow_videos": true,
"is_crosspostable_subreddit": null,
"notification_level": "low",
"should_show_media_in_comments_setting": true,
"can_assign_link_flair": true,
"accounts_active_is_fuzzed": false,
"allow_prediction_contributors": false,
"submit_text_label": "",
"link_flair_position": "right",
"user_sr_flair_enabled": null,
"user_flair_enabled_in_sr": false,
"allow_discovery": true,
"accept_followers": true,
"user_sr_theme_enabled": true,
"link_flair_enabled": true,
"disable_contributor_requests": false,
"subreddit_type": "public",
"suggested_comment_sort": null,
"banner_img": "",
"user_flair_text": null,
"banner_background_color": "#373c3f",
"show_media": false,
"id": "2qh7m",
"user_is_moderator": false,
"over18": false,
"header_title": "",
"description": "All things film related.\n\nRule 1: Be Nice\n\nRule 2: Film-related posts only\n\nRule 3: No Self-Promotion or external links to websites that are not relevant to the specific film being discussed. Approved sites include: YouTube, IMDB, Wikipedia, etc.",
"submit_link_label": "",
"user_flair_text_color": null,
"restrict_commenting": false,
"user_flair_css_class": null,
"allow_images": true,
"lang": "en",
"url": "\/r\/FIlm\/",
"created_utc": 1201285253,
"banner_size": null,
"mobile_banner_image": "",
"user_is_contributor": false,
"allow_predictions_tournament": false
}
I am formulating my code as such:
public static function postContentToReddit($accessToken, $subreddit, $title, $text, $flairId = null, $flairText = null)
{
    try {
        $client = new Client([
            'base_uri' => 'https://oauth.reddit.com',
            'headers' => [
                'Authorization' => 'Bearer ' . $accessToken,
                'User-Agent' => 'Glitch:v1.0 (by /u/bingewavecinema)',
            ],
        ]);

        $postData = [
            'kind' => 'text',
            'sr' => $subreddit,
            'title' => $title,
            'api_type' => 'json',
            'text' => $text
        ];

        if ($flairId) {
            $postData['flair_id'] = $flairId;
        }
        if ($flairText) {
            $postData['flair_text'] = $flairText;
        }

        Log::error(json_encode($postData));

        $response = $client->post('/api/submit', [
            'form_params' => $postData,
        ]);

        $responseBody = json_decode($response->getBody(), true);

        if (isset($responseBody['json']['errors']) && !empty($responseBody['json']['errors'])) {
            // Double quotes here, not backticks: backticks are shell execution in PHP.
            Log::error("Reddit text content post failed to {$subreddit}: " . json_encode($responseBody['json']['errors']));
            return false;
        }

        return $responseBody;
    } catch (Exception $e) {
        Log::error('Error uploading video to Reddit: ' . $e->getMessage(), ['exception' => $e]);
        return false;
    }
}
For the SR, I've tried:
None of them work. What is the correct SR value to submit to the API?
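For reference, a stripped-down version of the same call in Python (untested against this exact app, so treat it as a sketch): per the /api/submit docs, kind for a text post is "self" rather than "text", and sr is the bare subreddit name as in display_name above, with no r/ prefix:

import requests

headers = {
    "Authorization": "Bearer <token>",
    "User-Agent": "Glitch:v1.0 (by /u/bingewavecinema)",
}
data = {
    "api_type": "json",
    "sr": "FIlm",     # bare display_name, no r/ or /r/ prefix
    "kind": "self",   # "self" for text posts, "link" for links
    "title": "My title",
    "text": "Body text",
}
resp = requests.post("https://oauth.reddit.com/api/submit",
                     headers=headers, data=data)
print(resp.json())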
Hello team,
When I use this https://www.reddit.com/r/nba/.json API, I get the required JSON when I open it in Chrome.
But when I hit this API from Postman, I get a 403 error. I get this error even when I use fetch in Node.js.
From what I understand, I need authentication, but why am I getting the data without doing anything in Chrome?
const response = await fetch(`https://www.reddit.com/r/${SUBREDDIT_NAME}/.json`,{
headers:{
}
})
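Chrome sends a full browser User-Agent (and cookies), while Postman and a bare fetch send a generic or empty one, which Reddit's edge tends to block. A sketch of the same unauthenticated call with a descriptive User-Agent (no guarantees; unauthenticated access is heavily throttled):

import requests

resp = requests.get(
    "https://www.reddit.com/r/nba/.json",
    headers={"User-Agent": "my-nba-reader/0.1 by u/yourname"},  # descriptive UA
)
print(resp.status_code)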
Is anyone using VSCode for PRAW development?
IntelliSense does not seem to be fully functional and is missing a lot of PRAW contexts.
I have tried every suggestion I have been able to find online: switching to the Jedi language server in settings.json, using different VS Code plugins for Python. Nothing works.
Any help would be appreciated.
I’m working on a project where I need to programmatically give awards to submissions and comments using the Reddit API. I’m using PRAW 7.7.1, but I’ve run into some issues:
Outdated gild_ids: when using Submission.award() or Comment.award(), we need to specify the gild_id to indicate the type of award. However, it seems that PRAW's current documentation doesn't cover the latest award types available on Reddit. This makes it challenging to give newer awards.
My specific questions are:
Any insights, code examples, or pointers to relevant documentation would be greatly appreciated.
I'm creating a script that runs off of mentions. How can I see the comment above the one my bot has been mentioned in, earlier in the thread?
I'm newer to coding, so I could be going about this all wrong.
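In case it helps: in PRAW a mention arrives as a Comment, and Comment.parent() walks one step up the thread, returning the parent Comment (or the Submission when the mention is top-level). A minimal sketch with placeholder credentials:

import praw

reddit = praw.Reddit(client_id="...", client_secret="...",
                     username="...", password="...",
                     user_agent="mentionbot/0.1 by u/yourname")

for mention in reddit.inbox.mentions(limit=25):
    parent = mention.parent()  # the comment above, or the submission itself
    if isinstance(parent, praw.models.Comment):
        print(parent.body)
    else:
        print(parent.title)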
Using JavaScript and working with the Reddit API, I'm making a GET request to "https://oauth.reddit.com/r/${subreddit}/hot", which returns data for the given subreddit, including 20 or so recent posts. I can see everything I want except for the image galleries. I see single images via Object.data.children.childIndex.data.url and single videos via Object.data.children.childIndex.data.media.reddit_video.fallback_url.
But for image galleries, when I try loading the URL in Object.data.children.childIndex.media_metadata.imgID.s.u, it takes me to a Reddit page that only displays the alt="CDN media" text and a link to the post. I can't figure out what URL I'm supposed to source gallery media from and why it's not included in the response object. Please help; this shit pisses me off.
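One known wrinkle worth ruling out first: the URLs inside media_metadata come back HTML-escaped, so the &amp; in the query string has to be unescaped before the CDN will accept the link. A sketch in Python (assuming post holds one child's data object from the listing):

import html

# Gallery posts carry gallery_data (image order) and media_metadata (renditions).
if post.get("is_gallery"):
    for item in post["gallery_data"]["items"]:
        meta = post["media_metadata"][item["media_id"]]
        url = html.unescape(meta["s"]["u"])  # turn &amp; back into &
        print(url)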
Hi all,
I have built a new bot that I think provides a helpful suggestion to users in the form of a follow-up comment (replacing a certain type of link with an alternative link that can be opened by more users). However, when I create a new account for it, as soon as I 'unleash' the bot, the associated account gets immediately rate limited and suspended.
What's the right procedure for this? I'm using Python/PRAW, so isn't rate limiting etc. taken care of?
It's called SubTransfer and it's a very simple app to carry over your subscriptions (and followed users) from one account to another: https://subtransfer.ploomberapp.io
Currently this is a fairly laborious process (get your multireddit subscriptions and click Join a bunch of times), so I wanted to simplify it. It's very early days, but I'm seeking feedback and any feature requests.
Let me know what you think!
Hi peeps
So I'm trying to unsave a large number of my Reddit posts using the PRAW code below, but when I run it, print(i) results in 63. Yet when I go to the saved posts section on the Reddit website, I not only see more than 63 saved posts, I also see posts with a date/timestamp that should have been unsaved by the code (e.g. posts from 5 years ago, even though the UTC check in the if statement corresponds to August 2023).
import praw

def run_praw(client_id, client_secret, password, username):
    """
    Delete saved reddit posts for username.
    CLIENT_ID and CLIENT_SECRET come from creating a developer app on reddit.
    """
    user_agent = "/u/{} delete all saved entries".format(username)
    r = praw.Reddit(client_id=client_id, client_secret=client_secret,
                    password=password, username=username,
                    user_agent=user_agent)
    saved = r.user.me().saved(limit=None)
    i = 0
    for s in saved:
        i += 1
        try:
            print(s.title)
            if s.created_utc < 1690961568.0:
                s.unsave()
        except AttributeError as err:
            print(err)
    print(i)
I made a Python project that takes a YAML file describing a post and uses praw to post it, the idea being to have a command you can call from scripts which abstracts away the Python code.
While it's supposed to be unopinionated, I still want to provide an example script showing how to schedule a Reddit post for later. I'm thinking of using at to run a bash script, but I'm not sure what a user-friendly version would look like.
Here's the link to the README: https://github.com/jeanlucthumm/reddit-easy-post
What I've put together so far for myself is this:
#!/usr/bin/env nix-shell
#! nix-shell -i bash -p poetry
PROJECT_DIR=/home/me/Code/reddit-easy-post
LOG=/home/me/reddit_log.txt
date > "$LOG"
# Check if a file argument was provided
if [ $# -eq 0 ]; then
echo "Error: No YAML file specified" >> "$LOG"
exit 1
fi
YAML_FILE="$1"
# Check if the specified file exists
if [ ! -f "$YAML_FILE" ]; then
echo "Error: File '$YAML_FILE' not found" >> "$LOG"
exit 1
fi
cd "$PROJECT_DIR"
set -a && source .env && set +a
poetry run main --file "$YAML_FILE" 2>&1 | tee -a "$LOG"