/r/redditdev
A subreddit for discussion of Reddit's API and Reddit API clients.
Please confine discussion to Reddit's API instead of using this as a soapbox to talk to the admins. In particular, use /r/ideasfortheadmins for feature ideas and /r/bugs for bugs. If you have general reddit questions, try /r/help.
To see an explanation of recent user-facing changes to reddit (and the code behind them), check out /r/changelog.
To report a security issue with reddit, please send an email to whitehats@reddit.com .
This is an admin-sponsored subreddit.
Will Reddit get mad if an OAuth API app re-posts the same content to multiple subscribed subreddits? Would this get my app suspended?
We built a super simple example / test app and have uploaded it. However, we can't seem to get our custom post type to show up in our test subreddit.
Besides being on a whitelist, are we doing anything else wrong?
This is the main.tsx:
import { Devvit, JSONObject } from '@devvit/public-api';

Devvit.addCustomPostType({
  name: 'Bonsai',
  // height: 'regular',
  render: (context) => {
    const { useState } = context;
    const [myState, setMyState] = useState({});
    const handleMessage = (ev: JSONObject) => {
      console.log(ev);
      console.log('Hello Bonsai!');
    };
    return (
      <vstack height="100%" width="100%" gap="medium" alignment="center middle">
        <text>Hello Bonsai!</text>
      </vstack>
    );
  },
});
If I create a private subreddit, is it possible to handle the approved user list with the API? What endpoints can I use?
When I try api/compose and use my personal account to send messages to my friends, I always get this error. Has anyone encountered the same situation? What is the reason or how to solve it?
I am trying to run some code and keep running into a problem with "prawcore". I can see it in my pip list, and I have gotten the computer to confirm that it is installed, but when I run python main.py it tells me "ModuleNotFoundError: No module named 'prawcore'". What should I do?
What is the difference between these two? I want to create a Reddit app that a user can log into and perform actions on the API. However, I haven't decided whether I want a mobile version or a web application yet (or maybe both eventually). I want to create a backend service first and think about the GUI later. Is this possible? Which one would be more appropriate?
Hi everyone,
So a user of my product noticed they could not post in this sub: https://www.reddit.com/r/TechHelping/
Creating a new post throws a 403, and when looking at the website, this seems to be because there is a "request permission to post" gate.
I've never seen this before, so how does this translate into the API?
It is possible to fetch subreddit data from the API without authentication. You just need to send a GET request to the subreddit URL plus ".json" (https://www.reddit.com/r/redditdev.json) from anywhere you want.
I want to make an app which uses this API. It will display statistics for subreddits (number of users, number of comments, number of votes, etc.).
Am I allowed to build a web app which uses data acquired this way? Reddit's terms are not very clear on this.
Thank you in advance :)
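On the mechanics (the allowed-use question is one for Reddit's Data API Terms): the same trick works per endpoint, e.g. about.json exposes subscriber and active-user counts. A minimal sketch; the User-Agent string is a placeholder, and the unauthenticated rate limit is far lower than the OAuth one:

```python
import requests

def about_url(name: str) -> str:
    # Appending .json to most reddit URLs returns the underlying data
    return f"https://www.reddit.com/r/{name}/about.json"

def fetch_subreddit_stats(name: str) -> dict:
    # A descriptive User-Agent is still expected even without OAuth
    resp = requests.get(about_url(name),
                        headers={"User-Agent": "subreddit-stats-demo/0.1"})
    resp.raise_for_status()
    data = resp.json()["data"]
    return {"subscribers": data.get("subscribers"),
            "active_users": data.get("active_user_count")}
```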
I'm building a cross-posting app. When posting to Reddit, some subreddits require flairs. I need to fetch available flairs when a user selects a subreddit and then send the flair in the post.
const response = await fetch(
  `https://oauth.reddit.com/r/${subreddit}/api/link_flair_v2`,
  {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "User-Agent": "X/1.0.0",
    },
  }
);
I'm getting 403 Forbidden. According to the docs, the endpoints are:
/api/link_flair
/r/subreddit/api/link_flair_v2
How can I properly fetch available flairs for a given subreddit? Has anyone implemented this successfully?
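For what it's worth, a 403 from link_flair_v2 commonly means the subreddit has no user-selectable link flairs, or the token lacks the flair scope, so it is worth handling rather than treating as fatal. A hedged sketch with requests (the User-Agent and error handling are illustrative, not the one true implementation):

```python
import requests

def flair_url(subreddit: str) -> str:
    # raw_json=1 stops Reddit HTML-escaping characters like & in flair text
    return f"https://oauth.reddit.com/r/{subreddit}/api/link_flair_v2?raw_json=1"

def get_link_flairs(subreddit: str, token: str) -> list:
    resp = requests.get(flair_url(subreddit),
                        headers={"Authorization": f"Bearer {token}",
                                 "User-Agent": "crossposter-demo/0.1"})
    if resp.status_code == 403:
        # Common causes: flairs disabled / not user-selectable in that
        # subreddit, or the token is missing the "flair" scope
        return []
    resp.raise_for_status()
    return resp.json()  # list of templates: {"id": ..., "text": ..., ...}
```

When submitting, pass the chosen template's id as flair_id in the /api/submit form.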
It seems that the maximum number of submissions I can fetch is 1000:
limit – The number of content entries to fetch. If limit is None, then fetch as many entries as possible. Most of Reddit's listings contain a maximum of 1000 items, and are returned 100 at a time. This class will automatically issue all necessary requests (default: 100).
Can anyone shed some more light on this limit? What happens with None? If I'm using .new(limit=None), how many submissions am I actually getting at most? Also, how many API requests am I making? Just whatever number I type in divided by 100?
Use case: I want the URLs of as many submissions as possible. These URLs are then passed through random.choice(URLs)
to get a singular random submission link from the subreddit.
Actual code. Get submission titles (image submissions):
import re
import praw

def get_image_links(reddit: praw.Reddit) -> list:
    sub = reddit.subreddit('example')
    image_candidates = []
    for image_submission in sub.new(limit=None):
        if re.search('(i.redd.it|i.imgur.com)', image_submission.url):
            image_candidates.append(image_submission.url)
    return image_candidates
These image links are then saved to a variable which is then later passed onto the function that generates the bot's actual functionality (a comment reply):
def generate_reply_text(image_links: list) -> str:
...
bot_reply_text += f'''[{link_text}]({random.choice(image_links)})'''
...
I noticed over the last couple hours some extreme latency when my bot is downloading images. It's also noticeable when browsing Reddit on my phone (while on my wifi). It's the 2nd time in the last 2 weeks I've seen something similar happen.
Status page is green and it's the only domain impacted so I suspect it's some type of throttling being tested.
No changes on my end. The bot is doing the same thing it's done for years.
I'm not sure but it seems that all the communities I fetch through the /subreddits/ API come with the "over18" property set to false. Has this property been discontinued?
How many API requests does it take to cause rate-limiting of an authenticated snoowrap client? Is that number different between reads and writes?
I would guess it changes as Reddit tightens its reins, but it would be helpful if anyone has the current max values in order to effectively debounce/delay requests.
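Rather than hard-coding a maximum, note that every OAuth response from Reddit reports quota state in the x-ratelimit-used / x-ratelimit-remaining / x-ratelimit-reset headers, which any client (snoowrap included) can read. A small parser sketch; header names are as Reddit currently sends them, and the values are treated as floats since Reddit returns fractional counts:

```python
def ratelimit_info(headers: dict) -> dict:
    # Header lookup is case-insensitive in most HTTP clients; normalize here
    h = {k.lower(): v for k, v in headers.items()}
    return {
        "used": float(h.get("x-ratelimit-used", 0)),
        "remaining": float(h.get("x-ratelimit-remaining", 0)),
        "reset_seconds": float(h.get("x-ratelimit-reset", 0)),
    }
```

Delaying until reset_seconds has elapsed whenever remaining approaches zero is a sturdier debounce than any fixed request budget.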
Hi folks,
I'm new to pulling data from APIs and would like some feedback on where I'm going wrong. I've set up a new subreddit, and my goal is to pull data about it into a Google Sheet to help me manage the sub.
So far:
I created an app via https://old.reddit.com/prefs/apps/
I sent a message to Reddit asking for permission to use the API and was granted it a few days back
I've set up a Google Apps Script with the help of ChatGPT which pulls the data of posts in the sub
However, I keep getting an error message related to the authentication process: Error: Exception: Request failed for https://oauth.reddit.com returned code 403. Truncated server response:......
Can anyone give me some advice on solving the issue, particularly the OAuth2 issue? Or if there's something else that could be wrong with the setup.
I realise this may be an issue which requires more info to problem-solve, and I'd be happy to share more!
Thanks in advance guys
Thanks for your attention. I wanted a bot that could help us get more subscribers while still following the Reddit guidelines.
I would also like to pay for one if it doesn't already exist.
I have tried and tried to get this to work, but it is just a nightmare. I'm wondering if anyone has already done this and has a solution that I can use.
async function galleryHandling(post) {
  const imageUrls = [];
  for (const item of post.gallery_data.items) {
    const mediaId = item.media_id;
    const extensions = ['jpg', 'jpeg', 'png', 'gif', 'tif', 'tiff', 'bmp', 'webp', 'svg', 'ico'];
    for (const ext of extensions) {
      // i.redd.it, not i.red.it -- the typo makes every probe fail
      const url = `https://i.redd.it/${mediaId}.${ext}`;
      const statusCode = await checkUrl(url);
      console.log(statusCode);
      if (statusCode === 200) {
        console.log(`GALLERY: ${ext.toUpperCase()} FILE`);
        imageUrls.push(url);
        break;
      }
    }
  }
  return imageUrls;
}
async function singleHandling(post) {
  if (post.url && (post.url.endsWith('.jpg') || post.url.endsWith('.png') || post.url.endsWith('.gif') || post.url.endsWith('.jpeg'))) {
    return post.url;
  }
  console.log(`SINGLE HANDLING NOT ENDING IN JPG, PNG, GIF, JPEG | TITLE: ${post.title} | URL: ${post.url}`);
}
async function runBot(reddit) {
  for (const subredditName of Subreddits) {
    const subreddit = await reddit.getSubreddit(subredditName);
    const posts = await subreddit.getNew({ limit: 100 });
    for (const post of posts) {
      let imageUrls = []; // declare with let; an undeclared assignment leaks to global scope
      if (post.is_gallery) {
        imageUrls = await galleryHandling(post);
      } else {
        imageUrls.push(await singleHandling(post));
      }
      console.log("------------- NEW LINE -------------");
      imageUrls.forEach(url => {
        console.log(`URL: ${url}`);
      });
    }
    // await, otherwise the pause never actually happens
    await new Promise(resolve => setTimeout(resolve, 1000));
  }
}
Sometimes my singleHandling() function will fail, and the result is https://www.reddit.com/gallery/-----.
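Rather than probing i.redd.it with every extension, gallery posts already ship a media_metadata map keyed by media_id, where each image entry carries its source URL (HTML-escaped in the JSON). A Python sketch of the same idea, working from a post's raw JSON data; field names are as they appear in current listings:

```python
def gallery_urls(post_data: dict) -> list:
    # Gallery posts pair gallery_data (ordering) with media_metadata (files);
    # image entries expose the source URL under "s"/"u", gifs under "s"/"gif"
    urls = []
    meta = post_data.get("media_metadata") or {}
    for item in post_data.get("gallery_data", {}).get("items", []):
        source = meta.get(item["media_id"], {}).get("s", {})
        url = source.get("u") or source.get("gif")
        if url:
            # URLs arrive HTML-escaped in the JSON payload
            urls.append(url.replace("&amp;", "&"))
    return urls
```

The same fields are available on PRAW/snoowrap submission objects, so no extra HTTP probing is needed at all.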
This account, 65436563465,
shows as normal/active under old.reddit, suspended under sh.reddit, and just a blank page under new.reddit.
I don't know how the app displays it.
Using the API/PRAW, it looks normal/active.
Is there an API/PRAW method to determine the status of accounts like this?
Why are there so many communities returned by the API with names of the form "r/a:t5_cmjdz", i.e. "r/a:<subreddit_id>"?
Really don’t want to maintain a python environment in my otherwise purely typescript app. Anyone out there building the PRAW equivalent for nodejs? Jraw and everything else all seem dated well-beyond the recent Reddit API crackdown.
It posts a random pic from 20 pics to choose from with a random title, adds flair, and posts every 2 hours. It worked fine for the first post, but when I go into my account the next day, I see that all the posts are greyed out, like when the upvote and downvote buttons are greyed out, meaning the posts are somehow getting removed.
Why is this?
Is there any good way to export comments from a single post on Reddit? I tried adding ".json" to the end of the link in the address bar, but it is limited to around 20 comments I think, so it's less usable. It would be good if there is a trick, or even something to do in the Ubuntu CLI, etc.
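The ~20-comment cap is just the endpoint's default; a limit parameter raises it, and nested replies can be walked recursively. A sketch with requests (the User-Agent is a placeholder; for the stubs the endpoint leaves as "more" nodes, PRAW's replace_more(limit=None) is the usual way to resolve everything):

```python
import requests

def comments_json_url(post_url: str, limit: int = 500) -> str:
    # Appending .json plus a limit parameter raises the small default cap
    return post_url.rstrip("/") + f".json?limit={limit}&raw_json=1"

def export_comments(post_url: str) -> list:
    resp = requests.get(comments_json_url(post_url),
                        headers={"User-Agent": "comment-export-demo/0.1"})
    resp.raise_for_status()
    listing = resp.json()[1]  # element 0 is the post, element 1 the comment tree
    out = []
    def walk(children):
        for child in children:
            if child["kind"] != "t1":
                continue  # "more" stubs need extra requests to expand
            out.append(child["data"]["body"])
            replies = child["data"].get("replies")
            if replies:
                walk(replies["data"]["children"])
    walk(listing["data"]["children"])
    return out
```

From the CLI, the same URL piped through curl and jq gives a quick one-off export.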
Documentation says that a user-agent header must look like this
```
<platform>:<app ID>:<version string> (by /u/<reddit username>)
```
But there is zero information about the platform, version string, or Reddit username.
I spent a whole day just logging in and fetching `/api/me`. The documentation is the worst I've ever seen. Am I stupid, or is the doc really "not good"?
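For what it's worth, those pieces are mostly free-form: platform is just a word like script or android, the app ID can be your client ID or a package name of your choosing, and the version string is whatever you call your build. What Reddit cares about is that the header is unique and descriptive. A sketch (every value here is a placeholder):

```python
import requests

# Placeholder values following <platform>:<app ID>:<version> (by /u/<username>)
USER_AGENT = "script:com.example.myapp:v1.2.3 (by /u/yourusername)"

def me(token: str) -> dict:
    # /api/v1/me returns the account behind the OAuth token
    resp = requests.get("https://oauth.reddit.com/api/v1/me",
                        headers={"Authorization": f"bearer {token}",
                                 "User-Agent": USER_AGENT})
    resp.raise_for_status()
    return resp.json()
```

Generic or spoofed browser User-Agents are what typically trigger 429s, so almost any honest string in this shape works.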
Hi there,
I have downloaded about 3 Windows 11 Reddit clients (Bacconit, Reddert, etc.) this evening. They all have the same overall look about them, and the same thing happens when I try to log in to my account on these clients (I'm using the reddit website currently). When I click Login, a blank white window appears that says "Connecting to a service" on the top display bar. A place for me to type my username and password never does appear, so I end up closing the app and trying another one. This same thing has happened to me now 3 times. I always make sure I'm logged out of my reddit account on the web before I try logging in to my account on a Windows 11 client. Any ideas what's happening?
Thanks in advance,
th3lung
Dear Reddit Development Team
Hi! I have chosen Reddit as a project for my research course. My goal is to gather general information about the website's architecture and the technology stack being used.
Could you please assist me in finding resources or relevant threads on this topic?
So far, I have found this thread https://www.reddit.com/r/redditdev/comments/11vd16y/reddit_system_designarchitecture/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button
Are the answers in it still relevant?
I also received a response from ChatGPT, but I'm not sure about the accuracy of the information.
On 13 Oct 2024, ChatGPT told me the following about Reddit's architecture:
Backend Architecture
Databases:
PostgreSQL for relational data (users, posts).
Cassandra for distributed, high-frequency data (e.g., votes).
Redis for caching and session management with HyperLogLog for tracking views.
Frameworks & APIs:
Microservices in Python (Flask) and Go.
RESTful API with increasing use of GraphQL for frontend queries.
Cloud Hosting & Infrastructure
Hosted on AWS:
EC2 for compute, S3 for storage, RDS for databases.
Kubernetes for container orchestration.
Caching, Load Balancing & Delivery
Redis and Memcached with Mcrouter for caching.
HAProxy for load balancing.
Spinnaker and Jenkins for CI/CD automation.
Search, Analytics & Monitoring
Lucidworks Fusion (built on Solr) for search.
Kafka and Hive for analytics, processed via EMR.
Prometheus, Grafana, and the ELK stack for monitoring and logging.
Frontend Architecture
Built with React and TypeScript.
Redux handles state management across web and mobile interfaces.
I am deeply interested in learning more about the technical infrastructure that powers Reddit. If it's not under NDA, I would greatly appreciate it if you could provide some insights into the current systems and services Reddit utilizes.
This is probably a pretty weird situation but I want to build a set of AI bots that will have a conversation and branching conversations with each other. Ideally, I'd want to simulate how at least 100 bots take a base topic and branch out from there. Does this break reddit rules?
I am trying to make a Reddit clone on Android, and I can't understand why the login call is not working when I try to get the access token. I get a 401 error. I am trying in Postman as well, but I can't get the access token there either.
For the header I use Authorization with dFo5bDROTU51dUNCZ1dzTEhvcjJBUTo= (this is base64-encoded via https://www.base64encode.org/; the original code is tZ9l4NMNuuCBgWsLHor2AQ).
And the body has grant_type set to authorization_code, code set to PPs3xw8_di4QhNUlSbYpGa-3WSTHSA (a code that I got from my application), and the redirect_uri is retrofitreddit://redirect. Can someone help me?
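A 401 on /api/v1/access_token usually comes down to one of three things: the Basic credential is malformed (for an installed app it is base64 of "client_id:" with an empty password, trailing colon included), the authorization code was already used (they are single-use and short-lived), or the redirect_uri doesn't byte-for-byte match the registered one. A sketch of the exchange with requests; the User-Agent string is a placeholder:

```python
import base64
import requests

def basic_auth_header(client_id: str) -> str:
    # Installed apps have no secret: the credential is "client_id:" with an
    # empty password -- note the trailing colon before base64-encoding
    return "Basic " + base64.b64encode(f"{client_id}:".encode()).decode()

def exchange_code(client_id: str, code: str, redirect_uri: str) -> str:
    resp = requests.post(
        "https://www.reddit.com/api/v1/access_token",
        headers={"Authorization": basic_auth_header(client_id),
                 "User-Agent": "android:com.example.redditclone:v0.1 (by /u/yourname)"},
        data={"grant_type": "authorization_code",
              "code": code,                   # single-use; request a fresh one per attempt
              "redirect_uri": redirect_uri},  # must exactly match the registered URI
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

Also make sure the body is sent form-encoded, not JSON, or the endpoint rejects it.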
I am new to PRAW; in the documentation there is no specific mention of image or video (I have read the first few pages).
I added a fix to prevent my bot from spamming "good human" replies to the same user on a single post, but my commands other than "good bot" broke mysteriously (I do not know why). The loop only runs when a user says "good bot", so I do not think it is the loop, and it should not even be able to run since the elif for "good bot" is not even activated by then. Does anyone know where I went wrong here?
Here is my commands function:
def commands():
    try:
        for item in reddit.inbox.stream(skip_existing=True):
            # Check if the message is a mention and the author is authorized
            if "u/i-bot9000" in item.body and item.author != "i-bot9000":
                if "!count" in item.body:
                    threading.Thread(target=count_letters, args=(item,)).start()
                elif "!help" in item.body:
                    reply = f"""
u/{item.author}, here is the current list of commands:

1. **!count \<term\> \<letter\>**
   - *Description:* Counts the occurrences of the specified letter in the provided term.
2. **!randomletter**
   - *Description:* Get a surprise! This command returns a random letter from the alphabet.
3. **!ping**
   - *Description:* Pings the bot (replies with "pong").
4. **!help**
   - *Description:* Feeling lost? Use this command to get this helpful message.

*Updates:* No updates to commands yet {command_mark}
"""
                    item.reply(reply)
                    print(f"{item.author} executed a command \n ------------ \n Command: {item.body} \n \n Replied: {reply} \n ------------", flush=True)
                elif "!randomletter" in item.body:
                    letters = list("abcdefghijklmnopqrstuvwxyz".upper())
                    reply = f"u/{item.author} You got the letter {random.choice(letters)} {command_mark}"
                    item.reply(reply)
                    print(f"{item.author} executed a command \n ------------ \n Command: {item.body} \n \n Replied: {reply} \n ------------", flush=True)
                elif "!ping" in item.body:
                    reply = f"u/{item.author} Pong! {command_mark}"
                    item.reply(reply)
                    print(f"{item.author} executed a command \n ------------ \n Command: {item.body} \n \n Replied: {reply} \n ------------", flush=True)
            elif item.body.lower() == "good bot" or item.body.lower() == "hood bot":
                # New anti-spam feature
                confirm_reply = True
                item.submission.comments.replace_more(limit=None)
                for comment in item.submission.comments.list():
                    # Parentheses matter here: "and" binds tighter than "or",
                    # so without them the author check was being bypassed
                    if comment.author == "i-bot9000" and ("good human" in comment.body.lower() or "hood bot" in comment.body.lower()):
                        if comment.parent().author == item.author:
                            confirm_reply = False
                            break
                if confirm_reply:
                    reply = f"Good Human! {command_mark}"
                    item.reply(reply)
                    print(f"{item.author} said 'good bot' \n ------------ \n Comment: {item.body} \n \n Replied: {reply} \n ------------")
    except Exception as e:
        print(e, flush=True)
        threading.Thread(target=commands).start()
I used Old Reddit on desktop and I used Reddit Enhancement Suite (RES) with endless scrolling. I was able to keep loading pages of 25 posts at a time from the Hot section for a while but I hit a limit where it stopped loading new pages. I think I loaded around 30 pages IIRC before it hit its limit which equates to 750 posts (30 pages x 25 posts/page).
Would my bot experience the same limit if I needed to run code at the post level? For example, if I needed to lock posts that are x-number of days old and have a key word in the title, could I do that to the top 2,000 posts in Hot, or top 3,000 posts, or top 10,000 posts? Or is there a limit along the lines of what I saw when I was manually loading page after page?
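Yes: listings served through the API cap out around 1000 items regardless of client, so a bot paging Hot hits roughly the same wall as endless scrolling. Within that window, the lock-by-age-and-keyword idea is straightforward. A sketch assuming an authenticated PRAW Reddit instance with the relevant mod permission; the thresholds and names are placeholders:

```python
import time

def should_lock(title: str, created_utc: float, keyword: str,
                max_age_days: float, now: float = None) -> bool:
    # Pure filter: post is old enough AND the keyword appears in the title
    now = time.time() if now is None else now
    return (now - created_utc) / 86400 >= max_age_days and keyword.lower() in title.lower()

def lock_old_posts(reddit, sub: str, keyword: str, max_age_days: float = 30) -> None:
    # "reddit" is an authenticated praw.Reddit instance (assumed). hot() pages
    # 100 items per request and, like all listings, stops near the 1000 mark.
    for post in reddit.subreddit(sub).hot(limit=None):
        if should_lock(post.title, post.created_utc, keyword, max_age_days):
            post.mod.lock()  # requires the "posts" moderator permission
```

Beyond the ~1000-item horizon, posts are only reachable directly by ID, not by paging the listing.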
I need to wait a certain amount of time after hitting the PRAW APIException ratelimit. I can see the time to wait in the error it throws, but I don't know if there is a clever way to extract that into my code, so that I can wait the appropriate time instead of an arbitrary time chosen by me.
How do I go about this?
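PRAW's RedditAPIException carries items whose human-readable message ("... try again in 9 minutes.") contains the wait, and it can be regexed out; newer PRAW versions can also sleep for you via the ratelimit_seconds configuration option. A sketch of the manual route, assuming the current message wording:

```python
import re

def ratelimit_wait_seconds(message: str, default: int = 60) -> int:
    # Messages look like "You are doing that too much. Try again in 9 minutes."
    match = re.search(r"(\d+)\s*(minute|second)", message, re.IGNORECASE)
    if not match:
        return default  # fall back if the wording ever changes
    value, unit = int(match.group(1)), match.group(2).lower()
    return value * 60 if unit == "minute" else value
```

In practice: catch praw.exceptions.RedditAPIException, and for each item with error_type == "RATELIMIT", time.sleep(ratelimit_wait_seconds(item.message)) plus a small buffer, then retry.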