/r/redditdev


A subreddit for discussion of Reddit's API and Reddit API clients.

Please confine discussion to Reddit's API instead of using this as a soapbox to talk to the admins. In particular, use /r/ideasfortheadmins for feature ideas and /r/bugs for bugs. If you have general reddit questions, try /r/help.

To see an explanation of recent user-facing changes to reddit (and the code behind them), check out /r/changelog.


To report a security issue with reddit, please send an email to security@reddit.com.

This is an admin-sponsored subreddit.

/r/redditdev

74,676 Subscribers

1

Why does my bot keep losing mod privileges?

I make bots that ban users and remove posts in a sub, and originally I had the bot make a post so I could see what it had done. Eventually the account my bot was using could only remove posts; if it tried to ban someone it wouldn't work. It would look like it did, but when you checked, the user never got banned. I thought it was because of all the post making, so I made a new account and had the bot only message my account. After a few days, same issue: my bot can't ban anyone, just remove posts. Has anyone run into this issue before?

4 Comments
2024/03/24
05:24 UTC

0

I'm receiving invalid_grant when trying to get an OAuth2 token

Hi, so just following the tutorial here: https://github.com/reddit-archive/reddit/wiki/OAuth2-Quick-Start-Example

This is the code:

import requests
import requests.auth

def reddit():
    # basic auth with the script app's client id and secret
    client_auth = requests.auth.HTTPBasicAuth('clientid', 'secret')
    # password grant: exchange the account's username/password for a token
    post_data = {"grant_type": "password", "username": "invasionofsmallcubes", "password": "mypassword"}
    headers = {"User-Agent": "metroidvania-tracker/0.1 by invasionofsmallcubes"}
    response = requests.post("https://www.reddit.com/api/v1/access_token",
                             auth=client_auth, data=post_data, headers=headers)
    print(f"Result: {response.json()}")

My app type is script. I already checked other posts, so I tried changing the password to keep it simple, but I'm still having the same issue. If I change from 'password' to 'client_credentials', it works.

6 Comments
2024/03/23
05:23 UTC

0

How long does it take to hear back regarding request for access to Reddit API?

I'm a developer who sent a request here asking if I can register to use the free tier of the Reddit API for crawling and scraping. I submitted my request three days ago but haven't received a reply yet. Does anyone know how long, on average, it takes to hear back? Is it usually days, weeks, or even months? Thanks.

1 Comment
2024/03/22
20:51 UTC

5

My bots keep getting banned

Hey everyone, like the title says.

I have 3 bots ready for deployment; they only react to bot summons.

One of them has been appealed, but for the other two I've been waiting for two weeks.

Any tips on what I can do? I don't want to create new accounts and risk being flagged for ban evasion.

I'm using asyncpraw, so the rate limit shouldn't be the issue, and I'm also setting the header correctly.
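
For reference, this is roughly the setup I mean (a minimal sketch; every credential and name below is a placeholder):

```python
import asyncpraw

# minimal sketch; all values below are placeholders
reddit = asyncpraw.Reddit(
    client_id="CLIENT_ID",
    client_secret="CLIENT_SECRET",
    username="BOT_USERNAME",
    password="BOT_PASSWORD",
    # descriptive, unique user agent in the <platform>:<app id>:<version> (by /u/<user>) form
    user_agent="script:my-summon-bot:v1.0 (by /u/my_username)",
)
```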

Thanks in advance!

9 Comments
2024/03/22
18:23 UTC

2

403 Forbidden Error when trying to snooze reports

I'm trying to use the following code to snooze reports from a specific comment:

url = "https://oauth.reddit.com/api/snooze_reports"
headers = {
    'user-agent': 'my-user-agent',
    'authorization': f"bearer {access_token}",
}
data = {
    'id': content_id,
    'reason': Matched_Reason,
}
response = requests.post(url, headers=headers, json=data)
response_json = response.json()
print(response_json)

However, it keeps returning the following error:

{'message': 'Forbidden', 'error': 403}    

How should I go about fixing this?

2 Comments
2024/03/22
16:59 UTC

1

Snooze Reports with PRAW?

Reddit has a feature called "snoozyports" which allows you to block reports from a specific reporter for 7 days. This feature is also listed in Reddit's API documentation. Is it possible to access this feature using PRAW?
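
I'm not aware of a dedicated PRAW method for this, but as a hedged sketch, PRAW's low-level request helper on the Reddit instance can call the raw endpoint directly (the endpoint path and data keys here are taken from the snooze_reports post above; the id value is a placeholder):

```python
# hedged sketch: hit the endpoint through PRAW's generic request helper;
# `reddit` is an authenticated praw.Reddit instance that moderates the subreddit
reddit.post(
    "/api/snooze_reports",
    data={"id": "t1_abc123", "reason": "Matched report reason"},  # placeholder values
)
```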

7 Comments
2024/03/22
14:55 UTC

0

Which wrapper?

Hi all,

I am a beginner at using APIs generally, and I'm trying to do a study for a poster as part of a degree. I'd like to collect all the usernames of people who have posted to a particular subreddit over the past year, and then collect the posts those users have made on their own profiles. Will I be able to do this with PRAW, or does the limit prohibit a collection of that size? How do I iterate and make sure I collect everything within a time frame?

Thanks!
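
A minimal PRAW sketch of one way to start, assuming the listing cap still applies (so this only sees roughly the newest 1,000 posts); credentials and the subreddit name are placeholders:

```python
import time
import praw

reddit = praw.Reddit(client_id="CLIENT_ID", client_secret="CLIENT_SECRET",
                     user_agent="poster-study/0.1 by u/your_username")  # placeholders

one_year_ago = time.time() - 365 * 24 * 3600
authors = set()

# subreddit listings are capped (roughly the newest ~1,000 posts), so this is a partial sample
for submission in reddit.subreddit("SUBREDDIT_NAME").new(limit=None):
    if submission.created_utc < one_year_ago:
        break  # .new() is newest-first, so older posts can be skipped
    if submission.author is not None:
        authors.add(submission.author.name)

# then fetch each user's own submissions (each user listing is capped as well)
for name in authors:
    for post in reddit.redditor(name).submissions.new(limit=None):
        print(name, post.title)
```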

0 Comments
2024/03/21
23:59 UTC

1

429 error (with code this time) using PRAW?

UPDATE: Resolved. Looks like reddit has done something with rate limiting and it's working...so far! Thank you so much for the help.

This script worked within the last two weeks, but when doing data retrieval today it was returning a 429 error. I'm running this in a Jupyter notebook inside a VM, and PRAW and Jupyter are up to date. It prints the username successfully, so it's logged in, and one run retrieved a single image.

# imports omitted

reddit = praw.Reddit(client_id='',
                     client_secret='',
                     username='wgsebaldness',
                     password='',
                     user_agent='')
print(reddit.user.me())

# make lists
post_id = []
post_title = []
when_posted = []
post_score = []
post_ups = []
post_downs = []
post_permalink = []
post_url = []
poster_acct = []
post_name = []

# more columns for method design omitted

subreddit_name = ""
search_term = ""

try:
    subreddit = reddit.subreddit(subreddit_name)
    for submission in subreddit.search(search_term, sort='new', syntax='lucene', time_filter='all', limit=1000):
        if submission.url.endswith(('jpg', 'jpeg', 'png', 'gif', 'webp')):
            file_extension = submission.url.split(".")[-1]
            image_name = "{}.{}".format(submission.id, file_extension)
            save_path = "g:/vmfolder/scrapefolder{}".format(image_name)
            urllib.request.urlretrieve(submission.url, save_path)
            post_id.append(submission.id)
            post_title.append(submission.title)
            post_name.append(submission.name)
            when_posted.append(submission.created_utc)
            post_score.append(submission.score)
            post_ups.append(submission.ups)
            post_downs.append(submission.downs)
            post_permalink.append(submission.permalink)
            post_url.append(submission.url)
            poster_acct.append(submission.author)
except Exception as e:
    print("An error occurred:", e)
8 Comments
2024/03/21
22:18 UTC

3

Huge negative conversions values in Ads reporting API

Hi there,

Requesting data from ads reporting API:
GET https://ads-api.reddit.com/api/v2.0/accounts/{{account_id}}/reports?starts_at=2024-03-14T04%3A00%3A00Z&ends_at=2024-03-17T04%3A00%3A00Z&group_by=date&time_zone_id={{time_zone_id}}

I got huge negative conversions values:

"conversion_signup_total_value": -9223372036854280192,

"conversion_add_to_cart_total_value": -9223372036853784576,

"conversion_purchase_total_value": -9223372036852635648,

Is this a bug in the API? Please advise!

Thanks & regards,

Evgeniy

3 Comments
2024/03/20
05:29 UTC

1

Is post valid from url

Hi there,

What's the best way to identify whether a post is real or not from a URL (link)? For instance:

r = reddit.submission(url='https://reddit.com/r/madeupcmlafkj')

if something in r.__dict__.keys():
    ...

I'm hoping to do this without fetching the post.

2 Comments
2024/03/19
21:55 UTC

2

subreddit.stream.comments stops receiving comments without any errors

After running my bot for about 2 hours, it stops receiving new comments. The Python program is still running, but no new comments show up on the console.

Here is the code: https://github.com/whiteh4cker-tr/reddit-ai-comment-moderation/blob/main/reddit_ai_moderation.py
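
For what it's worth, a common pattern when a stream goes quiet is to wrap it in a restart loop; a minimal sketch, assuming a PRAW-style comment stream (the handler and subreddit name are placeholders):

```python
import time
import praw
import prawcore

reddit = praw.Reddit()  # placeholder: credentials via praw.ini or passed explicitly
subreddit = reddit.subreddit("SUBREDDIT_NAME")  # placeholder

while True:
    try:
        # skip_existing=True avoids reprocessing old comments after each restart
        for comment in subreddit.stream.comments(skip_existing=True):
            process(comment)  # hypothetical handler
    except prawcore.exceptions.PrawcoreException as exc:
        print(f"Stream error, restarting in 60s: {exc}")
        time.sleep(60)
```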

13 Comments
2024/03/19
16:45 UTC

4

Use PRAW to get queues from r/Mod?

I’m attempting to use the following line of code in PRAW:

for item in reddit.subreddit("mod").mod.reports(limit=1):
    print(item)

It keeps returning an error message. However, if I replace “mod” with the name of another subreddit, it works perfectly fine. How can I use PRAW to get combined queues from all of the subreddits I moderate?

2 Comments
2024/03/18
20:41 UTC

1

"Unsupported grant type" error?

import json
import requests

def f():
    url = "https://www.reddit.com/api/v1/access_token"
    headers = {"Authorization": "Basic ********="}
    body = {
        "grant_type": "password",
        "username": "********",
        "password": "********",
        "duration": "permanent",
    }
    r = requests.post(url, data=json.dumps(body), headers=headers)
    print(r.content)

This code keeps returning an 'unsupported_grant_type' error. What should I change?

I made sure to encode my Authorization header in base64. I would use PRAW for this, but it doesn't seem to be able to do what I'm trying to accomplish.
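
For comparison, a minimal sketch of the same request with the body sent as form fields (passing a dict to requests' data= sends it application/x-www-form-urlencoded rather than as a JSON string, which is what the token endpoint expects); all credentials are placeholders:

```python
import requests
import requests.auth

# placeholders for the app's client id/secret and the account credentials
client_auth = requests.auth.HTTPBasicAuth("CLIENT_ID", "CLIENT_SECRET")
body = {
    "grant_type": "password",
    "username": "USERNAME",
    "password": "PASSWORD",
}
headers = {"User-Agent": "my-script/0.1 by u/USERNAME"}

# data= with a dict is form-encoded automatically
r = requests.post("https://www.reddit.com/api/v1/access_token",
                  auth=client_auth, data=body, headers=headers)
print(r.json())
```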

5 Comments
2024/03/18
17:06 UTC

2

Is there access to reddit chats using the API?

Is there access to reddit chats using the API?

2 Comments
2024/03/18
15:46 UTC

1

Is it possible to get analytics of posts for a period of dates using the API?

Is it possible to get analytics of posts for a period of dates using the API?

1 Comment
2024/03/18
15:45 UTC

2

I am teaching myself to code, and have a Reddit API script that works sometimes but then stops working.

TLDR: Just by explaining the problem to the plural you, I seem to have discovered the cause: the list of 'used' post IDs caps out at 14, and at that point the code stops saving new text documents; it only saves new IDs to the list and rearranges it within the 14-line limit.

So... that's my new question I guess.

ChatGPT

The code you provided seems to be designed to fetch jokes from the subreddit "jokes" using the PRAW (Python Reddit API Wrapper) library, save them to text files, and keep track of which jokes have been saved by storing their IDs in a file named "saved_jokes.txt". However, it's not immediately clear why it would stop working after saving 14 lines of post IDs to "saved_jokes.txt".

Here are a few potential reasons why the code might not be working as expected:

API Rate Limiting: Reddit's API enforces rate limits, which restrict the number of requests you can make within a certain time period. If your script is making too many requests too quickly, it might get rate-limited, causing it to stop working temporarily. You can check for any error messages related to rate limiting in your script's output or in the Reddit API response.

No New Jokes: The code appears to fetch jokes from the subreddit's "hot" section and saves them if they haven't been saved before. If there are no new jokes in the hot section, the loop will break after processing all available submissions. Check if there are new jokes being posted in the subreddit during the time you're running the script.

Error Handling: Ensure that error handling is implemented robustly throughout the script. If there's an unhandled exception, the script might terminate prematurely without any indication of failure.

File Permissions: Check if the script has the necessary permissions to read from and write to files. If there are permission issues, it might fail to read the list of saved jokes or write new IDs to the file.

Logging: Implement logging throughout your script to track its behavior and identify any unexpected issues. Logging can help you understand where the script might be failing and why.

To troubleshoot the issue, you can try adding more logging statements throughout the code to track its execution and identify where it might be encountering problems. Additionally, you can manually check the subreddit to see if there are any new jokes being posted and verify if the script is correctly identifying and processing them.

...

Client ID, Secret, and user agent removed because I assume that's sensible.

I don't know what the rules about web scraping are, and I can't find information about how many API calls I can make for free (the only thing I found suggested 200 a day?). I'm just trying to play around with things, and Reddit has a public API to play with (what other sites do that I could play around with?)

Anyway. This code should copy the title and body from a post in r/jokes and then save it to a text document in a subfolder called /jokes; the document should be named jokes_date_time.txt to ensure unique filenames. There is also a part of the code that prevents duplicates by keeping a log of the IDs of all posts that are accessed.

So. This code just worked twice in a row, and then the third time I ran it, it did not create the text file, but it still updated the log of used posts to prevent duplicates. Based on earlier experimentation (and I just checked again), at this point the code will add IDs to the "don't access" list, but it will not save another text file.

So my question is... why? Is this a code issue or an API issue?

I am not a programmer/coder, so I apologise as I am out of my depth. I have mostly been using ChatGPT 3.5 to write the bulk of this, and then reading it to see if I can understand the constituent parts.

...

When it works I get

Joke saved to: jokes\joke_2024-03-18_05-52-50.txt

Joke saved.

When it doesn't work I only get

Joke saved.

...

I have JUST noticed that the list of saved jokes caps out at 14, and each time I run it the list changes but is still only 14 lines :/

OK SO THAT WAS THE ANSWER, Thanks so much for your help. I haven't even submitted this yet but... maybe I'll submit it anyway? Maybe someone can teach me something.

...

import praw
from datetime import datetime
import os

# Reddit API credentials
client_id = " "
client_secret = " "
user_agent = "MemeMachine/1.0 by /u/ "

# Initialize Reddit instance
reddit = praw.Reddit(client_id=client_id,
                     client_secret=client_secret,
                     user_agent=user_agent)

# Subreddit to fetch jokes from
subreddit = reddit.subreddit('jokes')

# Function to save joke to a text file
def save_joke_to_file(title, body):
    now = datetime.now()
    timestamp = now.strftime("%Y-%m-%d_%H-%M-%S")
    filename = os.path.join("jokes", f'joke_{timestamp}.txt')  # Save to subfolder 'jokes'
    try:
        with open(filename, 'w', encoding='utf-8') as file:
            file.write(f'{title}\n\n')
            file.write(body)
        print(f'Joke saved to: {filename}')
    except Exception as e:
        print(f'Error saving joke: {e}')

# Create subfolder if it doesn't exist
if not os.path.exists("jokes"):
    os.makedirs("jokes")
    print("Created 'jokes' folder.")

# File to store IDs of saved jokes
saved_jokes_file = 'saved_jokes.txt'

# Fetch one joke
saved_jokes = set()
if os.path.exists(saved_jokes_file):
    with open(saved_jokes_file, 'r') as file:
        saved_jokes.update(file.read().splitlines())

for submission in subreddit.hot(limit=10):  # Adjust limit as needed
    if submission.id not in saved_jokes:
        title = submission.title
        body = submission.selftext.split("edit:", 1)[0]  # Exclude anything after "edit:"
        save_joke_to_file(title, body)
        saved_jokes.add(submission.id)
        break

# Update saved jokes file
with open(saved_jokes_file, 'w') as file:
    file.write('\n'.join(saved_jokes))

print('Joke saved.')

6 Comments
2024/03/18
13:22 UTC

1

Use PRAW to extract report reasons for a post?

How would I go about using PRAW to retrieve all reports on a specific post or comment?
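
A hedged sketch of one way this is commonly read, assuming the authenticated account moderates the subreddit: reported items carry user_reports and mod_reports attributes once fetched.

```python
# hedged sketch; `reddit` is an authenticated praw.Reddit moderator instance
submission = reddit.submission("abc123")  # placeholder post id

# each user_reports entry is typically [reason, count]; each mod_reports entry is [reason, moderator]
for reason, count in submission.user_reports:
    print(f"user report: {reason} (x{count})")
for reason, mod in submission.mod_reports:
    print(f"mod report: {reason} (by {mod})")
```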

2 Comments
2024/03/18
13:11 UTC

4

Reddit bans my account after replying to a post comment via API.

Why does Reddit ban my account when I try to reply to a comment via the Reddit API? I'm using the /api/comment endpoint. This is my code example:

const data = {
  api_type: 'json',
  thing_id: `t1_${parentId}`,
  text,
};

const result = await axios.post(
  'https://oauth.reddit.com/api/comment',
  {},
  { params: data, headers: { 'Authorization': `Bearer ${accessToken}` } }
);

My request is successful, but after creating a comment, Reddit permanently bans my account. What could be the problem?

5 Comments
2024/03/18
11:58 UTC

2

How to create an Oauth 2.0 connection through Make/Integromat's HTTP module “Make an OAuth 2.0 request”?

Once I click "save" on the connection, I'm redirected to Reddit, where I'm asked to allow the API to access posts and comments through my account, with a 1-hour expiration.

After I allow this I am redirected to a page with JSON mentioning:

`The request failed due to failure of a previous request`
with a code `SC424`

These are my settings in the Make module,

Connection details:
My HTTP OAuth 2.0 connection | Reddit
Flow Type: Authorization Code
Authorize URI: https://www.reddit.com/api/v1/authorize
Token URI: https://www.reddit.com/api/v1/access_token
Scope: read
Client ID: MY CLIENT ID
Client Secret: MY CLIENT SECRET
Authorize parameters:
response_type: code
redirect_uri: https://www.integromat.com/oauth/cb/oauth2
client_id: MY CLIENT ID
Access token parameters
grant_type: authorization_code
client_id: MY CLIENT ID
client_secret: MY CLIENT SECRET
Refresh Token Parameters:
grant_type: refresh_token
Custom Headers:
User-Agent: web:MakeAPICalls:v1.0 (by u/username)
Token placement: in the header
Header token name: Bearer

I have asked this in the Make community, but I haven't received a response yet, so I'm trying my luck here.

For included screenshots check:
https://community.make.com/t/request-failed-due-to-failure-of-previous-request-connecting-2-reddit-with-http-make-an-oauth-2-0-request/30604

0 Comments
2024/03/18
08:19 UTC

4

Is there a way to REALLY wipe account history from the API?

I deleted all my posts and comments, but when I go to sites that analyze Reddit accounts, comments I previously made are still visible there.

2 Comments
2024/03/17
18:52 UTC

1

Is this possible and if so how can it be done?

Currently, you can only view the first 1,000 posts per subreddit at any given time. The problem with this is that almost all subreddits have more than a thousand posts. The only way to beat the limit is to use the search tab, where you search for a term within a subreddit and receive all the results with said term. This method has clear limitations and is quite time-consuming.

Well, I am proposing a solution and I would like to know how doable it is. I propose we use the search method, but automated, including the search terms to be used. It would work like this: it would analyze the first 1,000 posts of a subreddit, checking for recurring words, and then use those words to search for more posts. The results from those searches would be analyzed as well, and further searches would be done, and so on until we get no further results. As for unique or non-recurring words, a secondary line of analysis and searches could take place. For words that do not appear in the 1,000 posts, we could use ChatGPT to give us words associated with that subreddit. If we really wanted to go crazy, we could use every word in the dictionary. I imagine all of this taking place in the background, while to normal users it looks like your normal Reddit app with infinite scrolling, without the limit. We'd also have a filter to prevent posts from repeating.

I'm asking y'all to let me know if this is doable and, if not, why not. If it is doable, how can I make it happen? I thank you in advance.

4 Comments
2024/03/16
17:31 UTC

1

Trouble getting working list from PRAW to work in ASYNCPRAW

Hello all,

The following code works fine in PRAW:

top25_news = reddit.subreddit('news').top(time_filter='year',limit=25)
list(top25_news)

However, as I'm migrating the code to Async PRAW, the first line runs fine and creates a ListingGenerator object, but the second line raises an error saying that the ListingGenerator object is not iterable.
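
If it helps, a minimal sketch of the Async PRAW equivalent (listings there are async iterators, so they are consumed with async for or an async comprehension, and the subreddit lookup is also awaited):

```python
import asyncpraw

async def top25_news(reddit: asyncpraw.Reddit):
    subreddit = await reddit.subreddit('news')
    return [submission async for submission in subreddit.top(time_filter='year', limit=25)]
```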

I've found a few other somewhat annoying things, like the submission title for a comment being unavailable in Async PRAW while it's fine in PRAW.

Any help is appreciated - thanks!

2 Comments
2024/03/15
23:01 UTC

0

Can someone please cross section Reddit users from a list of subreddits for me?

I can't figure out how to do it. I know there are scripts already out there, but I'm kinda useless.

Thank you so much in advance!

1 Comment
2024/03/15
21:25 UTC

1

Does .json have any rate limits?

If I access https://www.reddit.com/r/crossfit/comments/1bf7o4m/tiebreak_question.json

and posts like that from my server,

will I get rate limited?

3 Comments
2024/03/15
20:18 UTC

0

Want to build a 3rd party app that blocks all NSFW posts

It seems doable from the API documentation, but just wanted to double check if I'm missing something before going down this path. I understand there is the `over_18` flag for posts and `include_over_18` flag for subreddit search.
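
As a minimal sketch of the post-level check (assuming PRAW; the setup and subreddit are placeholders):

```python
import praw

reddit = praw.Reddit()  # placeholder: credentials via praw.ini or passed explicitly

# skip anything Reddit has marked NSFW via the over_18 flag on each post
for submission in reddit.subreddit("all").hot(limit=100):
    if submission.over_18:
        continue
    print(submission.title)
```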

4 Comments
2024/03/15
18:14 UTC

1

Use PRAW to get data from r/Mod?

Is it possible to use PRAW to get my r/Mod modqueue or reports queue? I'd like to be able to retrieve the combined reports queue for all of the subreddits I moderate.

2 Comments
2024/03/15
17:58 UTC

2

Trying to eliminate a step in this code where PRAW can figure out if the link is a post or comment.

The following code works well for banning users, but I'm trying to eliminate the step where I tell it whether the link is a post [1] or a comment [2]. Is it possible to have code where PRAW determines the link type and proceeds from there? Any suggestions would be great. Still somewhat of a beginner.

I essentially right-click on the link in Old Reddit, copy link, and paste it into the terminal window for the code to issue the ban.

print("ban troll")
now = datetime.now()
sub = 'SUBREDDITNAME'
HISTORY_LIMIT = 1000

url = input('URL: ')
reason = "trolling."
print(reason)
reddit_type = input("[1] for Post or [2] for Comment? ").upper()
print(reddit_type)
if reddit_type not in ('1', '2'):
    raise ValueError('Must enter `1` or `2`')

author = None
offending_text = ""
post_or_comment = "Post"
if reddit_type == "2":
    post_or_comment = "Comment"

if reddit_type == "1":
    post = reddit.submission(url=url)
    author = post.author
    offending_text = post.selftext
    title = post.title
    post.mod.remove()
    post.mod.lock()
    unix_time = post.created_utc
elif reddit_type == "2":
    comment = reddit.comment(url=url)
    title = ""
    offending_text = comment.body
    author = comment.author
    comment.mod.remove()
    unix_time = comment.created_utc

message_perm = f"**Ban reason:** {reason}\n\n" \
               f"**Ban duration:** Permanent.\n\n" \
               f"**Username:** {author}\n\n" \
               f"**{post_or_comment} link:** {url}\n\n" \
               f"**Title:** {title}\n\n" \
               f"**{post_or_comment} text:** {offending_text}\n\n" \
               f"**Date/time of {post_or_comment} (yyyy-mm-dd):** {datetime.fromtimestamp(unix_time)}\n\n" \
               f"**Date/time of ban (yyyy-mm-dd):** {now}"

reddit.subreddit(sub).banned.add(author, ban_message=message_perm)
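
One possible approach (a hedged sketch, not PRAW-specific): old-Reddit comment permalinks carry an extra path segment after the post id and title, so the link type can be guessed from the URL itself. link_kind below is a hypothetical helper.

```python
from urllib.parse import urlparse

def link_kind(url):
    # /r/<sub>/comments/<post_id>/<title>/            -> post
    # /r/<sub>/comments/<post_id>/<title>/<comment>/  -> comment
    parts = [p for p in urlparse(url).path.split("/") if p]
    if "comments" not in parts:
        return None
    extra = len(parts) - parts.index("comments") - 1  # segments after "comments"
    return "comment" if extra >= 3 else "post"

url = input('URL: ')
if link_kind(url) == "comment":
    comment = reddit.comment(url=url)
else:
    post = reddit.submission(url=url)
```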
5 Comments
2024/03/15
17:39 UTC

1

Troubles Moving from PRAW to ASYNCPRAW: 'NoneType' object is not iterable Error When Processing Comments

I've recently been transitioning a project from PRAW to ASYNCPRAW in hopes of leveraging asynchronous operations for better efficiency when collecting posts and comments from a subreddit.

**The Issue:** While fetching and processing comments for each post, I consistently encounter a `TypeError: 'NoneType' object is not iterable`. This issue arises during `await post.comments.replace_more(limit=None)` and when attempting to list the comments across all posts.

```

    async def collect_comments(self, post):
        try:
            logger.debug(f"Starting to collect comments for post: {post.id}")
            
            if post.comments is not None:
                logger.debug(f"Before calling replace_more for post: {post.id}")
                await post.comments.replace_more(limit=None)
                logger.debug(f"Successfully called replace_more for post: {post.id}")
                comments_list = await post.comments.list()
                logger.debug(f"Retrieved comments list for post: {post.id}, count: {len(comments_list)}")

                if comments_list:
                    logger.info(f"Processing {len(comments_list)} comments for post: {post.id}")
                    for comment in comments_list:
                        if not isinstance(comment, asyncpraw.models.MoreComments):
                            await self.store_comment_details(comment, post.id, post.subreddit.display_name)
                else:
                    # Log if comments_list is empty or None
                    logger.info(f"No comments to process for post: {post.id}")
            else:
                # Log a warning if post.comments is None
                logger.warning(f"Post {post.id} comments object is None, skipping.")
        except TypeError as e:
            # Step 4: Explicitly catch TypeError
            logger.error(f"TypeError encountered while processing comments for post {post.id}: {e}")
        except Exception as e:
            # Catch other exceptions and log them with traceback for debugging
            logger.error(f"Error processing comments for post {post.id}: {e}", exc_info=True)

```

Apologies for all the logger and print statements.

Troubleshooting Attempts:

  1. Checked for null values before processing comments to ensure post.comments is not None.
  2. Attempted to catch and handle TypeError specifically to debug further.
  3. Searched for similar issues in ASYNCPRAW documentation and GitHub issues but found no conclusive solutions.

Despite these efforts, the error persists. It seems to fail at fetching or interpreting the comments object, yet I can't pinpoint the cause or a workaround.

**Question:** Has anyone faced a similar issue when working with ASYNCPRAW, or can anyone provide insights into why this TypeError occurs and how to resolve it? I'm looking for any advice or solutions that could help. Thanks in advance for the help.

9 Comments
2024/03/15
00:55 UTC

2

Getting the top posts of a subreddit

I am sorry for the silly question, but is it possible to extract the top posts of a subreddit (weekly, monthly, yearly)? I checked the API documentation but I could not figure it out. One way to get top posts is through the JSON listing: https://www.reddit.com/r/funny/top.json

but it's not clear which top posts it will fetch: the top posts in the last 24 hours, the last week, or the last month.

TLDR: I'm unable to figure out an API call to get the top weekly and monthly posts of a subreddit. If such an API does not exist, is there any workaround?

kindly guide.
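
For what it's worth, a minimal sketch of both routes, assuming the listing endpoints accept a t parameter for the timeframe (hour, day, week, month, year, all) and PRAW exposes the same choice as time_filter; the user agent is a placeholder:

```python
import requests

headers = {"User-Agent": "top-posts-example/0.1 by u/your_username"}  # placeholder UA

# JSON listing route: ?t= selects the timeframe
resp = requests.get("https://www.reddit.com/r/funny/top.json",
                    params={"t": "week", "limit": 25}, headers=headers)
for child in resp.json()["data"]["children"]:
    print(child["data"]["title"])

# PRAW route (reddit is an authenticated praw.Reddit instance):
# for submission in reddit.subreddit("funny").top(time_filter="week", limit=25):
#     print(submission.title)
```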

4 Comments
2024/03/14
16:17 UTC

2

Reddit API

Hi, I was trying to extract posts from Reddit for my final-year project, but I'm not sure whether it's legal to extract the posts. If it is, how do I do it? Can anyone help with this? Thanks!

16 Comments
2024/03/14
13:39 UTC
