9/6/2023

Sap gui download reddit

The official way:

```python
r = praw.Reddit('Comment Scraper 1.0 by u/_Daimon_ see ...')
submission = r.get_submission(submission_id='11v36o')
```

Is there a way to speed this up? There are people who have extracted every Reddit comment into a database, so there must be some way to do this quicker.

Edit: the new praw API (6.0.0) has list(), which makes the job easier. It also handles the AttributeError that MoreComments objects can otherwise cause, through the use of replace_more(limit=None):

```python
submission.comments.replace_more(limit=None)
submissionList = submission.comments.list()
```

Edit: the new praw API (5.0.1) is magical and makes this much easier. Here is how to do it now:

```python
def getSubComments(comment, allComments, verbose=True):
    allComments.append(comment)
    if not hasattr(comment, "replies"):
        # A MoreComments stub: fetch the actual comments behind it.
        replies = comment.comments()
        if verbose: print("fetching (" + str(len(allComments)) + " comments fetched total)")
    else:
        replies = comment.replies
    for child in replies:
        getSubComments(child, allComments, verbose=verbose)

def getAll(r, submissionId, verbose=True):
    submission = r.submission(submissionId)
    commentsList = []
    for comment in submission.comments:
        getSubComments(comment, commentsList, verbose=verbose)
    return commentsList

username = ""           # your Reddit username
userAgent = "MyAppName/0.1 by " + username
clientId = ""           # your app's client id
clientSecret = ""       # your app's client secret
r = praw.Reddit(user_agent=userAgent, client_id=clientId, client_secret=clientSecret)

res = getAll(r, "6rjwo1")
#res = getAll(r, "6rjwo1", verbose=False)  # This won't print out progress if you want it to be silent.
```

Okay, I wrote code that can reliably pull every comment from a thread; it takes about 10 seconds for 500 comments and about a minute for 4000 comments. I named it redApi.py. Here it is:

```python
import time
...
client_auth = (clientId, clientSecret)
...
    return praw.Reddit(user_agent=userAgent, client_id=clientId, client_secret=clientSecret)
...
    for childComment in parentComment:
...
post_data = ...
```
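The recursion above is just a depth-first walk of the comment tree: record the current node, then descend into each reply. A minimal, dependency-free sketch of that traversal, using hypothetical dict-based comments in place of PRAW objects (the `body`/`replies` keys are assumptions for illustration, not PRAW's API):

```python
def walk_comments(comment, collected):
    # Depth-first: record this comment's body, then recurse into its replies.
    collected.append(comment["body"])
    for child in comment.get("replies", []):
        walk_comments(child, collected)
    return collected

# Hypothetical thread: one top-level comment with a nested reply chain.
thread = {
    "body": "top",
    "replies": [
        {"body": "child1", "replies": [{"body": "grandchild"}]},
        {"body": "child2"},
    ],
}

print(walk_comments(thread, []))  # → ['top', 'child1', 'grandchild', 'child2']
```

The output order (parent before children, siblings in listing order) is the same order getSubComments produces, which is why the fetched list reads like a flattened version of the thread.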