Hey, I'm trying to use your code to gather a large number of empty meme templates, and I need more than the 100 examples you provided. I'm struggling to get the project to run: if I run run.sh with Git Bash, it just empties the run.sh file, other methods didn't seem to work either, and I also tried running the Python files directly.
Could you provide a more detailed guide on how to start the run.sh file? Or is it broken because something on imgflip changed?
Also, I know this is asking for a lot, but if you could alternatively just run the script for me with a setting of more than 100 meme templates and provide that dataset, that would be very much appreciated.
Thanks for any reply, have a nice day.
The scraper is designed to fetch the most popular templates from https://imgflip.com/popular_meme_ids and store them in dataset/popular_100_memes.csv. The templates and memes are then downloaded into the dataset based on that file, so if you need more templates, you'll need a larger popular_100_memes.csv.
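If imgflip's markup hasn't changed too much, a sketch like the one below could rebuild that CSV with more rows. To be clear, this is not the repo's actual scraper: the assumption that the page still renders plain HTML tables of numeric IDs and names, the column order, and the output path dataset/popular_memes.csv are all guesses you'd need to check against the live page.

```python
# Sketch: scrape imgflip's popular meme IDs page into a CSV.
# Assumes the page renders HTML <table> rows of (numeric id, name);
# if imgflip changed its markup, the selectors below need adjusting.
import csv
import os

import requests
from bs4 import BeautifulSoup

URL = "https://imgflip.com/popular_meme_ids"

resp = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"})
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

rows = []
for table in soup.find_all("table"):
    for tr in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        # Keep rows that look like (template_id, template_name).
        if len(cells) >= 2 and cells[0].isdigit():
            rows.append((cells[0], cells[1]))

os.makedirs("dataset", exist_ok=True)
# Output path and header are assumptions, not the repo's exact schema.
with open("dataset/popular_memes.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "name"])
    writer.writerows(rows)

print(f"Wrote {len(rows)} template rows")
```

If that produces more rows than the page actually lists, or zero rows, the table structure has probably changed and you'd need to inspect the page source to update the parsing.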
Pretty sure something on imgflip's side has changed, since the project was done two years ago.