Eka's Portal (site request) #390
As an alternative solution I've written a shell script to do something similar, and perhaps give some ideas. It makes use of the "Latest Updates" page, so it doesn't deal with folders, and it also seems to pick up other people's submissions (I think from some kind of retweeting-like feature), but it's a small price to pay for salvation.

```sh
#!/bin/sh
# Downloads all posts from a user on Eka's Portal (g4)

if [ $# -eq 0 ]
then
    echo "Usage: $0 <artist> [optional cookie file]"
    exit 1
fi

# Get the number of pages
echo "Checking number of pages..."
pages=$(curl -s "https://aryion.com/g4/latest/$1" | grep -Eo "Page 1 of [[:digit:]]*" | head -n1 | awk 'NF>1{print $NF}')

echo "Getting list of IDs from $pages pages... (this might take a while)"

# Build the list of download URLs
list=\
"$(
    # Get all pages
    curl -s "https://aryion.com/g4/latest/$1&p=[1-$pages]" | \
    # Get all submission links
    grep view/ | \
    # Keep only what's inside each href
    sed -n 's/.*href="\([^"]*\).*/\1/p' | \
    # Strip everything before the last slash, leaving only IDs
    grep -o '[^/]*$' | \
    # Prepend the download URL to each ID
    awk '{ print "https://aryion.com/g4/data.php?id=" $0; }' | \
    # Newlines to spaces
    tr '\n' ' '
)"

# Start downloading!
curl --cookie "$2" --remote-name-all --fail --remote-time --remote-header-name $list
```
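As a self-contained illustration of the extraction pipeline above, here is the same chain of commands run against a hypothetical HTML fragment (the real gallery markup on aryion.com may differ):

```shell
#!/bin/sh
# Hypothetical two-link HTML fragment for illustration only.
html='<a href="/g4/view/366689">Post A</a>
<a href="/g4/view/366690">Post B</a>'

# Same pipeline as the script above: keep submission links, extract the
# href value, strip everything up to the last slash, prepend the download
# URL, and join the results with spaces.
list=$(printf '%s\n' "$html" |
    grep view/ |
    sed -n 's/.*href="\([^"]*\).*/\1/p' |
    grep -o '[^/]*$' |
    awk '{ print "https://aryion.com/g4/data.php?id=" $0; }' |
    tr '\n' ' ')

echo "$list"
```

Because `$list` is a space-separated string, passing it unquoted to curl (as the script does) lets the shell split it into one argument per URL.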
I've written some code adding basic support for user galleries and posts: 6143050
Thank you for adding support, however..

It also doesn't support folders. I know this might be complicated, but it would be greatly appreciated; currently it just dumps everything into one big folder, which can become messy when dealing with different comics etc. The rest seems fine, other than maybe prefixing the default filename.

You should be able to get this from the headers here, i.e. `/g4/data.php?id=…`
- get filename & extension from the Content-Disposition header
- handle all downloadable file types (docx, swf, etc)
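A minimal sketch of pulling the filename and extension out of a Content-Disposition header with POSIX tools. The header value here is made up for illustration; the real header sent by `data.php` may use different quoting or parameters:

```shell
#!/bin/sh
# Made-up header value for illustration only.
header='Content-Disposition: attachment; filename="comic_page_01.swf"'

# Extract the quoted filename parameter, then split off the extension.
filename=$(printf '%s\n' "$header" | sed -n 's/.*filename="\([^"]*\)".*/\1/p')
extension="${filename##*.}"

echo "$filename ($extension)"
```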
I hope the last few commits fix the remaining issues
Yo this is perfect, I guess it's time to retire my hackjob of a script. Thank you so much for this. <3
I don't have anything set up regarding donations or similar, and I don't think that's going to change, but thank you for the offer. (If you really want to get rid of your money, you can send it to the PayPal account associated with my email address ... someone actually did that for Christmas ...)
Page: https://aryion.com/ (NSFW)
Most files can be downloaded without authentication, but there are rare cases.
Example gallery: https://aryion.com/g4/gallery/jameshoward (NSFW)
Example post: https://aryion.com/g4/view/366689 (SFW)
Download URL: https://aryion.com/g4/data.php?id=366689 (SFW)
All posts can be downloaded like this, by putting the post ID after `data.php?id=`.

Headers:

Note how the Last-Modified header is malformed; if this could also be adjusted, I'd be forever in your debt.
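For reference, fetching a single post just means building the URL from a post ID; the (commented-out) curl flags are the same ones the script above uses, with `--remote-header-name` picking up the server-supplied filename:

```shell
#!/bin/sh
# Build the download URL for one post ID (the SFW example from above).
id=366689
url="https://aryion.com/g4/data.php?id=$id"
echo "$url"

# Uncomment to actually download, saving under the name from the
# Content-Disposition header (a cookie file may be needed for rare posts):
# curl --fail --remote-name --remote-header-name --remote-time "$url"
```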