The API Scraper is a Python 3.x tool designed to find "hidden" API calls powering a website.
Remark: structurally, wrapping the entire application in the middleware app variable is the foundation of the Flask framework.
In this project's architecture, a class is reached through its module name (usually the script's filename), and the work is then done by calling the methods that class contains.
The __init__() constructor therefore does more than initialize a class's attributes; it is also a clue to whether a module maps one-to-one onto a class. For example, the apicall module (filename) actually contains several classes rather than APICall alone, which is why it has several __init__() definitions.
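As a rough sketch of that convention (the attributes and method below are hypothetical and are not taken from the project's actual apicall.py):

# A module is usually named after its script file, e.g. apicall.py, and exposes one or more classes.
class APICall:
    def __init__(self, url, method="GET"):
        # __init__() does nothing but initialize the instance's attributes.
        self.url = url
        self.method = method

    def describe(self):
        # The real work happens in methods called on the instance after construction.
        return f"{self.method} {self.url}"

# A caller imports the module, instantiates the class it needs, then calls its methods:
call = APICall("https://example.com/api/items")
print(call.describe())  # -> "GET https://example.com/api/items"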
The following Python libraries should be installed (with pip, or the package manager of your choice):
- Selenium
- Requests
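For example, with pip:
$ pip install selenium requests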
The paths to the chromedriver and browsermob-proxy binaries are wired in where the Browser object is constructed; adjust them if your copies live elsewhere:
# Arguments: path to the chromedriver binary, path to the browsermob-proxy binary, and the HAR output directory
self.browser = Browser("chromedriver/chromedriver", "browsermob-proxy-2.1.4/bin/browsermob-proxy", self.harDirectory)
$ python3 consoleservice.py [commands]
If you are unsure which commands to use, pass the -h flag.
$ python3 consoleservice.py -h
usage: consoleService.py [-h] [-u [U]] [-d [D]] [-s [S]] [-c [C]] [--p]

optional arguments:
  -h, --help  show this help message and exit
  -u [U]      Target URL. If not provided, target directory will be scanned
              for har files.
  -d [D]      Target directory (default is "hars"). If URL is provided,
              directory will store har files. If URL is not provided,
              directory will be scanned.
  -s [S]      Search term
  -c [C]      Count of pages to crawl (with target URL only)
  --p         Flag, remove unnecessary parameters (may dramatically increase
              run time)
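Putting those flags together, a typical crawl might look like this (the URL and search term below are placeholders):
$ python3 consoleservice.py -u https://example.com -s login -c 5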
Kicking this off over HTTP isn't strictly necessary, but I'm including a Flask wrapper around the API Finder, so it might as well be documented!
Install Flask (pip install flask), point FLASK_APP at the wrapper, and start the development server:
$ export FLASK_APP=webservice.py
$ flask run
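The routes themselves live in webservice.py; purely as an illustrative sketch of what such a wrapper looks like (the /search endpoint, its query parameters, and the placeholder response below are assumptions, not the project's actual API), a minimal Flask app built around the module-level app variable might be:

from flask import Flask, jsonify, request

# The whole application hangs off this module-level app object (see the remark above).
app = Flask(__name__)

@app.route("/search")
def search():
    # Hypothetical endpoint: take a target URL and a search term as query parameters.
    url = request.args.get("url")
    term = request.args.get("term", "")
    # A real wrapper would hand these off to the API Finder; this sketch just echoes them back.
    return jsonify({"url": url, "term": term, "results": []})

With flask run, the development server listens on http://127.0.0.1:5000 by default.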