This project crawls metadata from the 🚗car auction service Gratka.pl. Metadata such as:
'marka', 'model', 'cena', 'miasto', 'wojewodztwo', 'stan_techniczny', 'przebieg', 'rodzaj_ogłoszenia',
'do_negocjacji', 'typ_nadwozia', 'stan_pojazdu', 'rok_produkcji', 'rodzaj_paliwa', 'pojemność_silnika_cm3',
'moc_silnika', 'skrzynia_biegów', 'zarejestrowany_w_polsce', 'kraj_pierwszej_rejestracji', 'kolor',
'liczba_drzwi', 'liczba_miejsc', 'numer_vin', 'ważny_przegląd', 'link'
Once the spider finishes its job, the data is stored in PostgreSQL. The next step is data cleansing and visualisation, done with a Jupyter notebook.
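For reference, a minimal sketch of how these fields could be declared as a Scrapy Item (field names come from the list above; the actual items.py in gratkascrap may look different):

```python
# items.py (sketch) - declares a subset of the metadata fields listed above.
# The real gratkascrap/items.py may define them differently.
import scrapy


class GratkaCarItem(scrapy.Item):
    marka = scrapy.Field()          # make
    model = scrapy.Field()
    cena = scrapy.Field()           # price
    miasto = scrapy.Field()         # city
    wojewodztwo = scrapy.Field()    # voivodeship
    przebieg = scrapy.Field()       # mileage
    rok_produkcji = scrapy.Field()  # production year
    link = scrapy.Field()
    # ...the remaining fields from the list above follow the same pattern
```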
This project uses 3 Docker containers:
- Container with Python and Scrapy
  - The gratka spider inherits from scrapy.Spider (the Scrapy script crawls metadata from every car-selling advertisement and also saves an HTML file for each ad in a folder); a minimal sketch follows this list
- Container with PostgreSQL
- Container with Jupyter Notebook
  - Notebook created to cleanse and visualise the data; libraries used:
    - pandas
    - geopandas
    - numpy
    - matplotlib
    - seaborn
    - pylab
    - psycopg2
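A minimal sketch of such a spider, assuming a placeholder start URL and placeholder CSS selectors (the actual gratka.py in this repo will differ):

```python
# gratka.py (sketch) - follows every advertisement link from a listing page,
# saves the raw HTML of each ad into HTML_FILES and yields the metadata.
# The start URL and selectors are placeholders, not the real ones in this repo.
import uuid
from pathlib import Path

import scrapy


class GratkaSpider(scrapy.Spider):
    name = "gratka"
    start_urls = ["https://gratka.pl/motoryzacja/osobowe"]  # assumed entry point

    def parse(self, response):
        # follow every ad link found on the listing page
        for href in response.css("a.teaser__anchor::attr(href)").getall():  # placeholder selector
            yield response.follow(href, callback=self.parse_ad)

    def parse_ad(self, response):
        # save the raw HTML of the advertisement, one UUID-named file per ad
        out_dir = Path("HTML_FILES")
        out_dir.mkdir(exist_ok=True)
        (out_dir / f"{uuid.uuid4()}.html").write_bytes(response.body)

        # yield the metadata; the real parsing logic lives in the repo's gratka.py
        yield {
            "marka": response.css("span.parameters__value::text").get(),  # placeholder selector
            "link": response.url,
        }
```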
Project structure:
├── Database
│ └── create_table.sql
├── docker-compose.yml
├── gratkascrap
│ ├── Dockerfile
│ ├── HTML_FILES
│ │ └── 00a3d318-6fcd-4bb8-9bd7-6c1b7ac4d69c.html
│ ├── __init__.py
│ ├── gratkascrap
│ │ ├── __init__.py
│ │ ├── items.py
│ │ ├── middlewares.py
│ │ ├── pipelines.py
│ │ ├── settings.py
│ │ └── spiders
│ │   ├── __init__.py
│ │   ├── __pycache__
│ │   │ ├── __init__.cpython-39.pyc
│ │   │ └── gratka.cpython-39.pyc
│ │   └── gratka.py
│ ├── requirements.txt
│ └── scrapy.cfg
├── notebook
│ ├── Dockerfile
│ ├── data_visualisation.ipynb
│ ├── requirements.txt
│ ├── voivodeship.shp
│ └── voivodeship.shx
└── .env-sample
To run this project properly, you should assign the environment variables in a .env file.
This repo contains a .env-sample with the variables used to run the containers. You need to assign the variables below in your .env file:
DATABASE_PASSWORD=
JUPYER_TOKEN=
POSTGRES_DB=
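These values are exposed to the containers as environment variables; as a hedged illustration (assuming a docker-compose service named `db` and the default `postgres` user, which may differ in this repo), the Scrapy pipeline could open its database connection like this:

```python
# Sketch: opening the PostgreSQL connection using the .env variables.
# Host "db" and user "postgres" are assumptions, not taken from this repo.
import os

import psycopg2

connection = psycopg2.connect(
    host="db",
    dbname=os.environ["POSTGRES_DB"],
    user="postgres",
    password=os.environ["DATABASE_PASSWORD"],
)
```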
- Clone the project
- Go to the project directory and type in the CLI:
$ ls
You should see this:
Database docker-compose.yml gratkascrap notebook
Change directory to where the Scrapy Dockerfile is located:
$ cd gratkascrap
Build the Scrapy image: 🚨Docker must be running on your machine to run this command🚨
$ docker build -t scrapy_gratka .
Change directory back to the main directory:
$ cd ..
Change directory to where the notebook Dockerfile is located:
$ cd notebook
Build notebook image:
$ docker build -t notebook_gratka .
Change directory back to the main directory to run Docker Compose:
$ cd ..
Run Docker Compose:
$ docker-compose up
Now all three containers are running. It will take about 15-20 minutes for Scrapy to crawl all pages; you will see in the terminal when Scrapy finishes its job. When Scrapy has finished, open Jupyter Lab via localhost by typing in your browser:
localhost:8888
🚨In case the notebook requires a token, pass the value that was assigned to JUPYER_TOKEN in .env🚨
Next choose the file 🗒️data_visualisation.ipynb and run all cells to see the data analysis.
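For orientation, a minimal sketch of the kind of code the notebook runs, assuming the scraped data lands in a table named `cars` and the connection details come from .env (the real data_visualisation.ipynb may differ):

```python
# Sketch: load scraped data from PostgreSQL into pandas, clean it and plot it.
# Table name "cars", host "db" and user "postgres" are assumptions.
import os

import matplotlib.pyplot as plt
import pandas as pd
import psycopg2

connection = psycopg2.connect(
    host="db",
    dbname=os.environ["POSTGRES_DB"],
    user="postgres",
    password=os.environ["DATABASE_PASSWORD"],
)
df = pd.read_sql("SELECT * FROM cars", connection)  # placeholder table name

# basic cleansing: cast price to numeric and drop rows without a price
df["cena"] = pd.to_numeric(df["cena"], errors="coerce")
df = df.dropna(subset=["cena"])

# simple visualisation: average price per make
df.groupby("marka")["cena"].mean().sort_values().plot(kind="barh")
plt.xlabel("Average price [PLN]")
plt.tight_layout()
plt.show()
```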