Below are the steps to perform a production deploy.
1. The entire deployment was configured to be simple using Docker Compose, so you need to install Docker and Docker Compose.
2. Create the PostgreSQL Docker volume:

   ```bash
   docker volume create --name=modulector_postgres_data
   ```

3. Make a copy of `docker-compose_dist.yml` with the name `docker-compose.yml`.
4. Set the environment variables that are empty with data. They are listed below by category:
   - Django:
     - `DJANGO_SETTINGS_MODULE`: indicates the `settings.py` file to read. In production, `docker-compose_dist.yml` sets the value `ModulectorBackend.settings_prod`, which contains several production properties.
     - `ALLOWED_HOSTS`: list of allowed hosts (separated by commas) that can access Modulector. Default: `web,localhost,127.0.0.1,::1`.
     - `ENABLE_SECURITY`: set the string `true` to enable Django's security mechanisms. In addition to this parameter, to have a secure site you must configure the HTTPS server; for more information on the latter, see the Enable SSL/HTTPS section. Default: `false`.
     - `CSRF_TRUSTED_ORIGINS`: in Django >= 4.x it is mandatory to define this in production when you are using Daphne through NGINX. The value is a single host or a list of hosts separated by commas; the `http://` or `https://` prefix is mandatory. Example values: `http://127.0.0.1`, `http://127.0.0.1,https://127.0.0.1:8000`, etc. You can read more in the Django documentation.
     - `SECRET_KEY`: Django's secret key. If not specified, one is generated automatically with the generate-secret-key application.
     - `MEDIA_ROOT`: absolute path where the uploaded files will be stored. Default: `<project root>/uploads`.
     - `MEDIA_URL`: URL of the `MEDIA_ROOT` folder. Default: `<url>/media/`.
     - `PROCESS_POOL_WORKERS`: some requests use parallelized queries (via a ProcessPoolExecutor) to improve performance. This parameter indicates the number of workers to use. Default: `4`.
   - Postgres:
     - `POSTGRES_USERNAME`: database username. By default, the Docker image uses `modulector`.
     - `POSTGRES_PASSWORD`: database username's password. By default, the Docker image uses `modulector`.
     - `POSTGRES_PORT`: database server listen port. By default, the Docker image uses `5432`.
     - `POSTGRES_DB`: database name to use. By default, the Docker image uses `modulector`.
   - Health-checks and alerts:
     - `HEALTH_URL`: indicates the URL that Docker health-checks will request. The health-check makes a GET request to it, and any HTTP status code greater than or equal to 400 is considered an error. Default: `http://localhost:8000/drugs/`.
     - `HEALTH_ALERT_URL`: if you want to receive an alert when health-checks fail, set this variable to a webhook endpoint. It will receive a POST request with a JSON body whose `content` field contains the failure message.
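   For reference, here is a minimal sketch of how these variables might look in `docker-compose.yml`. The service name `web` and all the values are illustrative; keep the structure that `docker-compose_dist.yml` already provides:

   ```yaml
   # Illustrative excerpt only: service name and values are examples,
   # not the exact contents of docker-compose_dist.yml.
   services:
     web:
       environment:
         DJANGO_SETTINGS_MODULE: "ModulectorBackend.settings_prod"
         ALLOWED_HOSTS: "web,localhost,127.0.0.1,::1"
         ENABLE_SECURITY: "false"
         CSRF_TRUSTED_ORIGINS: "https://modulector.example.org"  # hypothetical host
         POSTGRES_USERNAME: "modulector"
         POSTGRES_PASSWORD: "modulector"
         POSTGRES_PORT: "5432"
         POSTGRES_DB: "modulector"
         HEALTH_URL: "http://localhost:8000/drugs/"
   ```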
5. Go back to the project's root folder and run the following commands:
   - Docker Compose:
     - Start: `docker compose up -d`. The service will be available on `127.0.0.1`.
     - Stop: `docker compose down`
   - Docker Swarm:
     - Start: `docker stack deploy --compose-file docker-compose.yml modulector`
     - Stop: `docker stack rm modulector`
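   To confirm that the containers actually came up after starting, you can list them with a standard Docker Compose command:

   ```bash
   # Shows each service's state and, where configured, its health status.
   docker compose ps
   ```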
6. Import all the data following the instructions detailed in the Import section.
7. (Optional) Create a superuser to access the admin panel (`<URL>/admin`):
   - Enter the running container: `docker container exec -it <backend_container_name> bash`. The name is usually `modulector-web_modulector-1`, but you can check it with `docker container ps`.
   - Run: `python3 manage.py createsuperuser`
   - Exit the container: `exit`
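   Alternatively, the superuser can be created in a single step without opening an interactive shell. The container name below is the usual default mentioned above; verify it with `docker container ps`:

   ```bash
   # One-liner alternative; assumes the default backend container name.
   docker container exec -it modulector-web_modulector-1 python3 manage.py createsuperuser
   ```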
Due to the database restoration on first start, the `db_modulector` container may take a while to be up and ready. You can follow the status of the startup process in the logs by running `docker compose logs --follow`.
Sometimes this delay makes the Django server throw database connection errors. If it is still down and is not automatically fixed once the database is finally up, you can restart the services by running `docker compose up -d` again.
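If you prefer to script this wait instead of watching the logs, here is a minimal sketch that polls the default `HEALTH_URL` from this guide (adjust the URL if you changed that variable):

```bash
# curl -f fails on any HTTP status >= 400, matching the health-check rule above.
until curl -sf http://localhost:8000/drugs/ > /dev/null; do
    echo "Waiting for Modulector to be ready..."
    sleep 5
done
echo "Modulector is up"
```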
To enable HTTPS, follow the steps below:
1. Set the `ENABLE_SECURITY` parameter to `true`, as explained in the Instructions section.
2. Copy the file `config/nginx/multiomics_intermediate_safe_dist.conf` and paste it into `config/nginx/conf.d/` with the name `multiomics_intermediate.conf`.
3. Get the `.crt` and `.pem` files for both the certificate and the private key, and put them in the `config/nginx/certificates` folder.
4. Edit the `multiomics_intermediate.conf` file that we pasted in point 2, and uncomment the lines where the `.crt` and `.pem` files must be specified.
5. Edit the `docker-compose.yml` file so that the `nginx` service exposes both ports 8000 and 443. You also need to add the `certificates` folder to the `volumes` section. It should look something like this:

   ```yaml
   # ...
   nginx:
     image: nginx:1.23.3
     ports:
       - 80:8000
       - 443:443
     # ...
     volumes:
       # ...
       - ./config/nginx/certificates:/etc/nginx/certificates
   ```

6. Redo the deployment with Docker.
Django's official documentation provides a configuration checklist that must be satisfied in the production file `settings_prod.py`. To verify that everything is fulfilled, you can execute the following command once the server is up (running it inside the container is necessary because several required environment variables are set in `docker-compose.yml`):

```bash
docker container exec modulector_backend python3 manage.py check --deploy --settings ModulectorBackend.settings_prod
```
Otherwise, you can set all the mandatory variables found in `settings_prod.py` and run the check directly, without the need to bring up any service:

```bash
python3 manage.py check --deploy --settings ModulectorBackend.settings_prod
```
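A minimal sketch of that second option, assuming only the variables listed earlier in this guide (check `settings_prod.py` for the full set it actually requires):

```bash
# Values here mirror the defaults documented above; replace them with yours.
export POSTGRES_USERNAME=modulector
export POSTGRES_PASSWORD=modulector
export POSTGRES_PORT=5432
export POSTGRES_DB=modulector
export ALLOWED_HOSTS=localhost,127.0.0.1
python3 manage.py check --deploy --settings ModulectorBackend.settings_prod
```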
If the configuration of the `docker-compose.yml` file has changed, you can apply the changes without stopping the services by running the `docker compose restart` command.
If you want to stop all services, execute `docker compose down`.
To check the status of the different services, you can run:

```bash
docker service logs <service's name>
```

Where `<service's name>` can be `nginx_modulector`, `web_modulector`, or `db_modulector`. Note that `docker service logs` applies to Docker Swarm deployments; with plain Docker Compose, use `docker compose logs <service's name>` instead.
In order to create a database dump, you can execute the following command:

```bash
docker exec -t [name of DB container] pg_dump [db name] --no-owner -U modulector | gzip > modulector.sql.gz
```

That command will create a compressed file with the database dump inside.
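For example, assuming the DB container is named `db_modulector` and the default database name from this guide:

```bash
# Hedged example: container and database names are the defaults assumed above.
docker exec -t db_modulector pg_dump modulector --no-owner -U modulector | gzip > modulector.sql.gz
```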
You can set up the Modulector DB in two ways: by restoring an existing database dump (described next) or by regenerating the data manually (described afterwards).

1. Start up a PostgreSQL service. You can use the same service listed in the `docker-compose.dev.yml` file: run `docker compose -f docker-compose.dev.yml up -d db_modulector` to start the DB service.
2. Optional but recommended (you can omit this step if it is the first time you are deploying Modulector): due to major changes, the import will probably throw several errors. To prevent that, perform the following steps before importing:
   - Drop the database: `docker exec -i [name of the DB container] psql postgres -U modulector -c "DROP DATABASE modulector;"`
   - Create an empty database: `docker exec -i [name of the DB container] psql postgres -U modulector -c "CREATE DATABASE modulector;"`
3. Download the `.sql.gz` file from the Modulector releases page, or use your own export file.
4. Restore the database: `zcat modulector.sql.gz | docker exec -i [name of the DB container] psql modulector -U modulector`. This command restores the database from a compressed dump; keep in mind that the process can take several minutes to finish.
   - NOTE: if you are working on Windows, the command must be executed from Git Bash or WSL.
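Putting the whole restore together, here is a hedged end-to-end sketch, assuming the DB container is named `db_modulector` (check the real name with `docker container ps`):

```bash
# Drop and recreate the database, then restore it from the compressed dump.
docker exec -i db_modulector psql postgres -U modulector -c "DROP DATABASE modulector;"
docker exec -i db_modulector psql postgres -U modulector -c "CREATE DATABASE modulector;"
zcat modulector.sql.gz | docker exec -i db_modulector psql modulector -U modulector
```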
To regenerate the data manually:

1. Download the files for the mirDIP database (version 5.2), the Illumina 'Infinium MethylationEPIC 2.0' array, and the Human MicroRNA Disease Database (HMDD) v4.0. The files can be freely downloaded from their respective web pages.
   - For the mirDIP database:
     1. Go to the mirDIP download web page and download the file called "mirDIPweb/mirDIP Unidirectional search ver. 5.2".
     2. Unzip the file.
     3. Find the file called "mirDIP_Unidirectional_search.txt" and move it into the "modulector/files/" directory.
   - For the EPIC Methylation array:
     1. Go to the Illumina product files web page and download the ZIP file called "Infinium MethylationEPIC v2.0 Product Files (ZIP Format)".
     2. Unzip the file.
     3. Among the unzipped files you will find one called "EPIC.csv". Move this file to the "modulector/files/" directory.
     - NOTE: the total size of both files is about 5 GB.
   - For the HMDD database:
     1. Go to the HMDD website and, from the Downloads tab, download the txt file from the option "The whole dataset of miRNA-disease association data". Use version 4.0.
     2. Rename the downloaded file to "disease_hmdd.txt" and move it to the "modulector/files/" directory.
   - For the miRBase database: this database is embedded, as it weighs only a few MB. Its data is processed in Django migrations during the execution of the `python3 manage.py migrate` command, so there are no manual steps to incorporate miRBase data into Modulector.
2. Start up a PostgreSQL service. You can use the same service listed in the `docker-compose.dev.yml` file.
3. Run `python3 manage.py migrate` to apply all the migrations (NOTE: this can take a long time to finish). A quick sanity check of the required files is sketched below.
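Before running the migrations, you can verify that the three downloaded files from step 1 are in place (a small sketch using the exact filenames listed above):

```bash
# Reports any of the required source files missing from modulector/files/.
for f in mirDIP_Unidirectional_search.txt EPIC.csv disease_hmdd.txt; do
    [ -f "modulector/files/$f" ] || echo "Missing: modulector/files/$f"
done
```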
If new versions of the databases used in Modulector are released and you want to update them, follow these steps:

- For mirDIP, HMDD, and the Illumina EPIC array, follow the same steps described in the Regenerating the data manually section, replacing the named files with the most recent versions published on their sites.
- For miRBase, follow the instructions below:
  1. Go to the Download section on the miRBase website.
  2. Download the files named `hairpin.fa` and `mature.fa` from the latest version of the database.
  3. Replace the files inside the `modulector/files/` directory with the ones downloaded in the previous step.
  4. Start up a PostgreSQL service. You can use the same service listed in the `docker-compose.dev.yml` file.
  5. Run the command `python3 manage.py migrate` to apply all the migrations (NOTE: this can take a long time to finish).

Note: these updates will work correctly as long as the format of the data in the source files is maintained.
When we notify users about updates to the PubMeds they are subscribed to, we interact with an NCBI API that uses an API key. By default, we left a random API_KEY pre-configured in our settings file; you should replace it with your own.
For cron jobs we use the following library. In our settings file, we configured our cron jobs inside the `CRONJOBS` list.
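As a hedged illustration of what such an entry could look like (assuming a django-crontab-style `CRONJOBS` setting; the schedule and module path below are hypothetical, not the project's actual job):

```python
# Settings file sketch: each tuple is (cron schedule, dotted path to the job).
CRONJOBS = [
    ('0 3 * * *', 'modulector.cron.check_pubmed_updates'),  # hypothetical job, daily at 03:00
]
```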