- docker version → to get version
- docker images → to get all images installed on your local machine
- docker ps → to get all running containers
- docker ps -a → to get all containers including the ones that are stopped
- docker pull (image) → fetches the image from Docker Hub
- docker run (image) → looks for the image locally and runs it in a container. If it is not found locally, it is pulled from Docker Hub and then run. Every time you write docker run, a new container starts.
- docker run -p 3000:6379 redis → binds the Redis container's port 6379 to port 3000 on the host, so it can now be reached on the local machine at that port. Port binding avoids conflicts between containers that use the same internal port.
- docker stop (container id) → stop container
- docker start (container id) → to resume stopped container
- docker logs (container id) → fetches the logs of a container
- docker run -d --name my-container mongo → runs the mongo image in detached mode in a container named my-container
- docker exec -it (container id) /bin/(bash or sh) → opens an interactive shell inside the container so you can browse its virtual file system. Type exit to leave the shell.
- docker network create (network name) → creates an isolated network where containers can communicate with just names
- docker network ls → list all networks
- docker run --network mongo-network mongo → runs a mongo container inside the network called mongo-network (a full workflow example follows this list)
- docker rmi (image id) → deletes image
- docker rm (container id) → delete a container
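A minimal workflow tying these commands together; the container name, port, and network name here are just examples:
docker network create mongo-network
docker run -d -p 27017:27017 --name mongodb --network mongo-network mongo
docker logs mongodb
docker exec -it mongodb /bin/bash
docker stop mongodb
docker rm mongodb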
Docker Compose is used to run multiple containers that need to interact with each other. It is better than typing a separate command for each container every single time, and it also creates a default network for all the services defined in the file.
Example➖
version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO..._USERNAME=admin
  mongo-express:
    image: mongo-express
    ports:
      - 8080:8080
    environment:
      - ME_CONFIG_MONGODB_A...
- docker compose -f (config.yaml) up → starts all containers defined in the file and automatically creates a default network for them
- docker compose -f (config.yaml) down → stops and removes all containers defined in the file, along with the default network
Logs of all containers may get interleaved since they all start at the same time (the logs command below lets you follow a single service instead).
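To follow the logs of just one service (the file and service names here match the example above):
docker compose -f config.yaml logs -f mongodb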
A Dockerfile is a blueprint for building images. It is used to create an image out of the application you have made.
Example➖
FROM node
ENV MONGO_DB_USERNAME=admin \
    MONGO_DB_PWD=password
RUN mkdir -p /home/app
COPY . /home/app
CMD ["node", "server.js"]
- FROM node pulls node and uses it as the base image for your Node.js app.
- You can set ENV variables in the Dockerfile as well, but they become difficult to change later, so it is better to set them in docker compose.
- RUN executes a Linux command inside the image being built. Here it creates a directory inside the container, i.e. changes made by RUN do not affect the local host machine.
- COPY runs on the local host and copies files from the host into the image (here the whole project into /home/app).
- CMD is the entrypoint command that runs the app when the container starts. Unlike RUN, it is used only once.
- docker build -t (image name):(version/tag) (path to the folder containing the Dockerfile) → builds an image with the specified name and tag using the Dockerfile (a concrete example follows)
- Whenever you change the Dockerfile, you need to rebuild the image.
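For instance, assuming the Dockerfile sits in the current directory and we call the image my-app (name and tag are just examples):
docker build -t my-app:1.0 .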
Pushing your image to some private repository like AWS ECR.
- First you need to authenticate yourself to push an image. After creating a repository through the AWS ECR UI, you can follow its steps to log in successfully on your local machine. For this you need to install the AWS CLI and configure your credentials.
- After that you have to tag the image as private-repository-domain/image-name, since that is how Docker recognizes that you are pushing to a private registry and not to Docker Hub.
- Finally run docker push (new image name including the registry domain) to store your image. In AWS ECR each repository can store only one image, along with its many versions/tags (a sample push sequence follows).
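Roughly what the full sequence could look like; the account ID (123456789012), region, image name, and tag below are placeholders:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-app:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0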
Now we can just pull all the images needed to deploy the app on a server. To pull our app image from the private repo we need to log in on the server as well; the images from Docker Hub can be pulled freely. We will create a docker compose file and start all the services. The app image, mongo and mongo-express together form a docker network this time, so the Node app can reach MongoDB using the service name instead of localhost and a mapped port (see the connection URI example after the file).
version: '3'
services:
  my-app:
    image: private-repo-name/image-name:tag
    ports:
      - 3000:3000
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO..._USERNAME=admin
  mongo-express:
    image: mongo-express
    ports:
      - 8080:8080
    environment:
      - ME_CONFIG_MONGODB_A...
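Inside this network the Node app could, for example, connect to MongoDB with a URI like the following (the credentials here are placeholders):
mongodb://admin:password@mongodb:27017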
So building an image and pushing it to a repository is something that a CI tool like Jenkins will do.
Volumes are used for data persistence. We can replicate data stored in a container onto our local machine, so the data is not lost the next time we restart the container. There are 3 types➖
- Host volumes → you provide the host path where the data will be stored: host-path:container-path
- Anonymous volumes → no host path is given; Docker picks a path on the host automatically: container-path
- 🌟 Named volumes → you give the volume a name and Docker provides a path on the local machine automatically depending on the OS: name:container-path (run-command examples follow)
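Roughly how the three forms look with docker run -v (the paths and names here are just examples):
docker run -v /home/user/mongo-data:/data/db mongo → host volume
docker run -v /data/db mongo → anonymous volume
docker run -v mongo-data:/data/db mongo → named volume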
version: '3'
services:
  my-app:
    ....
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO..._USERNAME=admin
    volumes:
      - mongo-data:/data/db
  mongo-express:
    ....
volumes:
  mongo-data:
    driver: local
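To check where the named volume actually lives on the host (the volume name matches the compose file above; note that compose may prefix it with the project name):
docker volume ls
docker volume inspect mongo-data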