Mastering Docker: Unleash the Full Potential of Containerization, Part-3:
"Everything you need to know about Docker".
Introduction:
Hey there, I'm Vishwa! A DevOps and open-source enthusiast, currently learning Docker with the help of open-source resources. I completed several tutorials on containerization and also did some hands-on practice. I have already released Parts One and Two of this Mastering Docker series. Here, we are going to learn how to create Docker Compose files, what a Docker private registry is, and what Docker volumes are. Let's start reading.
Docker Compose:
- It is a tool for defining and running multiple Docker containers together.
- It uses a YAML file that captures the same options you would otherwise pass to individual docker run commands.
- If Docker Desktop is installed, Docker Compose usually comes bundled with it; on Linux it may need to be installed separately.
Example:
version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
  mongo-express:
    image: mongo-express
    restart: always # fixes MongoNetworkError when mongodb is not ready when mongo-express starts
    ports:
      - 8080:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
- The above file defines both the MongoDB and Mongo Express services, replacing the separate docker run commands for each container.
- The best part is that we do not need to create a network for the containers to talk to each other; docker-compose takes care of creating one.
Note: Indentation in the YAML file is very important.
- $ docker-compose -f mongo.yaml up -d
This creates a dedicated network and starts all the containers inside it. Here, "-f" specifies the Compose file to use, "mongo.yaml" is the file name, "up" creates and starts the containers, and "-d" runs them in detached mode.
- $ docker-compose -f mongo.yaml down
The above command stops and removes all the containers and also removes the network that was created.
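The whole lifecycle can be sketched as a short shell session (a sketch, not part of the original tutorial; it assumes Docker and Docker Compose are installed, the daemon is running, and the file is named mongo.yaml as in the example above):

```shell
# Start both services in the background; Compose creates a network for them
docker-compose -f mongo.yaml up -d

# List the containers managed by this Compose file
docker-compose -f mongo.yaml ps

# Follow the logs of a single service (Ctrl+C to stop following)
docker-compose -f mongo.yaml logs -f mongodb

# Stop and remove the containers and the network Compose created
docker-compose -f mongo.yaml down
```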
Docker Volumes:
When do we need it?
- Containers have only a virtual file system, so when a container is removed and re-created, the data it previously held is lost. To overcome this we use Docker volumes, because when we work with databases, data persistence is very important.
What is it?
- A volume means that a directory on the host file system (physical) is mounted into the virtual file system of a container. If we change something in the container's file system, the changes are automatically reflected on the host too. When a container starts from a fresh state, it picks up the existing data from the host.
Three Types:
- A Docker volume is created by passing the "-v" argument to the docker run command.
1. Host Volumes:
- $ docker run -v /home/mount/data:/var/lib/mysql/data mysql
Here we decide where on the host file system (/home/mount/data) the container path should be mounted. (The image name, mysql in this example, always comes last.)
2. Anonymous Volumes:
- $ docker run -v /var/lib/mysql/data mysql
These are called anonymous volumes because we don't give them a name or a host path. Docker generates a directory on the host for each container and mounts it at the given container path.
3. Named Volumes:
- $ docker run -v name:/var/lib/mysql/data mysql
Here "name" refers to a volume managed by Docker on the host, so we can refer to it by name. Named volumes are the type most commonly used in production.
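The three forms can be compared side by side (a sketch; it assumes Docker is installed and uses the mysql image purely as an example):

```shell
# 1. Host volume: you choose the host path explicitly
docker run -d -v /home/mount/data:/var/lib/mysql/data mysql

# 2. Anonymous volume: Docker generates a host directory for you
#    (on Linux it lives under /var/lib/docker/volumes/<hash>/_data)
docker run -d -v /var/lib/mysql/data mysql

# 3. Named volume: Docker manages the host directory; you reference it by name
docker run -d -v name:/var/lib/mysql/data mysql

# List all volumes Docker knows about; anonymous ones show a long hash as the name
docker volume ls
```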
Docker volume in Docker-Compose:
Example:
version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - mongo-data:/data/db # named volume - "mongo-data"
  mongo-express:
    image: mongo-express
    restart: always # fixes MongoNetworkError when mongodb is not ready when mongo-express starts
    ports:
      - 8080:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb
volumes:
  mongo-data:
    driver: local # stores the data on the host file system using the local driver
- In the above file, I used a named volume (mongo-data) for the MongoDB data.
- At the end of the file, list all the volumes used by the services under the top-level volumes key.
- At the service level, a volume maps a named volume to a path inside the container; the same named volume can even be mounted into different containers, at different container paths if needed.
- This way, data won't get lost when the container is stopped, restarted, or re-created.
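To see where a named volume actually lives on the host, we can ask Docker directly (a sketch; it assumes the Compose file above has been brought up, and note that Compose prefixes the volume name with the project name, shown here as the placeholder <project>):

```shell
# List volumes; the Compose-managed one appears as <project>_mongo-data
docker volume ls

# Show details, including the Mountpoint on the host file system
docker volume inspect <project>_mongo-data
```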
Docker Private Registry:
- In Amazon ECR, we create a repository that can hold multiple versions of the same image; this is how it works.
- One repository can hold up to 1,000 different versions (tags) of the same image.
- $ docker login
Logging in to the registry is the first step. With ECR we don't run docker login directly; the AWS CLI obtains a token and performs the docker login in the background. To log in to AWS, we need the AWS CLI installed and credentials configured. If we push images from Jenkins, we need to give Jenkins the credentials.
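With AWS CLI v2, the ECR login step typically looks like this (a sketch; the region and account ID are placeholders you must replace, and it assumes Docker is installed and AWS credentials are configured):

```shell
# Fetch a temporary ECR token and pipe it into docker login
aws ecr get-login-password --region eu-central-1 \
  | docker login --username AWS --password-stdin \
    123456789012.dkr.ecr.eu-central-1.amazonaws.com
```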
Image Naming in Docker Registries:
- Registry Domain/ImageName:Tag
Difference between Docker Hub & ECR:
- On Docker Hub, we can use shorthand commands.
Example:
- $ docker pull mongo:4.2 instead of $ docker pull docker.io/library/mongo:4.2
- With AWS ECR, the full form is required: $ docker pull registryDomain/imageName:tag
Example:
- $ docker pull 520697001743.dkr.ecr.eu-central-1.amazonaws.com/myapp:1.0
- Before pushing, we need to tag the image (i.e., give it a new name that includes the registry domain), because a plain docker push assumes we are pushing to Docker Hub.
So use:
- $ docker tag imageName:tag registryDomain/imageName:tag
Example:
- $ docker tag myapp:1.0 66457403868.......:1.0
Then run $ docker images to check that the renamed image is available, and push it:
- $ docker push registryDomain/imageName:1.0
Here, use the full renamed image name, including the registry domain.
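Putting the steps together, a push to ECR might look like this (a sketch; the account ID, region, and repository name are placeholders, and it assumes you have already logged in to the registry):

```shell
# Retag the local image with the full registry path
docker tag myapp:1.0 123456789012.dkr.ecr.eu-central-1.amazonaws.com/myapp:1.0

# Confirm the retagged image exists locally
docker images

# Push using the full name; Docker routes it to ECR instead of Docker Hub
docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/myapp:1.0
```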
That's it for the Docker private registry.
Conclusion:
Thanks for reading, folks. You can find the links to the first two parts in the Introduction section. I used the notes I took while following the tutorials.
Hope this Mastering Docker article helps you in your learning journey.
Do share, like, and comment with your thoughts. It would help my future technical writing. Thanks again!