Understanding the Benefits of Containers
Introduction to Containers
Once you’ve read this section, you’ll be able to identify the issues that come up in traditional software development, define a container and describe its characteristics, list container benefits and challenges, and name popular container vendors.

Cloud-native is the newest application development approach for building scalable, dynamic, hybrid cloud-friendly software. Containers are a significant part of that approach. For example, think about how shipping containers have revolutionized cargo transport: shipping companies fill standard-sized containers with various goods instead of packing a container with whatever items fit. This makes it easier to transport goods around the world and to keep track of inventory. In addition, containers make shipping more efficient by cutting down on wasted space and time.

Logisticians choose among transport options such as ships, planes, trains, and trucks that can carry containers, based on the size of the container and when it needs to arrive. Container technology is the digital equivalent: containers solve the problem of making software portable so applications can run on multiple platforms.

A container is a standard unit of software that encapsulates the application code, runtime, system tools, system libraries, and settings necessary for programmers to build, ship, and run applications efficiently. Operations and underlying infrastructure issues are no longer blockers. You can quickly move applications from your laptop to a testing environment, or from a staging environment to a production environment, and always know that your application will work correctly. A container can be small, just tens of megabytes, and developers can start containerized applications almost instantly; these capabilities serve as the foundation for today’s standard development and deployment solutions.

In traditional computing environments, developers can’t isolate an app and allocate or designate specific storage and memory resources for apps on physical servers. As a result, servers are often underutilized or overutilized, leading to poor utilization and a poor return on investment. Traditional deployments require comprehensive resource provisioning and incur expensive maintenance costs. The limits of physical servers can constrain application performance during peak workloads. Applications are not portable across multiple environments and operating systems. Implementing hardware for resiliency is often time-consuming, complex, and expensive. As a result, traditional on-premises IT environments have limited scalability. Finally, when distributing software to multiple platforms and resources in traditional environments, automation is challenging. Containers help you overcome these challenges.

Container engines virtualize the operating system and are responsible for running containers. Platform-independent containers offer lightweight, fast, isolated, portable, and secure environments for running apps and often require less memory space than virtual machines. The binaries, libraries, and other components inside a container enable the app to run, and a single device can host multiple containers. As a result, containers help programmers deploy code into applications quickly.

Containers can be deployed on any cloud, desktop, or virtualized infrastructure. They are OS-independent and run on Windows, Linux, or macOS. Containers are also language-platform independent: you can use any programming language.

Containers let you quickly create applications using automation, keep deployment time and costs down, improve resource utilization (memory and CPU), and port applications across different environments. They also support next-generation applications like microservices.
Containers have several advantages but also some drawbacks. First, because containers share the server’s operating system, malicious activity that affects that operating system can compromise the security of every container on the server. Second, developers may become overwhelmed when managing thousands of containers. Finally, converting monolithic legacy applications into microservices is a complex process, and developers may have difficulty right-sizing containers for specific scenarios.
Let’s examine some of the most popular container platforms. Docker is the most popular container platform today. Podman, a daemon-less container engine, offers more security than Docker, but it is not as widely used. Developers often prefer LXC for data-intensive applications and operations. Vagrant provides the highest levels of isolation on the running physical machine.

In this section, you learned about the rise of containers and how organizations use them to overcome challenges related to isolation, utilization, provisioning, performance, and more. A container is a standard unit of software that encapsulates everything needed to build, ship, and run applications. Containers run on top of your operating system. They are platform-independent, lower deployment times, improve utilization rates, automate processes, and support next-gen applications (microservices). Unfortunately, these benefits come at a cost: management, legacy project migration, and right-sizing can be significant challenges. Major container vendors include Docker, Podman, LXC, and Vagrant.
Introduction to Docker
After reading this section, you will be able to: Define what a Docker container is, describe how Docker uses virtualization technology to create a container, list the benefits of containers and their potential drawbacks, and identify strategies for overcoming the challenges of using containers.

Docker is an open platform for developing, shipping, and running applications as containers. Docker became popular with developers because of its simple architecture, massive scalability, and portability on multiple platforms, environments, and locations.

Docker isolates applications from the underlying hardware, operating system, and container runtime.

Docker is written in the Go programming language and uses Linux kernel features to deliver its functionality. Docker uses namespaces to provide the isolated workspace called a container: it creates a set of namespaces for every container, and each aspect of the container runs in a separate namespace.
Docker inspired the development of complementary tools such as the Docker CLI, Docker Compose, and Prometheus, as well as various plugins for storage. It also led to the creation of orchestration technologies such as Docker Swarm and Kubernetes and development methodologies based on microservices and serverless computing.
Docker offers the following benefits:
- Isolated and consistent environments result in stable application deployments.
- Deployments occur in seconds because Docker images are small, reusable, and can be used across multiple projects.
- Docker’s automation capabilities help eliminate errors, simplifying the maintenance cycle.
- Docker supports Agile and CI/CD DevOps practices.
- Docker’s easy versioning speeds up testing, rollbacks, and redeployments.
- Docker helps segment applications for easy refresh, cleanup, and repair.
- Developers collaborate to resolve issues faster and scale containers when needed.
- Images are portable across platforms, giving developers and IT teams greater flexibility.
Ultimately, this leads to less downtime in the field and faster resolution of errors, a win-win for everyone involved.
Docker is not a good fit for applications that require high performance or security, are built around monolithic architectures, use rich GUI features, or perform standard desktop or limited functions.
In this section, you learned that Docker is an open platform for developing, shipping, and running applications as containers. Docker speeds up the deployment process across multiple environments and makes it easier to work together on code by providing everyone with a local copy of the project’s development environment. In addition, once you create your app within a Docker container, you can deploy that container to another location — though this may not be necessary for all projects. Finally, since Docker containers are not tied to any specific underlying infrastructure or virtualization software, they can be used anywhere there is an instance of Docker running.
Building and Running Container Images
After you’ve read this section, you will be able to build your own container images, launch a container using an existing image, and use some of Docker’s most essential commands.

The development of a running container proceeds in three steps, summarized below. First, create a Dockerfile, a text file containing all the commands needed to build an image. Next, use this file to build an image, a read-only template with instructions for creating your container. Finally, use the image to create a running container, a live instance of the image.

## Steps to create and run containers:
1. Create a Dockerfile
2. Use the Dockerfile to create a container image
3. Use the container image to create a running container
Use the Dockerfile to create a running container. This sample Dockerfile contains the instructions FROM and CMD. FROM defines the base image, and CMD prints the words “Hello, world!!” on the terminal.
FROM alpine
CMD ["echo", "Hello, world!!"]
The docker build command builds an image from the Dockerfile in the current directory, tagged with a repository name and version. The output includes the messages “Sending build context to Docker daemon”, “Successfully built <image id>”, and “Successfully tagged my-app:v1”.
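For example, assuming the Dockerfile above is in the current directory and you want to name the image my-app with the version tag v1, the command is:
docker build . -t my-app:v1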

To verify that the image was created, run the “docker images” command. The output displays the repository (my-app), tag (v1), image ID, creation date, and image size.
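Continuing the example, the output of the docker images command would look something like this (the image ID and size shown are illustrative placeholders):
docker images
REPOSITORY   TAG   IMAGE ID       CREATED          SIZE
my-app       v1    1a2b3c4d5e6f   10 seconds ago   5.6MB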

To run an instance of the application, use the run command with the name and tag of the container image. The application prints, “Hello, world!!”
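For instance, running the image built above prints the greeting defined by the Dockerfile’s CMD instruction:
docker run my-app:v1
Hello, world!!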

Run the docker ps -a command, which shows details on the container you created.
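For the container above, the output is along these lines (the container ID, timestamps, and auto-generated name are illustrative):
docker ps -a
CONTAINER ID   IMAGE       COMMAND                CREATED          STATUS                      PORTS   NAMES
1a2b3c4d5e6f   my-app:v1   "echo 'Hello, world…"   30 seconds ago   Exited (0) 29 seconds ago           brave_noether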

The build command creates images from a Dockerfile. The images command lists all the images, their tags and repositories, and their sizes. The run command creates and runs a container from an image. The push command stores images in a configured registry, and the pull command retrieves images from a configured registry.

In this section, you learned that the build command is used with a Dockerfile to build an image, the run command is used with an image to create a running container, and some of the most common Docker commands are build, images, run, pull, and push.
Docker Objects
After reading this section, you will know how to identify and name Docker objects, list the commands used in a Dockerfile, describe the naming format of container images, and explain how Docker uses networks, storage, and plugins.

Docker is a tool that manages the building, shipping, and running of applications inside software containers. It consists of objects such as Dockerfiles, images, containers, networks, and storage volumes, as well as other entities such as plugins and add-ons.

A Dockerfile is a text file that contains the instructions needed to create an image. You can create a Dockerfile using any editor from the console or terminal. Here are several of the essential instructions you can include in your Dockerfile.
FROM
A Dockerfile must always begin with a FROM instruction that defines a base image. The most common base image is an operating system or a specific language like Go or Node.js.
RUN
The RUN instruction executes commands in a new layer on top of the current image and commits the result.
CMD
The CMD instruction sets the default command for execution. Therefore, a Dockerfile should have only one CMD instruction. If a Dockerfile has several CMD instructions, only the last one will take effect.
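To see these instructions working together, here is a minimal sketch of a Dockerfile; the node:18-alpine base image and the specific commands are illustrative choices, not requirements:
# FROM must come first: build on the official Node.js Alpine base image
FROM node:18-alpine
# RUN executes at build time and adds a new layer to the image
RUN mkdir -p /app
# CMD sets the single default command executed when the container starts
CMD ["node", "--version"]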
Let’s learn about Docker images.
A Docker image is a read-only template with instructions for creating a Docker container. The Dockerfile provides instructions to build the image. Each Docker instruction creates a new layer in the image. Only changed layers are rebuilt when you change the Dockerfile and rebuild the image. Because the image consists of read-only layers, it saves disk space and network bandwidth when images are sent and received. When you instantiate this image, you get a running container. At this point, a writeable container layer is placed on top of the read-only layers. The writeable layer is needed because containers, unlike images, are not immutable.

Let’s learn how images are named.
When naming images, the name has three parts: the hostname, repository, and tag.
hostname/repository:tag
hostname: The hostname identifies the image registry.
repository: A repository is a group of related container images.
tag: The tag provides information about an image’s specific version or variant.
Consider the image name docker.io/ubuntu:18.04:
docker.io/ubuntu:18.04
hostname: The hostname docker.io refers to the Docker Hub registry. When using the Docker CLI, you can exclude the docker.io hostname.
repository: The repository name ubuntu indicates an Ubuntu image.
tag: Finally, the tag, shown here as 18.04, represents the installed Ubuntu version.
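Because the Docker CLI lets you omit the docker.io hostname, the following two commands pull the same image:
docker pull docker.io/ubuntu:18.04
docker pull ubuntu:18.04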
Let’s learn about Docker containers.
A Docker container is an instance of an image. You can use the Docker API or CLI to create, start, stop, or delete a container. You can connect a container to multiple networks and attach storage to it. You can also create a new image based on its current state. Docker keeps containers well isolated from each other and from their host machine.
When using Docker, networks help isolate a container’s communications. Data doesn’t persist when the container stops running, but you can use volumes and bind mounts to persist data even after a container stops. Plugins, such as storage plugins, provide the ability to connect to external storage platforms.
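As a brief sketch of persisting data with a named volume (the volume name app-data and the mount path /data are hypothetical examples):
# Create a named volume managed by Docker
docker volume create app-data
# Mount the volume at /data inside the container; data written there outlives the container
docker run -v app-data:/data my-app:v1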
In this section, you learned that Docker comprises several objects, including Dockerfiles, images, containers, networks, storage volumes, and other entities such as plugins and add-ons. The essential Dockerfile instructions are FROM, RUN, and CMD. A Docker container is an instance of an image, a running process with its own file system that can be started and stopped independently of other containers or the host system. To name an image, use the format <hostname>/<repository>:<tag>. Docker uses networks to isolate container communications. Docker uses volumes and bind mounts to persist data even after a container stops running. And plugins, such as storage plugins, provide the ability to connect to external storage platforms.
Docker Architecture
After reading this section, you will be able to identify the components of Docker’s architecture, explain the features of these components, and describe how containers are created with Docker.

The Docker platform enables you to manage applications as containers and provides a complete application environment. The Docker components include the Docker client, the Docker daemon, and the Docker Hub. Here is a high-level overview of how Docker works:
First, you will use the Docker command line interface or the REST API via the Docker client to send instructions to your Docker host server. Next, the host contains the daemon, known as dockerd. The daemon listens for Docker API requests or commands such as “docker run” and processes those commands. It then builds, runs, and distributes containers based on those instructions. Finally, it stores container images in a registry.
The Docker host manages images, containers, namespaces, networks, storage, plugins, and add-ons.

The Docker client lets you communicate with your Docker host, which can be local or remote. For example, you can run the Docker client and the daemon on the same system or connect to a remote daemon. And daemons can also talk to other daemons to manage services for Docker containers.

Docker stores and distributes images in a registry. A registry is either public, such as Docker Hub, which is accessible to everyone, or private. Enterprises usually opt for private registries for security reasons. Registries can be hosted by a third-party provider, such as IBM Cloud Container Registry, or self-hosted in private data centers or on the cloud.
Let’s learn about moving images into the registry.
First, developers build the images, using automation or a build pipeline, and push them to a registry, where they are stored. Then, other systems can pull those images.

Let’s examine the process in more detail. The Docker architecture consists of the client; the Docker host, including the Docker daemon; and the registry with its stored images. The containerization process begins when a user asks Docker to create an image. To start, you can use either an existing base image or a Dockerfile. A Dockerfile is a text document that contains all of the commands needed to create an image. To create a container image, issue the build command, which creates a named container image. Once you have created an image, you can use the push command to store it in a registry. When you issue the run command with the image name, the host first checks whether the image is available locally. If the image is unavailable on the host, the Docker client connects to the registry and pulls the image to the host. The daemon then creates a running container using that image.
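Expressed as CLI commands, that build, push, and run flow might look like this sketch, assuming my-app:v1 names an image you are permitted to push to your configured registry:
# Build an image from the Dockerfile in the current directory
docker build . -t my-app:v1
# Store the image in the configured registry
docker push my-app:v1
# Create and start a container; the image is pulled first if it is not available locally
docker run my-app:v1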

In this section, you learned that: Docker architecture consists of a client, host, and registry. The client interacts with the host using commands and REST APIs. The Docker host includes the daemon, commonly called dockerd. The Docker host also manages images, containers, namespaces, networks, storage, plugins, and add-ons, and containerization is the process used to build, push, and run an image to create a running container.
At this point, you know:
- A container is a standardized unit of software that includes everything needed to build, ship, and run applications.
- Containers lower deployment time and cost, improve utilization, automate processes, and support next-generation applications. Major container vendors include Docker, Podman, LXC, and Vagrant.
- Docker is an open platform for developing, shipping, and running applications.
- Docker containers may not be the best choice for applications based on monolithic architecture or applications that require high performance or security.
- The Docker architecture consists of the Docker client, which is the program that runs on your computer and enables you to run containers; the Docker host, which runs on your server and allows you to deploy those containers; and a container registry, where images are stored so that they can be reused.
- The Docker host contains the Docker daemon and a wide array of objects, including images, containers, networks, storage volumes, and other things like plugins and add-ons.
- Docker uses networks to isolate containers.
- Docker uses volumes and bind mounts to help you persist data even after a container stops running.
- Plugins like storage plugins allow you to connect to external storage platforms.
Cheat Sheet: Docker CLI
- curl localhost: Pings the application.
- docker build: Builds an image from a Dockerfile.
- docker build . -t: Builds the image and tags it.
- docker CLI: Starts the Docker command line interface.
- docker container rm: Removes a container.
- docker images: Lists the images.
- docker ps: Lists the containers.
- docker ps -a: Lists the containers that ran and exited successfully.
- docker pull: Pulls the latest image or repository from a registry.
- docker push: Pushes an image or a repository to a registry.
- docker run: Runs a command in a new container.
- docker run -p: Runs the container, publishing its ports.
- docker stop: Stops one or more running containers.
- docker stop $(docker ps -q): Stops all running containers.
- docker tag: Creates a tag for a target image that refers to a source image.
- docker --version: Displays the version of the Docker CLI.
- exit: Closes the terminal session.
- export MY_NAMESPACE: Exports a namespace as an environment variable.
- git clone: Clones the git repository that contains the needed artifacts.
- ls: Lists the contents of the directory to see the artifacts.
Glossary: Containers Basics
Agile: An iterative approach to project management and software development that helps teams deliver value to their customers faster and with fewer issues.
Client-server architecture: A distributed application structure that partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.
Container: Powered by the containerization engine, a container is a standard unit of software that encapsulates the application code, runtime, system tools, system libraries, and settings necessary for programmers to efficiently build, ship, and run applications.
Container registry: Used for the storage and distribution of named container images. While many features can be built on top of a registry, its most basic functions are to store and retrieve images.
CI/CD pipelines: A continuous integration and continuous deployment (CI/CD) pipeline is a series of steps performed to deliver a new version of software. CI/CD pipelines are a practice focused on improving software delivery throughout the software development life cycle via automation.
Cloud native: A cloud-native application is a program designed for a cloud computing architecture. These applications are run and hosted in the cloud and are designed to capitalize on the inherent characteristics of a cloud computing software delivery model.
Daemon-less: A container runtime that does not run any specific program (daemon) to create objects such as images, containers, networks, and volumes.
DevOps: A set of practices, tools, and a cultural philosophy that automate and integrate the processes between software development and IT teams.
Docker: An open container platform for developing, shipping, and running applications in containers.
Dockerfile: A text document that contains all the commands you would normally execute manually to build a Docker image. Docker can build images automatically by reading the instructions from a Dockerfile.
Docker client: The primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.
Docker Command Line Interface (CLI): The Docker client provides a command line interface (CLI) that allows you to issue build, run, and stop commands to a Docker daemon.
Docker daemon (dockerd): Creates and manages Docker objects, such as images, containers, networks, and volumes.
Docker Hub: The world's easiest way to create, manage, and deliver your team's container applications.
Docker localhost: Docker provides a host network that lets containers share your host’s networking stack. With this approach, localhost in a container resolves to the physical host instead of the container itself.
Docker remote host: A machine, inside or outside your local network, that is running a Docker Engine and has ports exposed for querying the Engine API.
Docker networks: Help isolate container communications.
Docker plugins: Plugins, such as storage plugins, provide the ability to connect to external storage platforms.
Docker storage: Docker uses volumes and bind mounts to persist data even after a running container is stopped.
LXC (LinuX Containers): An OS-level virtualization technology that allows the creation and running of multiple isolated Linux virtual environments (VEs) on a single control host.
Image: An immutable file that contains the source code, libraries, and dependencies necessary for an application to run. Images are templates or blueprints for a container.
Immutability: Images are read-only; if you change an image, you create a new image.
Microservices: A cloud-native architectural approach in which a single application contains many loosely coupled and independently deployable smaller components or services.
Namespace: A Linux namespace is a Linux kernel feature that isolates and virtualizes system resources. Processes restricted to a namespace can only interact with resources or processes that are part of the same namespace. Namespaces are an important part of Docker’s isolation model. Namespaces exist for each type of resource, including networking, storage, processes, hostname control, and others.
Operating system virtualization: OS-level virtualization is an operating system paradigm in which the kernel allows the existence of multiple isolated user-space instances, called containers, zones, virtual private servers, partitions, virtual environments, virtual kernels, or jails.
Private registry: Restricts access to images so that only authorized users can view and use them.
REST API: A REST API (also known as a RESTful API) is an application programming interface (API or web API) that conforms to the constraints of the REST architectural style and allows interaction with RESTful web services.
Registry: A hosted service containing repositories of images, which responds to the Registry API.
Repository: A set of Docker images. A repository can be shared by pushing it to a registry server. The different images in the repository can be labeled using tags.
Server virtualization: The process of dividing a physical server into multiple unique and isolated virtual servers by means of a software application. Each virtual server can run its own operating system independently.
Serverless: A cloud-native development model that allows developers to build and run applications without having to manage servers.
Tag: A label applied to a Docker image in a repository. Tags are how various images in a repository are distinguished from each other.
Because of the explosive growth of new features and companies that utilize Docker containers, we can safely say that this is indeed one of the top trending cloud computing topics today. Although Docker is relatively new and still lacks some essential features, it is hard to argue against its potential and the impressive list of companies committed to using Docker in their development processes, including Amazon Web Services, Google, IBM, Microsoft Azure, Oracle, and Facebook. It will be exciting to see what developers will come up with regarding container management and security. There is no question that this technology is here to stay!
My other publications: https://ercindedeoglu.github.io/