Containerization

Explore the history of containerization technology, the benefits and advantages of the technology, and how it relates to virtualization.

What is containerization?

Containerization has become a major trend in software development as an alternative or companion to virtualization. It involves encapsulating or packaging up software code and all its dependencies so that it can run uniformly and consistently on any infrastructure. The technology is quickly maturing, resulting in measurable benefits for developers and operations teams as well as overall software infrastructure.

The term comes from freight shipping, where containerization is the method of transporting cargo by placing it in large, standardized containers. It became an important cargo-moving technique in the 20th century. Road-and-rail containers, sealed boxes of standard sizes, were used early in the century, but it was not until the 1960s that containerization became a major element in ocean shipping, made possible by new ships specifically designed to carry containers. Large and fast, container ships carry containers above deck as well as below, and their cargoes are easily loaded and unloaded, allowing more frequent trips and minimal lost time in port. Port facilities for rapid handling of containers are necessarily complex and expensive, and are usually justified only if there is large cargo traffic flowing both ways. A container may leave a factory by truck, be transferred to a railroad car, thence to a ship, and finally to a barge; the same transfers of uncontainerized cargo would add substantially to cost.

Containerization allows developers to create and deploy applications faster and more securely. With traditional methods, code is developed in a specific computing environment which, when transferred to a new location, often results in bugs and errors. For example, when a developer transfers code from a desktop computer to a virtual machine (VM) or from a Linux to a Windows operating system. Containerization eliminates this problem by bundling the application code together with the related configuration files, libraries, and dependencies required for it to run. This single package of software or “container” is abstracted away from the host operating system, and hence, it stands alone and becomes portable—able to run across any platform or cloud, free of issues.
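
As a concrete illustration, here is a minimal Dockerfile sketch for a hypothetical Python web service (the file names app.py and requirements.txt, and the base image tag, are assumptions for the example, not a prescribed setup). It pins a specific runtime and copies the application code and its declared dependencies into a single image:

    # Minimal Dockerfile sketch for a hypothetical Python service.
    # Pinning a specific base image keeps the runtime identical everywhere.
    FROM python:3.11-slim

    WORKDIR /app

    # Bundle the declared dependencies into the image.
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Bundle the application code itself.
    COPY app.py .

    # The command the container runs on start.
    CMD ["python", "app.py"]

Building this file with docker build -t my-service:1.0 . produces one self-contained package that behaves the same on a laptop, a VM, or a cloud host, provided a container runtime is available.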

The concept of containerization and process isolation is decades old, but the emergence of the open source Docker Engine in 2013, an industry standard for containers with simple developer tools and a universal packaging approach, accelerated the adoption of this technology. Research firm Gartner projected that more than 50% of companies would use container technology by 2020, and results from a late 2017 survey suggested that adoption was happening even faster, with 59% of adopters reporting improved application quality and fewer defects as a result.

History of Containerization

Modern container shipping celebrated its 50th anniversary in 2006. Almost from the first voyage, use of this method of transporting goods grew steadily, and in just five decades container ships came to carry about 60% of the value of goods shipped by sea.

The idea of using some type of shipping container was not completely novel. Boxes similar to modern containers had been used for combined rail and horse-drawn transport in England as early as 1792, and the US government used small standard-sized containers during the Second World War, which proved a quick and efficient means of unloading and distributing supplies. Then, in 1955, Malcom P. McLean, a trucking entrepreneur from North Carolina, USA, bought a steamship company with the idea of transporting entire truck trailers with their cargo still inside. He realized it would be much simpler and quicker to have one container that could be lifted from a vehicle directly onto a ship without first having to unload its contents.

His ideas were based on the theory that efficiency could be vastly improved through a system of “intermodalism”, in which the same container, with the same cargo, can be transported with minimum interruption via different transport modes during its journey. Containers could be moved seamlessly between ships, trucks and trains. This would simplify the whole logistical process and, eventually, implementing this idea led to a revolution in cargo transportation and international trade over the next 50 years.

Containerization with Docker

Containers are often referred to as “lightweight,” meaning they share the machine’s operating system kernel and do not carry the overhead of running a separate operating system for each application. Containers are inherently smaller than VMs and require less start-up time, allowing far more containers to run on the same compute capacity as a single VM. This drives higher server efficiencies and, in turn, reduces server and licensing costs.

Put simply, containerization allows applications to be “written once and run anywhere.” This portability is important in terms of the development process and vendor compatibility. It also offers other notable benefits, like fault isolation, ease of management and security, which are covered in the Benefits section below.

Why Containers?

Instead of virtualizing the hardware stack as with the virtual machines approach, containers virtualize at the operating system level, with multiple containers running atop the OS kernel directly. This means that containers are far more lightweight: they share the OS kernel, start much faster, and use a fraction of the memory compared to booting an entire OS.

There are many container formats available. Docker is a popular, open-source container format that is supported on Google Cloud Platform and by Google Kubernetes Engine.

Why Sandbox anyway?

Containers silo applications from each other unless you explicitly connect them. That means you don’t have to worry about conflicting dependencies or resource contention — you set explicit resource limits for each service. Importantly, it’s an additional layer of security since your applications aren’t running directly on the host operating system.
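
For example, with the Docker CLI you can attach a container only to a network you created and cap its resources at start time; the image, container, and network names below are hypothetical, while the flags themselves are standard docker run options:

    # Create an explicit network and run a hypothetical image on it,
    # with a hard memory cap, half a CPU core, and one published port.
    docker network create backend
    docker run -d --name orders-api \
      --memory=256m --cpus=0.5 \
      --network=backend -p 8080:8080 \
      orders-api:1.0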

Consistent Environment

Containers give developers the ability to create predictable environments that are isolated from other applications. Containers can also include software dependencies needed by the application, such as specific versions of programming language runtimes and other software libraries. From the developer’s perspective, all this is guaranteed to be consistent no matter where the application is ultimately deployed. All this translates to productivity: developers and IT Ops teams spend less time debugging and diagnosing differences in environments, and more time shipping new functionality for users. And it means fewer bugs since developers can now make assumptions in dev and test environments they can be sure will hold true in production.

Run Anywhere

Containers are able to run virtually anywhere, greatly easing development and deployment: on Linux, Windows, and Mac operating systems; on virtual machines or bare metal; on a developer’s machine or in data centers on-premises; and of course, in the public cloud. The widespread popularity of the Docker image format for containers further helps with portability. Wherever you want to run your software, you can use containers.
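
A typical portability workflow, sketched below with a hypothetical registry and image name, is to build an image once, push it to a registry, and then pull and run the identical image on any other host that has a container runtime:

    # Build once on a developer machine (registry and image names are hypothetical).
    docker build -t registry.example.com/team/my-service:1.0 .
    docker push registry.example.com/team/my-service:1.0

    # Pull and run the same image on any other Docker host:
    # a laptop, an on-premises server, or a cloud VM.
    docker pull registry.example.com/team/my-service:1.0
    docker run -d -p 8000:8000 registry.example.com/team/my-service:1.0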

Isolation

Containers virtualize CPU, memory, storage, and network resources at the OS level, providing developers with a sandboxed view of the OS that is logically isolated from other applications.

Compared with virtual machines, containers offer:

  • Consistent runtime environment
  • Application sandboxing
  • Small size on disk
  • Low overhead

Application containerization

Containers encapsulate an application as a single executable package of software that bundles application code together with all of the related configuration files, libraries, and dependencies required for it to run. Containerized applications are “isolated” in that they do not bundle in a copy of the operating system. Instead, an open source runtime engine (such as the Docker runtime engine) is installed on the host’s operating system and becomes the conduit for containers to share an operating system with other containers on the same computing system.

Other container layers, like common bins and libraries, can also be shared among multiple containers. This eliminates the overhead of running an operating system within each application and makes containers smaller in capacity and faster to start up, driving higher server efficiencies. The isolation of applications as containers also reduces the chance that malicious code present in one container will impact other containers or invade the host system.

The abstraction from the host operating system makes containerized applications portable and able to run uniformly and consistently across any platform or cloud. Containers can be easily moved from a desktop computer to a virtual machine (VM) or from a Linux to a Windows operating system, and they will run consistently on virtualized infrastructure or on traditional “bare metal” servers, either on premises or in the cloud. This ensures that software developers can continue using the tools and processes they are most comfortable with.

How to get started with containerization | InfoWorld

One can see why enterprises are rapidly adopting containerization as a superior approach to application development and management. Containerization allows developers to create and deploy applications faster and more securely, whether the application is a traditional monolith (a single-tiered software application) or a modular microservice (a collection of loosely coupled services). New cloud-based applications can be built from the ground up as containerized microservices, breaking a complex application into a series of smaller specialized and manageable services. Existing applications can be repackaged into containers (or containerized microservices) that use compute resources more efficiently.

Benefits

Containerization offers significant benefits to developers and development teams. Among these are the following:

  • Portability: A container creates an executable package of software that is abstracted away from (not tied to or dependent upon) the host operating system, and hence, is portable and able to run uniformly and consistently across any platform or cloud.
  • Agility: The open source Docker Engine launched the industry standard for containers, with simple developer tools and a universal packaging approach that works on both Linux and Windows operating systems. The container ecosystem has since shifted to engines managed by the Open Container Initiative (OCI). Software developers can continue using agile or DevOps tools and processes for rapid application development and enhancement.
  • Speed: Containers are often referred to as “lightweight,” meaning they share the machine’s operating system (OS) kernel and are not bogged down with the extra overhead of a full OS per application. Not only does this drive higher server efficiencies, it also reduces server and licensing costs while speeding up start-up times, as there is no operating system to boot.
  • Fault isolation: Each containerized application is isolated and operates independently of others. The failure of one container does not affect the continued operation of any other containers. Development teams can identify and correct any technical issues within one container without any downtime in other containers. Also, the container engine can leverage any OS security isolation techniques—such as SELinux access control—to isolate faults within containers.
  • Efficiency: Software running in containerized environments shares the machine’s OS kernel, and application layers within a container can be shared across containers. Thus, containers are inherently smaller than a VM and require less start-up time, allowing far more containers to run on the same compute capacity as a single VM. This drives higher server efficiencies, reducing server and licensing costs.
  • Ease of management: A container orchestration platform automates the installation, scaling, and management of containerized workloads and services. Container orchestration platforms can ease management tasks such as scaling containerized apps, rolling out new versions of apps, and providing monitoring, logging and debugging, among other functions. Kubernetes, perhaps the most popular container orchestration system available, is an open source technology (originally open-sourced by Google, based on its internal project called Borg) that automates the deployment, scaling, and management of Linux containers; a minimal manifest is sketched after this list. Kubernetes works with many container engines, such as Docker, but it also works with any container system that conforms to the Open Container Initiative (OCI) standards for container image formats and runtimes.
  • Security: The isolation of applications as containers inherently prevents the invasion of malicious code from affecting other containers or the host system. Additionally, security permissions can be defined to automatically block unwanted components from entering containers or limit communications with unnecessary resources.
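
To make the orchestration point above concrete, the sketch below is a minimal Kubernetes Deployment manifest; the names and image are hypothetical, and the manifest simply asks Kubernetes to keep three replicas of a containerized service running and to replace them when a new image tag is rolled out:

    # deployment.yaml - minimal sketch; names and image are hypothetical
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-service
    spec:
      replicas: 3                  # Kubernetes keeps three copies running
      selector:
        matchLabels:
          app: my-service
      template:
        metadata:
          labels:
            app: my-service
        spec:
          containers:
          - name: my-service
            image: registry.example.com/team/my-service:1.0
            ports:
            - containerPort: 8000

Applying the manifest with kubectl apply -f deployment.yaml, and later changing only the image tag, lets the platform handle scaling and rolling updates rather than individual hosts.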

Types

The rapid growth in interest and usage of container-based solutions has led to the need for standards around container technology and the approach to packaging software code. The Open Container Initiative (OCI), established in June 2015 by Docker and other industry leaders, is promoting common, minimal, open standards and specifications around container technology. Because of this, the OCI is helping to broaden the choices for open source engines. Users will not be locked into a particular vendor’s technology, but rather they will be able to take advantage of OCI-certified technologies that allow them to build containerized applications using a diverse set of DevOps tools and run these consistently on the infrastructure(s) of their choosing.

Today, Docker is one of the most well-known and widely used container engine technologies, but it is not the only option available. The ecosystem is standardizing on containerd and other alternatives like CoreOS rkt, Mesos Containerizer, LXC Linux Containers, OpenVZ, and CRI-O. Features and defaults may differ, but adopting and leveraging OCI specifications as these evolve will ensure that solutions are vendor-neutral, certified to run on multiple operating systems, and usable in multiple environments.

Microservices and containerization

Software companies large and small are embracing microservices as a superior approach to application development and management, compared to the earlier monolithic model that combines a software application with the associated user interface and underlying database into a single unit on a single server platform. With microservices, a complex application is broken up into a series of smaller, more specialized services, each with its own database and its own business logic. Microservices then communicate with each other through common interfaces such as APIs, typically REST interfaces over HTTP. Using microservices, development teams can focus on updating specific areas of an application without impacting it as a whole, resulting in faster development, testing, and deployment.

The concepts behind microservices and containerization are similar as both are software development practices that essentially transform applications into collections of smaller services or components which are portable, scalable, efficient and easier to manage.

Moreover, microservices and containerization work well when used together. Containers provide a lightweight encapsulation of any application, whether it is a traditional monolith or a modular microservice. A microservice, developed within a container, then gains all of the inherent benefits of containerization—portability in terms of the development process and vendor compatibility (no vendor lock-in), as well as developer agility, fault isolation, server efficiencies, automation of installation, scaling and management, and layers of security, among others.
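
As a small sketch of how containerized microservices fit together (the service names, images, and ports are assumptions for the example), a Compose file can declare two independently packaged services that reach each other by name over a shared network:

    # docker-compose.yml - minimal sketch; services and images are hypothetical
    services:
      orders:
        image: registry.example.com/team/orders:1.0
        ports:
          - "8080:8080"    # the only service published to the outside
      inventory:
        image: registry.example.com/team/inventory:1.0
        # No published ports: other services on the default Compose network
        # can still reach it by name, e.g. http://inventory:8000

Starting both with docker compose up -d gives each service its own container, so either one can be rebuilt, scaled, or replaced without touching the other.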

Today, application development is rapidly moving to the cloud, where teams can build and deploy quickly and efficiently. Cloud-based applications and data are accessible from any internet-connected device, allowing team members to work remotely and on the go. Cloud service providers (CSPs) manage the underlying infrastructure, which saves organizations the cost of servers and other equipment and also provides automated network backups for additional reliability. Cloud infrastructures scale on demand and can dynamically adjust computing resources, capacity, and infrastructure as load requirements change. On top of that, CSPs regularly update offerings, giving users continued access to the latest innovative technology.

Containers, microservices, and cloud computing are working together to bring application development and delivery to new levels not possible with traditional methodologies and environments. These next-generation approaches add agility, efficiency, reliability, and security to the software development lifecycle—all of which leads to faster delivery of applications and enhancements to end users and the market.

The Birth of “Intermodalism”

To realize intermodal cargo transport, all areas of the transport chain had to be integrated. It was not simply a question of putting cargo in containers: the ships, port terminals, trucks and trains had to be adapted to handle the containers.

The Containership

On 26 April 1956, Malcom McLean’s converted World War II tanker, the Ideal X, made its maiden voyage from Port Newark to Houston in the USA. It had a reinforced deck carrying 58 metal container boxes as well as 15,000 tons of bulk petroleum. By the time the container ship docked at the Port of Houston six days later the company was already taking orders to ship goods back to Port Newark in containers. McLean’s enterprise later became known as Sea-Land Services, a company whose ships carried cargo-laden truck trailers between Northern and Southern ports in the USA.

Other companies soon turned to this approach. Two years later, Matson Navigation Company’s ship Hawaiian Merchant began container shipping in the Pacific, carrying 20 containers from Alameda to Honolulu. In 1960, Matson Navigation Company completed construction of the Hawaiian Citizen, the Pacific’s first full container ship. Meanwhile, the first ship specifically designed for transporting containers, Sea-Land’s Gateway City, made its maiden voyage on 4 October 1957 from Port Newark to Miami, establishing a regular service between Port Newark, Miami, Houston and Tampa. It required only two gangs of dockworkers to load and unload, and could move cargo at the rate of 264 tons an hour. Shortly afterwards, the Santa Eliana, operated by Grace Line, became the first fully containerized ship to enter foreign trade when she set sail for Venezuela in January 1960.

The Container

It was a logical next step that container sizes could be standardized so that they could be most efficiently stacked and so that ships, trains, trucks and cranes at the port could be specially fitted or built to a single size specification. This standardization would eventually apply across the global industry.

As early as 1960, international groups that already recognized the potential of container shipping began discussing what the standard container sizes should be. In 1961, the International Organization for Standardization (ISO) set standard sizes. The two most important, and most commonly used sizes even today, are the 20-foot and 40-foot lengths. The 20-foot container, referred to as a Twenty-foot Equivalent Unit (TEU), became the industry standard reference, with cargo volume and vessel capacity now measured in TEUs. The 40-foot container – literally 2 TEUs – became known as the Forty-foot Equivalent Unit (FEU) and is the most frequently used container today.
