From the Blog

5 Reasons Container Infrastructure Deployments Fail – And How You Can Avoid Them

Developers love containers.

More than 50% of enterprises have adopted container technology or plan to in 2018. According to a recent report by Diamanti, these initiatives will be led by developers and platform architects. Another survey, from ClearPath, states that container adoption is set to outpace even DevOps adoption.

This shouldn’t come as a surprise to anyone. Since the release of Docker in 2013, containers have empowered developers to move faster than they could before, while enabling the deployment of new technologies like serverless architectures and providing uniformity across software infrastructures.

It’s clear containers can drive critical business transformation. However, not all container deployments are successful. Very often container implementations turn into costly projects that yield little to no business impact. That’s why in today’s blog post, we’re going to take a deep dive into why containers fail and how organizations can successfully arrive at a containerized environment.

We will address four key issues that we believe cause organizations to fail in their first attempt to move to a container architecture, and how to avoid these pitfalls. Let’s begin!


#1 Enterprises Share Data, Public Clouds Keep It Separated

Whether you run a large-scale banking application or industrial IoT software responsible for latency-sensitive sensor interactions, the ability to share your compute infrastructure and data natively is critical. It directly affects your ability to scale via any software scalability model (e.g. microservices) or infrastructure stack. Most container architectures were adopted by public cloud vendors for their own use cases, meaning networking, security, and isolation of data were their top priorities. That works well for multi-tenancy but is problematic when creating enterprise architectures.

Container security is very important, but you must also ensure that every component of the architecture can communicate natively with other components and data. This ability to share data securely is not always inherent to containers.

As we look at the widespread adoption of containers, from Windows to Linux to software destined for an L4 microkernel, it’s important to understand the components that allow an enterprise architecture to truly scale across organizational and technology lines.

More important still is the ability for enterprise organizations to run a consistent network architecture and have software work cohesively across a single container ecosystem (e.g. hybrid clouds). In hospitals, for example, which run multiple data and information systems, life or death can hinge on the sharing of critical patient-care information. When communication layers do not allow disparate software architectures to talk on a single data plane, the cost can be a person’s life. This lack of information sharing is common across industries and enterprises that rely on traditional public-cloud communication approaches, including banking and high-tech environments.

We recommend that you not only map out the business functions you are supporting (see #4 below, especially for enterprise tech teams), but also understand every component of your solution’s stack and confirm that it maps to twelve-factor principles (or at least runs on an OS compatible with your container technology).
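One concrete twelve-factor check is that configuration lives in the environment rather than being baked into the image. A minimal sketch of that audit, assuming a Docker environment (the image name `myapp` and the variable `DATABASE_URL` are illustrative, not part of any specific product):

```shell
# Twelve-factor config check: pass configuration at run time via the
# environment instead of baking it into the image at build time.
# Image name and variable are illustrative.
docker run --rm \
  -e DATABASE_URL="postgres://db.internal:5432/app" \
  myapp:latest

# Spot-check that no credentials were baked into the image's build layers.
docker history --no-trunc myapp:latest | grep -qi password \
  && echo "WARNING: possible secret baked into an image layer"
```

Images that pass a check like this can move between OS hosts and container runtimes with far less rework, which is exactly the portability the stack-mapping exercise is meant to protect.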


#2 The Ability To Communicate Across Your Container Architecture is Key

In many organizations, container infrastructure is first set up by application teams and software development organizations. However, this places containers far from the systems experts and network gurus who are key to ensuring your communications platform can scale.

Most enterprise products default to bridge- or NAT-based implementations, which work well for a few dozen systems. However, organizations move to containers precisely to tackle workloads that were not previously achievable.

Unless the container network allows rapid scale-up and scale-down of applications and system namespaces, the software environment will not be able to take advantage of the maturity containers can provide. We recommend a native layer 3 architecture, preferably using mature tunneling or other native communication approaches.
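To make the contrast concrete, here is a hedged sketch of moving off the default bridge/NAT network onto a routable network using Docker's macvlan driver, one native approach among several (the subnet, gateway, parent interface, and network name are example values that must match your environment):

```shell
# Default bridge networking: containers are NAT-masqueraded behind the
# host, which is fine for a few dozen systems but hides their addresses.
docker network ls --filter driver=bridge

# One native alternative: a macvlan network that places containers
# directly on the physical segment with their own routable addresses.
# Subnet, gateway, and parent interface below are illustrative.
docker network create -d macvlan \
  --subnet=10.0.40.0/24 \
  --gateway=10.0.40.1 \
  -o parent=eth0 \
  routable-net

# A container on this network gets a first-class address that peers
# and other hosts can reach without port mapping.
docker run --rm --network=routable-net --ip=10.0.40.50 alpine ip addr
```

The design trade-off is that routable addressing pushes container networking back into the hands of the network team, which is precisely where this section argues it belongs.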

Evolute focuses on ensuring the network reaches its effective architecture whether it needs to scale across one hundred systems or tens of thousands. With simplicity at scale and native communication across Layer 2, we’ve been able to extend connectivity across operating systems, allowing Windows and Linux containers to communicate natively as part of the same backplane. This foundation is critical, as organizations need the ability to share information and resources.


#3 The Only Thing More Important Than Features Is Security

Containers were created as a simple isolation unit at the OS level. Isolation occurred between processes, so system security was not a top priority in the design. Much like the process-per-tab isolation Chrome brought to the browser, containers were focused on isolating a single application. However, container security can no longer be an afterthought, especially when the use case requires no human intervention.

For enterprises, security is one of the top pitfalls on the path to maturity. Enterprises need to consider how they will protect the privileged user. If you walk through the CIS benchmark, an industry standard, you’ll notice that 50% of all security issues involve authentication and authorization. It’s therefore critical that an organization’s security plan and systems validate application security. In the Evolute platform, we’ve identified the four key inputs to any application workload baseline.

This allows the privileged user, either via the `evolute container` command or our management interface, to ensure privileged software execution is managed by the appropriate user. While there are many other security concerns, the ability to advance policy-driven networking is critical to ensuring containerized workloads can scale to meet the environment’s demand.

Business leaders can be confident that the layers of host, network, and base application security meet existing standards and scale to support your favorite BPF packet-filtering or vulnerability-assessment tool, ensuring you meet the baseline for a successful adoption.
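As a concrete instance of the authentication/authorization items that dominate the CIS Docker Benchmark, one widely cited check is that container processes should not run as root. A minimal sketch, assuming a Docker environment (the image name, user name, and base image tag are illustrative):

```shell
# Build an image whose process runs as an unprivileged user, addressing
# one of the CIS Docker Benchmark's authorization checks.
# Image name, user name, and base tag are illustrative.
cat > Dockerfile <<'EOF'
FROM alpine:3.19
# Create a dedicated, unprivileged service account.
RUN addgroup -S app && adduser -S app -G app
# Drop root before the workload starts.
USER app
CMD ["id", "-u"]
EOF
docker build -t nonroot-demo .

# Verify the container's process does not run as UID 0; the --user flag
# can also enforce the same constraint at run time.
docker run --rm nonroot-demo
docker run --rm --user app nonroot-demo
```

Baking the privilege drop into the image, rather than relying on each operator to pass `--user`, keeps the control in the artifact itself, which is what makes it auditable at scale.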


#4 Containers are Nice, Business Is Messy

While most technologists can tell you the advantages of containers (e.g. 1,000 MHz and 100 MB saved, applications up in 30 seconds, the ability to move to microservices), the truth is that most enterprises do not take a business-first approach to the technology. This is one of the top issues engineers encounter with containers: the technology can solve too many problems across an enterprise, an enterprise application, or a software infrastructure. To be successful, organizations must first focus and choose the primary goal they want to achieve with containers.

As software, system, and core business engineers develop their enterprise environments, they would be wise to heed these words from Michael Porter: “strategy is choosing what not to do”. So it must be with containers in any environment. You need to decide whether your true north is microservices and serverless, or simply cost reduction. The latter was actually the first use case at companies like Google and Apple, which were far outpacing their physical hardware and overpaying for virtualization infrastructure. Evolute simplified our product model and platform to keep each organization on track as it addresses its implementation objectives. Our platform can solve what we believe is the large majority of container scalability, adoption, and technology problems.

When it comes to application automation and infrastructure, our goal of providing our migration, infrastructure, and management tools separately ensures we’re enabling enterprises to achieve the best business impact across the organization.



While the enterprise is projected to be the top adopter of container technology over the next year, key implementation success factors include:

  1. Focusing on true business objectives and maintaining convergence between desired business outcomes and technology implementation
  2. Implementing a security-first architecture
  3. Ensuring containers are deployed with native communication
  4. Confirming containers have a model which allows them to share information across the enterprise

Architects, enterprise IT managers, and software engineers need guidance to build an environment in which they can quickly and successfully achieve enterprise objectives with container technology. Stay tuned as we review how Evolute ensures the base platform for a container is successfully realized independent of its use case, providing best practices you can apply as you deploy and operate container technology.