
Containerisation: not just for shipping!

Blake Wilkinson Cloud Architect
Publish date: 22nd July 2021

There are horror stories of trying to release a change to one program, only to discover that another program (which has been happily chugging away in the background) has fallen over because the two share a dependency. Or of a defect shipped with a piece of software (it happens!) that could not be rolled back because the rollback would take too long, so the only conceivable fix is to keep moving forwards, leading to a protracted period of downtime and unstable releases. Or of a change that worked perfectly in development but does not work in production!

But containerisation means so much more than just running production workloads. Testers frequently jump between different releases, comparing functionality quirks to pin down when a particular change started impacting the smooth running of the software. It used to be difficult to test software in a sterile environment, one where you could rule out differences in base configuration as the driving factor behind your software's unexpected behaviour.

What is containerisation?

Think of containerisation as an astronaut on a space walk in their space suit. Their space suit has everything they need to maintain their life - oxygen, temperature regulation and flexibility so they can perform the task at hand without too much restriction. Their space suit is sealed off from the outside world. Containers are very much like this. They too are encapsulated and come with their own environment. 

Containerisation also comes with a great degree of flexibility. You can be very light touch, using docker run to launch individual containers on your host. You can group containers into services with docker compose files, taking advantage of pre-configured internal networking and checking your configuration into source control so it is repeatable and shareable. Or you can go the whole way and use a container orchestrator to manage fleets of containers on your public cloud (most public cloud providers offer managed container orchestration, usually Kubernetes). Exactly how far down the rabbit hole you go is at your discretion.
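As a sketch of that middle ground, a minimal compose file might look like the following. The service names, image names, ports and credentials here are illustrative, not Panintelligence's actual configuration:

```yaml
# docker-compose.yml - a minimal, illustrative two-service stack.
services:
  dashboard:
    image: example/dashboard:1.2.0   # illustrative image name and tag
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db                    # compose gives services DNS names on a shared network
    depends_on:
      - db
  db:
    image: mariadb:10.6
    environment:
      MARIADB_ROOT_PASSWORD: example # illustrative; use secrets in practice
```

Checking a file like this into source control makes the whole environment repeatable: docker compose up -d brings both containers up with their internal networking pre-wired.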

Continuing the best practices of DevOps, configuration is pulled forward to the beginning of deployment and stored as code, meaning it is repeatable and eminently testable. Because containers are encapsulated, deployment and upgrade are greatly simplified, and the upgrade process is testable too. Containers are immutable: if one stops working, destroy it and launch a new one in its place. Containers are also specific, each running a single, well-defined workload. When it does come to upgrading, you can launch a new container alongside the old and migrate traffic at your own pace - big bang, or gradual and tested.
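That destroy-and-relaunch cycle is, in the simplest case, just two commands. The container name and image tag below are illustrative:

```shell
# A misbehaving container is not patched in place; it is replaced.
docker rm -f dashboard                      # destroy the broken container
docker run -d --name dashboard -p 8080:8080 \
  example/dashboard:1.2.1                   # launch a fresh one from a known image
```

Because the image is the single source of truth, the replacement starts from a clean, known state every time.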

A real world example

A recent example came from investigating an issue with Panintelligence's test team, who have recently increased their adoption of Docker. They took a test image from a recent build and found it was not launching correctly. The logs showed the application was not connecting properly to the repository database - a MariaDB instance configured to run in a separate container, external to the application (another win for containerisation!). Was the issue in the compose file? One line of the compose file was changed to point from the broken version to a previously released version and voila! The dashboard worked flawlessly, confirming that the problem was in the new release.

In the old world, uninstalling and reinstalling an old version might take 20 minutes, and lingering configuration could sour the results of the test. With containerisation, the switch - a simple edit to the compose file and a single command - took just 2 minutes, and confirmed there was nothing left on the host that could have changed the results. Moreover, because the configuration is stored in the compose file, it can be shared easily with anyone who wants to reproduce the issue, reducing the time to fix.
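The rollback described above amounts to a one-line change in the compose file, something like the following sketch (the service name and image tags are illustrative):

```yaml
services:
  dashboard:
    # image: example/dashboard:3.5.1   # the broken build under investigation
    image: example/dashboard:3.5.0     # pinned back to the last known-good release
```

Re-running docker compose up -d then recreates just the changed service in a fresh container, with nothing left over from the previous install to muddy the test.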

If you’d like to try containerisation, you can ask to be whitelisted on Panintelligence’s Docker Hub account, where each new release is published. Upgrading can be as simple as pulling the new image and running it! There are best practices associated with containerisation, but the beauty of it all is that it’s free and widely adopted, so there’s plenty of help out there - including our in-house containerisation evangelists!
