These days, thanks to cloud-based, flexible infrastructures, systems can scale up within seconds when the need arises, and if something bad happens to one instance, other instances take over immediately. Even system upgrades and application releases can be done on the fly without impacting availability. As a result, we simply don’t see many incidents anymore that actually cause downtime; systems can maintain 99.999% availability over periods of multiple years.
But 100% availability doesn’t guarantee a perfect customer experience. Modern web applications are complex beasts consisting of many individual components, many of which the site owner can’t even control. If one of these components has an issue and sits on the critical path, the customer experiences a slowdown. Let’s be honest: this is what we all run into almost every day, as we spend more and more time on the Internet, and not always on bleeding-edge devices or under perfect network conditions.
The good thing about availability is that it is binary: either your system is available or it isn’t, and that makes the issue relatively easy to detect. Performance degradation is much harder to detect and often even harder to fix. All the more reason to focus on delivering good performance from the start, in the design and development phase of an application. At the very least, be explicit about what performance you expect a system to deliver, and how you will measure it.
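To see why the two are so different to monitor, consider a minimal sketch (with simulated data and purely illustrative numbers): availability collapses each check into a yes/no and averages them, while performance is a whole distribution that you have to summarize, for example with percentiles. A system can score a perfect availability ratio while a slow tail hides in its latency data.

```python
def availability(checks):
    """Fraction of checks that succeeded: each sample is binary."""
    return sum(checks) / len(checks)

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(samples)
    index = max(0, round(p / 100 * len(ordered)) - 1)
    return ordered[index]

# Simulated monitoring data: every single request succeeded...
checks = [True] * 100
# ...but 10% of requests were painfully slow (latencies in ms).
latencies = [120] * 90 + [2500] * 10

print(f"availability: {availability(checks):.3f}")      # 1.000 -- looks perfect
print(f"p50 latency:  {percentile(latencies, 50)} ms")  # 120 ms -- median looks fine too
print(f"p95 latency:  {percentile(latencies, 95)} ms")  # 2500 ms -- the slow tail surfaces here
```

An uptime dashboard built on the first number alone would show green the entire time, which is exactly why performance expectations need their own explicit targets and measurements.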
Lighthouse can help you detect and fix performance issues, and our Dynalight platform helps you avoid them altogether through advanced optimization and automation. Please contact us if you would like to find out more!