All CDNs are equal, but some are more equal than others!

Customers frequently ask us why we work with multiple Content Distribution Networks (CDNs), and I think the answer to this question is interesting for a larger audience.

Let’s start with a simple comparison. Think of a CDN like a car. All cars share the same basic characteristics, but there are also a lot of differences that are not always immediately visible. Some cars are better equipped for shopping and others for moving house, some are better for families with children and others are better for men in a mid-life crisis. The same is true for CDNs.

There are probably hundreds of different CDNs out there, and within this large group there are around 15 that we would qualify as serious offerings for companies seeking a single global solution. Then there is a similar number of good providers for specific geographies like China, Russia or South America. Which CDN is the right solution for your company depends on your specific requirements; there is no generic answer to this question.

When customers ask us to help them select a CDN, we follow a proven methodology that starts with an assessment of your applications, the underlying infrastructure, key markets and strategic plans. Based on the outcome of this assessment, we select a shortlist of 3-5 providers with either a global or regional footprint and ask these vendors to deliver a financial proposal. Then we run a proof-of-concept phase in which we field-test each of these providers on your applications. We monitor these tests using Catchpoint Synthetic Monitoring, which offers datacenter-hosted monitoring in every relevant location and provides insight into all relevant metrics. With these results we can clearly rank the technical performance of each candidate, combine that ranking with the financials and deliver a clear recommendation.
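
To make that last step a bit more concrete, here is a minimal sketch of how such a ranking could be combined, assuming a simple weighted score over one performance metric and price. The vendors, numbers and weights are invented for illustration and do not come from any real assessment or from the Catchpoint API.

```python
# Illustrative only: rank shortlisted CDN vendors by combining a measured
# performance metric with price. In a real project the metrics would come
# from the proof-of-concept test results and the prices from the vendors'
# financial proposals; everything below is made up.

CANDIDATES = {
    # vendor: (median time-to-first-byte in ms across test locations, price per TB)
    "CDN A": (38, 22.0),
    "CDN B": (45, 15.0),
    "CDN C": (52, 11.5),
}

PERF_WEIGHT = 0.6   # how much performance counts relative to price
PRICE_WEIGHT = 0.4

def normalise(value, best, worst):
    """Map a value onto a 0..1 scale where the best observed value scores 1."""
    if best == worst:
        return 1.0
    return (worst - value) / (worst - best)

def rank(candidates):
    ttfbs = [ttfb for ttfb, _ in candidates.values()]
    prices = [price for _, price in candidates.values()]
    scores = {}
    for name, (ttfb, price) in candidates.items():
        perf_score = normalise(ttfb, min(ttfbs), max(ttfbs))
        price_score = normalise(price, min(prices), max(prices))
        scores[name] = PERF_WEIGHT * perf_score + PRICE_WEIGHT * price_score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for vendor, score in rank(CANDIDATES):
    print(f"{vendor}: {score:.2f}")
```

A real ranking would of course combine more metrics and more test locations per application, but the principle is the same.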

The output of these assessments has shown us that a number of CDNs end up on these shortlists more often than others. As our Dynalight platform benefits from tight integration with the CDN, we have developed formal partnerships with some of these vendors and will probably seek additional partnerships with others in the near future. These partnerships do not influence the outcome of a CDN selection project in any way.

If you are interested in selecting a CDN or comparing the performance of your current CDN with some competitors, Lighthouse is the perfect partner for you!

Slow is the new Down

This is a catchphrase that has kept popping up over the last couple of years, but what does it actually mean? In the good old days of mainframes and early Unix servers it was quite common for a system to come grinding to a halt and then stop working completely. From the users’ perspective the system was down at that point, and you had to wait for it to be up and running again. In the early years of the internet, the infrastructure that supported a website was still highly inflexible and had a fixed capacity. If there was too much traffic, or a single point of failure got triggered, the same thing would happen and no one would be able to access the system.

These days, thanks to cloud-based, flexible infrastructures, systems can scale up within seconds if the need arises, and if something bad happens to one instance, other instances take over immediately. Even system upgrades or application releases can be done on the fly without impacting availability. As a result, we simply don’t see many incidents anymore that actually cause downtime. Systems can maintain 99.999% availability over periods of multiple years.
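
To put that number in perspective, a quick back-of-the-envelope calculation shows how little downtime each of the common availability levels actually allows per year:

```python
# How much downtime per year does each availability level actually allow?
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (0.99, 0.999, 0.9999, 0.99999):
    allowed_minutes = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> about {allowed_minutes:.1f} minutes of downtime per year")
```

For five nines that works out to barely five minutes of downtime per year.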

But 100% availability doesn’t always guarantee a perfect customer experience. Modern web applications are complex beasts that consist of many individual components, many of which the owner of the site can’t even control. If one of these objects has an issue and that object is on the critical path, the customer will experience a slowdown. Let’s be honest: this is what we all experience almost every day, as we spend more and more time on the Internet, not always on bleeding-edge devices or under perfect network conditions.

The good thing about availability is that it is binary: either your system is available or it isn’t, and that makes the issue relatively easy to detect. Performance degradation is much harder to detect and often even harder to fix. All the more reason to focus on delivering good performance from the design and development phase of an application onwards, and at the very least to be really clear about what performance you expect a system to deliver (and how to measure it).
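
As a minimal sketch of what "being really clear" could look like, the snippet below checks a set of measured page load times against an explicit, percentile-based objective. The budget, the percentile and the sample measurements are assumptions for illustration, not a recommended standard.

```python
# Make the performance objective explicit and testable: which metric,
# which percentile of users, and which budget. All numbers are examples.
import statistics

PAGE_LOAD_BUDGET_S = 2.5   # agreed objective for this page type, in seconds
PERCENTILE = 75            # which part of the user population we care about

def check_objective(load_times_s):
    """Return (within_budget, percentile_value) for the chosen percentile."""
    value = statistics.quantiles(load_times_s, n=100)[PERCENTILE - 1]
    return value <= PAGE_LOAD_BUDGET_S, value

measurements = [1.8, 2.1, 2.4, 2.0, 3.9, 2.2, 1.9, 2.6, 2.3, 2.8]
ok, p_value = check_objective(measurements)
print(f"p{PERCENTILE} = {p_value:.2f}s -> {'within' if ok else 'over'} budget")
```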

Lighthouse can help you to detect and fix performance issues and our Dynalight platform is able to help you avoid issues through advanced optimization and automation. Please contact us if you would like to find out more!

Big isn’t always Better!

One of the things Lighthouse is working on is image optimization. What we mean by optimization is serving images at the optimal size and quality to your users in the fastest possible way, always and everywhere.

Last week I had an interesting discussion with a CDO who is responsible for over 20 different websites operating mostly across Europe. We talked about his priorities regarding page performance and how they were decided. He explained that they use a Real User Monitoring (RUM) tool to track each page that is accessed. The pages are clustered into Gold, Silver and Bronze based on their relevance to the business, and each cluster has its own performance objectives. Every week, a report is produced that shows the top 20 pages in the Gold and Silver clusters that are not meeting their objective, and the team performs a quick analysis to identify the most likely root cause and propose a quick fix. If the fix is successful, the page drops out of the top-20 report the next week. If the underlying issue is complex or related to infrastructure, software components or external parties, a formal project is launched to guarantee a longer-term solution and the pages impacted by it are tagged accordingly.
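
As an illustration of the reporting logic he described, here is a hypothetical sketch of how such a weekly top-20 report could be produced from RUM data. The tiers, objectives and sample measurements are invented; they are not his actual configuration.

```python
# Hypothetical sketch: every monitored page belongs to a tier with its own
# objective, and the weekly report lists the worst Gold and Silver offenders.

OBJECTIVES_S = {"gold": 2.0, "silver": 3.0, "bronze": 4.0}  # example budgets in seconds
TOP_N = 20

# (url, tier, median page load time in seconds from the RUM tool) - sample data
rum_data = [
    ("/checkout", "gold", 2.9),
    ("/product/123", "gold", 1.8),
    ("/search", "silver", 3.6),
    ("/blog/some-post", "bronze", 5.1),
]

def weekly_report(rows):
    offenders = [
        (url, tier, load, load - OBJECTIVES_S[tier])
        for url, tier, load in rows
        if tier in ("gold", "silver") and load > OBJECTIVES_S[tier]
    ]
    # worst overshoot first, capped at the top N pages
    return sorted(offenders, key=lambda row: row[3], reverse=True)[:TOP_N]

for url, tier, load, over in weekly_report(rum_data):
    print(f"{url} ({tier}): {load:.1f}s, {over:.1f}s over objective")
```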

I think that this is a great way to manage performance. Monitor all that matters. Focus on the things that matter most. Get things done.

Then we talked about the most frequent root causes for pages to appear in the report, and the immediate answer was image size. Content is produced by a mix of agencies and changes frequently. Time to market is always an issue and resources are thin, so people tend to focus on the home page and landing pages and not so much on anything else. As a result, some pages contain images that are way too big (6 MB is not an exception) and that have a clear negative impact on page load times. He was also honest enough to mention a second issue with images that is much harder to identify: images that are resized too aggressively, leaving the quality below what it should be.
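
To show what catching both problems could look like, here is a rough sketch that flags images that are far heavier than needed as well as images that are poorly matched to the slot they fill. The byte budget, file name and display width are illustrative assumptions, and this is not how Dynalight itself works.

```python
# Rough audit of a single image against the slot it is displayed in.
# Flags two problems: files that are too heavy, and source images whose
# width is badly matched to the display width (too small means visible
# quality loss after upscaling, far too large means wasted bytes).
import os
from PIL import Image  # pip install Pillow

MAX_BYTES = 300_000  # example budget for a hero image

def audit_image(path, display_width_px):
    issues = []
    size_bytes = os.path.getsize(path)
    with Image.open(path) as img:
        width, _height = img.size
    if size_bytes > MAX_BYTES:
        issues.append(f"too heavy: {size_bytes / 1_000_000:.1f} MB")
    if width < display_width_px:
        issues.append(f"too small: {width}px source for a {display_width_px}px slot")
    elif width > 2 * display_width_px:
        issues.append(f"oversized: {width}px source for a {display_width_px}px slot")
    return issues

# Example usage with a placeholder file name and a 1200px wide slot.
for problem in audit_image("hero.jpg", display_width_px=1200):
    print(problem)
```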

We are planning a POC of our Dynalight platform to automatically resolve both issues once and for all!

Oh, by the way, the second most important root cause is too much JavaScript. It all needs to be downloaded and executed on the customer’s device and, very often, there is just too much of it. More about this later.
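
If you want a first impression of how much JavaScript a page ships, one simple approach is to sum the script bytes in a HAR file exported from your browser’s developer tools; the sketch below assumes a hypothetical homepage.har export.

```python
# Sum the transferred bytes of JavaScript responses in a HAR export.
import json

def javascript_weight(har_path):
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    total = 0
    for entry in har["log"]["entries"]:
        mime = entry["response"]["content"].get("mimeType", "")
        if "javascript" in mime:
            # bodySize can be -1 for cached responses, so ignore those
            total += max(entry["response"].get("bodySize", 0), 0)
    return total

print(f"JavaScript payload: {javascript_weight('homepage.har') / 1024:.0f} KB")
```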

Interested in finding out more about how we can help you? Let us know!