The Appetite to Eat Efficiency

Performance Irony

“Any gain in bandwidth capacity through efficiency techniques will be short-lived. Increases in functionality, bad practice or abstracted inefficiency will quickly absorb that initial saving.” – me

I was flicking through the IKEA catalogue when the advert (see diagram) struck a chord with me. An LED bulb had been created that was 85% more energy efficient than a traditional bulb; however, the designer and retailer had seen fit to put 16 of them into a single housing. I calculated that they had created a fixture which would use 140% more energy than a single incandescent bulb. As we innovate and create more efficient ways of doing things, the market appears to unconsciously find a way of utilising the spare capacity given.

I see many parallels within the IT industry. I’ve seen chipsets get faster and the supporting software get bloated (MS for instance), negating the increased and intended benefits. Within the web industry we see that as average bandwidth capability increases, the average size of web pages increases. Every low-level improvement (SPDY, broadband, CDNs) seems to be met by a general tendency to gobble up the spare capacity given. It’s almost as if there is an unconscious and self-regulating mechanism at play.

No matter how much we increase the available capacity, the marketplace seems to find a way of utilising it, through bad practice or good. I have a theory as to one reason why this may be the case, and it is one of the costs associated with encapsulation and abstraction. The hardware, OS, software services, compiler, and the back, middle and presentation tiers are essentially abstracted from one another. The people that implement them (developers, web designers) are so specialised and siloed that they have little regard for, or interest in, these components until they realise the application is unacceptably slow; then improvements are made only until application performance becomes acceptable, thus filling all available capacity.

I’m going to make an attempt at a general industry Rule:

Buksh’s Web Capacity Rule:

Any gain in bandwidth capacity through efficiency techniques will be short-lived, as increases in functionality, bad practice or abstracted inefficiency will quickly absorb that initial saving.

By implication, the rule also says that the marketplace will size the average page to reach its target audience within an acceptable industry response time (2-3 seconds):

Average Page Size = ALT * ABC

ALT = Average Acceptable Load Time; let’s say this is 3 seconds.

ABC = Average Bandwidth Capacity of the target market.
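
To make the relationship concrete, here is a minimal sketch of the rule in Python. The function name and the 1 KB = 1000 bytes simplification are my own illustrative assumptions, not part of the rule itself:

    # Minimal sketch of Buksh's Web Capacity Rule: Average Page Size = ALT * ABC.
    def page_size_budget_kb(alt_seconds: float, abc_mbps: float) -> float:
        """Return the average page size (in KB) the market will tend towards."""
        kb_per_second = abc_mbps * 1000 / 8   # megabits/s -> kilobytes/s (1 KB = 1000 bytes)
        return alt_seconds * kb_per_second

    # Example: a 2.7 Mbps audience with a 3-second acceptable load time
    # gives a budget of roughly 1012 KB, i.e. about 1 MB per page.
    print(page_size_budget_kb(3, 2.7))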

This means we can probably work out the average bandwidth capacity of the (western) world by taking the average page size and dividing it by 3 seconds.

…looking around, the average page size is now 1024 KB (according to Strangeloop). So:

Average Bandwidth Capacity of consumers = 341 KB/s ≈ 0.3 MB/s ≈ 2.7 Mbps

This means that if a page is to load in 1 second, it should be approximately 0.3 MB in size. (Of course this is a very coarse calculation, but it should provide a good guide.)
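
As a rough check of that arithmetic (taking the ~1024 KB Strangeloop average and a 3-second acceptable load time as given, and the same 1 KB = 1000 bytes simplification as above):

    # Rough check of the numbers quoted above; inputs are the assumptions stated in the post.
    avg_page_kb = 1024            # average page size reported by Strangeloop, in KB
    acceptable_load_s = 3         # assumed Average Acceptable Load Time, in seconds

    kb_per_second = avg_page_kb / acceptable_load_s   # ~341 KB/s
    mbps = kb_per_second * 8 / 1000                   # ~2.7 Mbps
    one_second_budget_mb = kb_per_second / 1000       # ~0.34 MB budget for a 1-second page

    print(f"{kb_per_second:.0f} KB/s ~= {mbps:.1f} Mbps; 1-second page budget ~= {one_second_budget_mb:.2f} MB")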

This rule has some other implications. It’s fairly safe to assume that average bandwidth (to consumers) will continue to increase, which means the industry is forever going to be in a continuous game of cat and mouse. Pages will continue to grow in average size no matter what, and client browsers will experience ever-larger demands on their resources and continue to grow in complexity.

Dancing is better than waddling

So, back to the lights: I suspect that if we ever reach a point where too much performance capacity is cheaply given, then as a whole we will have a tendency to over-utilise it and become bloated. Our nature and practice seem to dictate this. Having performance and capacity restrictions ironically makes us leaner, meaner, more inventive and more responsive. I think that if we get to a point where bandwidth capacity isn’t a restricting factor, it will tend to make us lazy, fat and bloated, with a host of unintentional and indirect consequences. Thankfully, at present those that are lean dance a lot better than those that are not; I hope it remains that way.

2 thoughts on “The Appetite to Eat Efficiency”

  1. You need to draw a distinction between throughput and latency. Most (if not all) of the improvements over the last 20 years have been with throughput. Latency hasn’t improved because the speed of light hasn’t gotten faster.

    I have two posts that may be relevant here:
    http://tech.bluesmoon.info/2010/08/equation-to-predict-pages-roundtrip.html
    http://tech.bluesmoon.info/2010/12/using-bandwidth-to-mitigate-latency.html

    Hope they help you take this further.

    1. Hi Phillip, this calculation is an average of good and not-so-good practices (in my experience there is a mix of both in the industry) and of the capacity of the marketplace to deliver. The market is assumed to be local to the application’s target audience, so we can assume a mixture of CDN use in the calc. I did consider latency effects (TCP/IP packets average around 600 bytes), but this becomes unwieldy once we start factoring in parallel HTTP requests and then “what if” you leverage a CDN for static resources. Also, as the calc is an average of ‘everything’, this is indirectly built into the final answer. PS: Thanks, and your pieces are very good.
