
Responsive Image Letterboxing

May 28, 2014

Introduction

This document shows how to do responsive letterboxing of images. Letterboxing is needed when the image to be displayed on a webpage may not match the expected aspect ratio. For example, if a webpage expects a 16:9 image but the image loaded is 200×300, it could break the entire layout. What we want to achieve is: scale to 100% width just as an image of the expected ratio would, but keep the height under control and letterbox the image on the left and right, as shown below. The yellow area denotes the expected image ratio, while the received image fits into the container without distorting its own original ratio.

[Screenshot: an image letterboxed inside a 16:9 container; the yellow area shows the expected ratio]

Responsive Letterboxing

With responsiveness in the picture, we cannot give our containers a hardcoded width and height. A typical responsive element takes 100% (or some percentage) of its parent, and so on up the tree. Hence, the first step in being responsive is a flexible width. What about height? The image height will always grow or shrink based on the width. This brings us to our golden steps for getting a responsive image with letterboxing:
1) Figure out the current width of the parent container.
2) Determine mathematically the “would-be” height of a “perfect” image (say 16:9) when placed inside such a container and stretched to 100%. The formula is: (width * 9) / 16.
3) Set the calculated number as the max height on the container element.
4) To stay responsive, listen for window resize and repeat steps 1, 2 and 3.
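The height calculation in step 2 can be expressed as a small standalone helper (a hypothetical sketch, not part of the original demo):

```javascript
// Compute the "would-be" height of a perfectly-proportioned image
// stretched to a given container width. Defaults to a 16:9 ratio.
function expectedHeight(containerWidth, ratioW, ratioH) {
  ratioW = ratioW || 16;
  ratioH = ratioH || 9;
  return (containerWidth * ratioH) / ratioW;
}

// e.g. a 980px-wide container letterboxed for 16:9 content:
expectedHeight(980); // 551.25
```

The result of this helper is what gets applied as max-height in step 3.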

Please find the demo here.

JS/CSS Required Changes

Setting max height:

var imgUrl = $('.url-input').val(), $container = $(".image-container");
function setMaxHeight() {
     // 0.5625 = 9 / 16, the height-to-width ratio of a 16:9 image
     var containerWidth = $container.width(), expectedImageHeight = 0.5625 * containerWidth;
     $container.css('max-height', expectedImageHeight + "px");
}

Listen for window resize and set max height again:

$(window).on('resize', function() {
     setMaxHeight();
});

Styles needed to display per our letterboxing requirements:

.image-container {
    max-width: 980px;
    width: 100%;
    height: 100%;
    background-color: #ffff00;
}

.image-container img {
    height: 100%;
    margin-left: auto;
    margin-right: auto;
    display: block;
}

Conclusion

That is all, folks. With a little help from JS, we can design a responsive website that handles images of any ratio. Again, please find the demo link here.

May 5, 2014

I host many different Drupal sites on the same box, and so that I have lots of flexibility in my Drupal platforms, I generally use a separate webroot for each site (only one site per webroot).

Adding cron tasks via drush gets pretty cumbersome with each new site I create. Normally I would create a shell script along the lines of

cd /var/www/web1/htdocs
drush cron
cd /var/www/web2/htdocs
drush cron
cd /var/www/web3/htdocs
drush cron

 

but that gets old pretty quickly. Here’s my answer: a single script that runs them all. It loops through all dirs matching /var/www/*/htdocs (my hosting directory structure pattern), checks sites/default/settings.php in each to see if it’s a Drupal installation, and executes sudo -u <username> drush cron in each dir.

On my box, each web root has its own user and group, hence the sudo for file permissions. I added the following to /etc/sudoers to support this:

 ALL ALL=(ALL) SETENV: NOPASSWD: /usr/bin/drush --quiet cron

and the following to my crontab (vixie cron), as I put this script in ~/bin:

 */15 * * * * $HOME/bin/drupal-run-cron

Additionally, I wanted lots of logging when I ran it by hand, so the script checks whether I am running it or not, and sets a variable to know whether to give more debugging information.

#!/bin/sh
# drupal-run-cron
# process all of the cron jobs for all of the drupal web sites on this box
# written by Joseph Cheek, joseph@cheek.com, 30 Apr 2014
# released into the public domain.

# set DEBUG if i'm running by hand
tty -s && DEBUG=1 || DEBUG=

# create a temp file to save drush output
# use $TMPDIR if set, otherwise /tmp
[ -z "$TMPDIR" ] && TMPDIR=/tmp
TMPFILE=$(mktemp $TMPDIR/$(basename $0).XXXXXX)

# rotate through all of my web dirs
for a in /var/www/*/htdocs; do

# save web site name to tmp file; debug, show on stdout too
echo $(basename ${a/htdocs/}): > $TMPFILE
[ $DEBUG ] && cat $TMPFILE

# change to the relevant dir
cd $a

# if it's a drupal dir, find the owner of the web site, run drush cron,
# and save output
[ -e sites/default/settings.php ] &&
WR_OWNER=$(ls -l sites/default/settings.php | cut -d ' ' -f 3) &&
echo '(as '$WR_OWNER')' >> $TMPFILE &&
sudo -u $WR_OWNER COLUMNS=80 drush --quiet cron >> $TMPFILE 2>&1

# add a few newlines to make the output more readable
echo -e \\n >> $TMPFILE

# debug? show the output, minus the web site name that's already been shown
[ $DEBUG ] && tail -n +2 $TMPFILE

# not debug? show the entire output if any errors or warnings were shown
[ ! $DEBUG ] && egrep -qs '(warning|error)' $TMPFILE && cat $TMPFILE

# delete the temp file
rm -f $TMPFILE

done

Kudos to drush.ws for some helpful information.

Chrome 34 and Responsive Images

April 22, 2014

With the release of Chrome 34, Google now supports the srcset attribute of the IMG tag. This is one necessary part of responsive images, but is nowhere near all the functionality you would need. After playing around with it, I’ve found some unexpected behavior. I’m not sure if it’s a bug or a feature, but I reported it anyway, just in case.

The general idea is to use the standard IMG tag, but instruct the browser to load an alternate image if the device has a DPR (window.devicePixelRatio) > 1, and if the srcset attribute lists a URL for a matching PDD (pixel density descriptor). Here’s an example where I want the browser to load a gray 155×114 image if srcset is not supported, a blue 155×114 image if srcset is supported but the DPR is 1, and a red 310×228 image at 155×114 size if srcset is supported and the DPR is 2. Even though I’m not explicitly specifying the dimensions of the image, all three variants are displayed at a width of 155 and a height of 114.

<img alt="Sample One" src="http://dummyimage.com/155x114/eee/000.gif&text=155+x+114"
srcset="http://dummyimage.com/155x114/009/fff.gif&text=155+x+114 1x,
http://dummyimage.com/310x228/f00/000.gif&text=310+x+228 2x">

So far, so good. So what happens if I add a green 620×456 image with a PDD of 4? I would expect any device with a DPR of 4 to display the green image and a device with a DPR of 2 to continue to display the red image.

<img alt="Sample Two" src="http://dummyimage.com/155x114/eee/000.gif&text=155+x+114"
srcset="http://dummyimage.com/155x114/009/fff.gif&text=155+x+114 1x,
http://dummyimage.com/310x228/f00/000.gif&text=310+x+228 2x,
http://dummyimage.com/620x456/0f0/fff.gif&text=620+x+456 4x">

I don’t have a device with a 4 DPR screen, but my retina device with a 2 DPR continues to display the red image.

So what should happen if there isn’t an exact match between a device’s DPR and the tag’s PDD? If I’m reading the specification correctly, I would expect a 2 DPR device to fall back to the 1 PDD src and display the blue image.

<img alt="Sample Three" src="http://dummyimage.com/155x114/eee/000.gif&text=155+x+114"
srcset="http://dummyimage.com/155x114/009/fff.gif&text=155+x+114 1x,
http://dummyimage.com/620x456/0f0/fff.gif&text=620+x+456 4x">

However, that is not the result I got. Instead, Chrome displays the 4 PDD green image at the 155×114 size.

After a series of tests, I can only conclude that the browser is using a DPR > 1 as a trigger to use any srcset URL for a PDD !== 1, and displaying it scaled down proportionately to the PDD value. I’m not sure if this is a bug, or if this was the intent of the specification, but it is certainly not what I expected.

UPDATE: Chrome developers responded to my bug report and kindly pointed out two bits of information. One, the specification calls for loading the next higher PDD image if an exact match isn’t available. Two, the image that is displayed will be scaled according to the specified PDD value. This accurately describes the behavior I was seeing, so it is definitely not a bug.
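Per that clarification, the spec’s density selection can be sketched as a small function (my own reading of the behavior, with hypothetical names): pick the smallest pixel density descriptor that is at least the device’s DPR, and fall back to the largest available if none qualifies.

```javascript
// Given the pixel density descriptors from a srcset and the device's
// DPR, return the descriptor whose image the browser should load:
// the smallest density >= DPR, else the largest density available.
function pickDensity(densities, dpr) {
  var atLeast = densities.filter(function (d) { return d >= dpr; });
  if (atLeast.length) return Math.min.apply(null, atLeast);
  return Math.max.apply(null, densities);
}

pickDensity([1, 2, 4], 2); // exact match -> 2
pickDensity([1, 4], 2);    // next higher -> 4 (the behavior described above)
```

This matches what I observed: on a 2 DPR device with only 1x and 4x candidates, the 4x image loads, scaled down by its descriptor.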

on the Edge

February 19, 2014

REST is a pretty simple architecture, and the majority of its moving parts are well understood. Its nature aligns nicely with the stateless HTTP protocol and with limited server-side resources. When it comes to simple web resources, REST is the standard, and it looks like everyone agrees with that. A few problems and questions arise when one attempts to apply this simple principle to a complex web document.

Current web documents are assembled from multiple resources, and it is not uncommon to find an HTML page that forces the browser to execute hundreds of requests. We tend to say that this is bad architecture or that the page needs optimization, but the reality is that a useful document or interface is simply a complex structure of many, many resources.

To improve client-side performance we combine, minify and compress every resource we can; Google even multiplexes TCP connections in a shiny new protocol. Still, the separation of resources forces request fragmentation.

In most cases, fragmentation is considered a small price to pay for nice cacheability and an order-of-magnitude gain in server-side performance. Still, there are requirements like SEO, authentication, and inter-resource dependencies where fragmentation reaches unmaintainable levels. In such situations we usually reach instinctively for the well-known server-side solution: monolithic documents dynamically generated by an ever-growing server farm. This might patch the problem at hand, but it breaks REST and, more painfully, busts the cache.

Interestingly enough, the solution is in the REST definition itself. To cite the Wikipedia article: “… Layered system: A client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way. …” We can combine, verify and swap resources on the fly without invoking any server-side logic, or at least any application-server logic.

Edge Side Includes (ESI) can solve most request fragmentation and provide elegant yet scalable solutions to search engine optimization, inter-resource dependencies and simple authentication.

Let’s examine a use case where we have a simple resource, an HTML document, but our business logic demands that it be available only to clients with a proper API key. Historically we would set up some server-side authentication, which would work, but as the number of requests increases, so does our server load. Worse, we would create a bottleneck in our architecture, because URLs like that are completely uncacheable; our total response time would increase and client-side performance would degrade.

If we implement ESI (Akamai flavor here) we can create a document that contains something like :

<esi:eval src="/check-key?api=$(QUERY_STRING{api})" dca="none"/>
<esi:choose>
 <esi:when test="$(checkResult) == 'OK'">
   <esi:include src="/locked/our-not-so-secured-doc.html"/>
 </esi:when>
 <esi:otherwise>
   <esi:include src="/bad-api-key.html" ttl="365d" />
 </esi:otherwise>
</esi:choose>

This snippet is self-explanatory, but the idea behind it is a pretty powerful one.

With ESI we have REST with its simplicity, server-side scalability and client-side performance. SEO with _escaped_fragment_ should be a breeze, and simple auth per asset shouldn’t kill the client.

One might ask whether the edge server won’t just become another application server (aka bottleneck), but the simplicity of the logic available should rule out “apps on the edge”.

The one real problem with ESI is that the implementations are proprietary to the CDNs and there is no real specification or standard. I suspect that as REST becomes the standard architecture for web and mobile, the specifications will solidify and open-source implementations will appear.

Cache JSON(P) calls client side

June 10, 2013

Prerequisites

This article assumes the reader knows the basics of AngularJS. It shows how the cache logic can be written in JavaScript, while the UI rendering is done with AngularJS. A non-Angular reader can still read through the article and pick up the logic from the code. I leave it up to you.

Introduction

In the real world, we have a CMS (Content Management System) to assemble our page from modules, and each module functions independently. The modules are developed by independent developers, and the page is assembled by, probably, a different user altogether. AJAX calls are meant to enhance the user experience, but given that each module functions independently, AJAX calls tend to be duplicated on the page, probably by different modules. This post shows a way to cache such AJAX calls with the help of a jQuery promise and a JavaScript object.

jQuery Promises

As the name suggests, a jQuery Promise is jQuery’s promise that your callbacks will be called once the underlying operation completes. The promise is a plain JavaScript object and can be passed around like a ball to any method you want, any number of times you want. For more, read here.
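The same idea works with any promise implementation. Here is a plain-JS sketch (native Promises rather than jQuery’s, with hypothetical names) of the caching trick this article builds on: the first call for a key creates the promise, and later calls get the same object back, so the underlying request runs only once.

```javascript
// Promise cache: one entry per URL.
var promiseCache = {};
var callCount = 0;

// Stand-in for a real request function like $.ajax / $http.jsonp.
function fakeAjax(url) {
  callCount++;
  return Promise.resolve({ url: url, data: 'payload' });
}

// Return the cached promise for a URL, creating it on first use.
function cachedCall(url) {
  if (!promiseCache[url]) {
    promiseCache[url] = fakeAjax(url);
  }
  return promiseCache[url];
}

// Two modules asking for the same URL share one request:
cachedCall('/api/weather');
cachedCall('/api/weather');
// callCount === 1; both callers can still attach .then() handlers.
```

Because a resolved promise replays its value to every handler attached later, callers never need to know whether the request was fresh or cached.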

Details

Now that you have an idea of what we are going to do, let me take you through each step of the process.

Constructing a deferred-object cache

I will try not to include AngularJS code in the sample, but in some places it is unavoidable. Assuming that “command” is part of the URL and “params” is the parameter key-value map, here is a snapshot of constructing a cache map.

var cacheStorage = {};

function getCacheKey(command, params) {
    var paramStr = command + '-';
    if(params) {
        var keys = [];
        for(var key in params) keys.push(key);
        var sortedKeys = keys.sort();
        for(var count=0; count < sortedKeys.length; count++) {
            var sKey = sortedKeys[count];
            paramStr += (sKey + '-' + params[sKey] + (count < sortedKeys.length-1 ? '-' : ''));
        }
    }
    return paramStr;
};

var ret = {
    get : function(command, params) {
        var paramKey = getCacheKey(command, params);
        var cachedObj;
        if(paramKey.length > 0) {
            cachedObj = cacheStorage[paramKey];
        }
        if($rootScope.debug) {
            $log.log(paramKey + " => " + (cachedObj ? 'hit' : 'undefined'));
        }
        return cachedObj;
    },

    put : function(command, params, deferredObj) {
        var paramKey = getCacheKey(command, params);
        if(paramKey.length > 0) {
            cacheStorage[paramKey] = deferredObj;
        }
    }
};

return ret;

Explanation: If you know Angular, you probably know about $log and $rootScope. If not, just assume these are variables injected by the Angular API. The cache forms a key and saves the object in the cache map under that key. We sort the params before forming the key because we do not want to duplicate the same object just because the caller passed params in a different order.
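A quick way to see why the sorting matters (a standalone version of getCacheKey above, with the logging stripped out):

```javascript
// Standalone getCacheKey: command plus sorted key-value pairs.
// Sorting guarantees the same key regardless of parameter order.
function getCacheKey(command, params) {
  var key = command + '-';
  if (params) {
    var names = Object.keys(params).sort();
    key += names.map(function (n) { return n + '-' + params[n]; }).join('-');
  }
  return key;
}

getCacheKey('obs', { locid: 30339, units: 'e' });
getCacheKey('obs', { units: 'e', locid: 30339 });
// both produce "obs-locid-30339-units-e"
```

Without the sort, the two calls above would produce different keys and the same response would be fetched and cached twice.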

Data Source Client

Now that our cache is ready, we need to implement a client which uses this cache and can be an interface for all the modules on the page. The requirement for the client is to provide a generic interface for all calls to a particular website, because we wrote the cache store for a single domain. If multiple domains are involved, it is a small change to the cache storage to include the domain name, or the complete URL, in the key.

var url_defaults = { key: $rootScope.key, cb: "JSON_CALLBACK" };

function doDS2Cmd( command, params ) {
    if(!params || !params.locid) {
        if(!params) { params={};}
        params.locid = $routeParams.locId;
    }
    var cachedObj = dsCacheStore.get(command, params);
    if(cachedObj) {
        return {
            'deferredObj' : cachedObj,
            'fromCache' : true
        };
    }
    var url = $rootScope.wxdata_server + "/" + command + "/" + params.locid + ".js";
    var deferredObj = $http.jsonp(url, { params: angular.extend( {}, url_defaults, params ) } );

    deferredObj.success(function (data, status) {
            if($rootScope.debug) {
                $log.log(command + ": " + JSON.stringify(data));
            }
        })
        .error(function (data, status) {
            $log.error("error $http failed with " + status + " for " + url);
        });

    dsCacheStore.put(command, params, deferredObj);
    return {
        'deferredObj' : deferredObj,
        'fromCache' : false
    };
};

var ret = {
    executeCommand : function(command, params) {
        return doDS2Cmd(command, params);
    }
};

return ret;

Explanation: We provide an interface with just one public method, executeCommand, which executes a JSONP call. The client first tries to read from the cache; on a miss, it creates a promise object via var deferredObj = $http.jsonp(url, { params: angular.extend( {}, url_defaults, params ) } );. Consider this the Angular equivalent of jQuery’s $.ajax(). The promise is then stored in the cache, so the next time we get a cache hit, we get the same promise object back. Since you always get a promise object, your module can always call .success on it every time it executes. If the call has already completed, your .success callback is invoked immediately; otherwise it waits. Here is the trick: since you are not creating a new promise, no new AJAX call is made. Instead, the existing promise object serves the response from the original call, however many times you want.

Sample Module Usage

Here is an example of how to call from a module.

var commandOutput = ds2Client.executeCommand(callObj.call, callObj.params);
var fromCache = commandOutput.fromCache;
commandOutput.deferredObj.success(function(response) {
     .....
});

NOTE: The JSONP call is used for demo purposes. Other protocols also work; you just have to change $http.jsonp to $http.get.

Here is a complete demo (the link works only from the TWC network). For out-of-network readers, see a non-TWC demo here: non-twc demo

Conclusion

What did we just do? We learned a bit about jQuery promises; how to implement a cache store that holds promises keyed by URL and params; how to implement a client API which makes use of the cache store and provides a public interface for AJAX calls; and how to write a module that makes use of the client.

WXForecast Prototype

June 4, 2013

Prerequisites

The following post assumes the reader has basic knowledge of AngularJS and has worked through the basics. If not, I would suggest you read through what Angular is and its basic tags at Egghead IO or on the Angular site.

Introduction

In this post, we will implement a complex weather-reporting view with multiple pieces of functionality such as Google Maps integration, reverse geocoding using Google APIs, etc. The scope of this document is to show how to integrate all these APIs together in an Angular page, not to fully explain the internals of the APIs.

Details

What do we need to achieve a data representation as shown (link to demo)? We need hourly weather, daily weather, narration, sunrise, sunset, Google Charts, Google Maps, Google geo APIs, type-ahead, etc.

Aggregation data

$resource is a way to get a JSON data object with a simple syntax like object.getMethod(). However, $http is even more flexible with its promise methods, and $resource is most useful when performing CRUD operations. The advantage of a resource is that an operation on a $resource first returns an empty object and later fills it with the AJAX output once the call completes. This is specifically useful when you assign the resource output directly to a UI model. However, it is not particularly useful when you have to do some processing on top of the AJAX call. I have used $resource in this demo just to show its usage.

wxModule.factory("mobagg", ['$http', '$routeParams', '$resource', function($http, $routeParams, $resource) {
    return $resource("http://wxdata.weather.com/wxdata/mobile/mobagg/:locID.js",
        {
            cb:'JSON_CALLBACK', locID:'@id'
        },
        {
            getAggregatedInfo: {method:'JSONP', params:{"key" : "2227ef4c-dfa4-11e0-80d5-0022198344f4", "hours" : "48"}, isArray: true}
        }
    );
}]);

The same can be written in $http using

function doDS2Cmd( cmd, params ) {
    var url = $rootScope.wxdata_server + cmd + "/" + params.locid + ".js";
    return $http.jsonp(url, { params: angular.extend( {}, url_defaults, params ) } )
      .success(function (data, status) {
        if($rootScope.debug) {
          $log.log(cmd + ": " + JSON.stringify(data));
        }
      })
      .error(function (data, status) {
        $log.error("error $http failed with " + status + " for " + url);
      });
}

// params contain key, locid.
var deferredObj = doDS2Cmd( 'mobagg', params );

The output of the call looks like below:

[Screenshot: sample mobagg JSON response]

Routes

The routes are going to be either location key based or latlong:

var wxForecastModule = angular.module('wxforecast', ['ngResource', 'ui', 'ui.bootstrap', 'google-maps', 'googlechart.directives']).config(['$routeProvider','$locationProvider', function($routeProvider, $locationProvider) {
    $routeProvider.
        when('/:locId', {templateUrl: 'partials/skeleton.html'}).
        when('/:lat/:lng', {templateUrl: 'partials/skeleton.html'}).
        otherwise({redirectTo: '/30339'});
}]);

Page Content Partial

<div class="container" ng-controller="ForecastController">
    <h3 class="page-header">WX Forecast Prototype</h3>
    <div class="span12 nomargin-left">
        <div ng-controller="AlertController">
            <alert ng-repeat="alert in alerts" type="alert.type" close="closeAlert($index)">{{alert.msg}}</alert>
        </div>
    </div>
    <div class="well span7 nomargin-left">
        <div>Map with Weather Layer.</div>
        <google-map center="center" draggable="true" zoom="zoom" markers="markers" mark-click="true" fit="false" latitude="latitude" longitude="longitude" class="angular-google-map ng-isolate-scope ng-scope" style="position: relative; background-color: rgb(229, 227, 223); overflow: hidden; -webkit-transform: translateZ(0);"></google-map>
        <div ng-show="address">Exact Location Assumed: {{address}}</div>
        <span ng-repeat="place in places">{{place}}<span ng-hide="$last"> &gt; </span></span>
    </div>
    <div class="span4">
        <div ng-controller="TypeaheadController" class="ta">
            <input type="text" ng-model="selected" typeahead="location as location.displayName for location in searchDS2($viewValue)" ng-change="update()" ui-keypress="{enter:'directLoad($event)'}" placeholder="Search location or enter zip..." />
        </div>
        <a><span tooltip-html-unsafe="{{tooltipString}}"><img src="img/locicon.png" width="40px" height="40px" ng-click="getCurrentLocation()" style="padding-bottom: 10px; float: right;"  /></span></a>
    </div>
    <div class="span4">
        <div class="forecastimg" ng-show="nowWxIcon"><img ng-src="http://s.imwx.com/v.20120328.084208/img/wxicon/120/{{nowWxIcon}}.png" height="180" width="180" alt="Rain Shower" class="wx-weather-icon"></div>
        <div class="header">{{hiradObs.temp}}<sup>&deg;<span class="wx-unit">F</span></sup></div>
    </div>
    <div class="span11">
        <table>
            <tbody ng-repeat="dailyForecast in dailyForecasts" class="span10 modal-header wxrow" ng-init="dayText = ['Today', 'Tomorrow']" ng-click="dailyForecast.isCollapsed = !dailyForecast.isCollapsed">
                <tr>
                    <td class="span2"><span ng-show="$index > 1">{{getDate($index) | date:'MMM dd'}}</span><span ng-show="$index <= 1">{{dayText[$index]}}</span></td>
                    <td class="span7">{{dailyForecast.narration.phrase}}</td>
                    <td class="span2"><div ng-show="dailyForecast.maxTemp">{{dailyForecast.maxTemp}}<sup>&deg;F</sup> <span class="icon-arrow-up"></span></div> <div>{{dailyForecast.minTemp}}<sup>&deg;F</sup> <span class="icon-arrow-down"></span></div></td>
                </tr>
                <tr>
                    <td colspan="3">
                        <div collapse="dailyForecast.isCollapsed">
                            <div class="well wxdetails">
                                <div ng-controller="ChartDataController" ng-show="isHourlyDataAvailable">
                                    <div google-chart chart="chart" style="{{chart.cssStyle}}"/>
                                </div>
                                <div>
                                    <div class="span4">Sunrise: {{getDateFromEpoch(dailyForecast.sunData.rise) | date:'hh:mm a'}}</div>
                                    <div class="span4">Sunset: {{getDateFromEpoch(dailyForecast.sunData.set) | date:'hh:mm a'}}</div>
                                </div>
                            </div>
                        </div>
                    </td>
                </tr>
            </tbody>
        </table>
    </div>
</div>

As you can see from the partial, the page is quite a collection: Google Maps, a Google Chart for each daily row, sunrise/sunset data, narration data, etc. The population JavaScript is quite simple: we get the mobagg response and map it to the appropriate models.

mobagg.getAggregatedInfo({locID : locId}, function(aggdata) {
            if(aggdata && aggdata[0]) {
                $scope.aggInfo = aggdata[0];
                $scope.weatherAlerts = $scope.aggInfo.WeatherAlerts;
                $scope.hiradObs = $scope.aggInfo.HiradObservation;
                ....
            }
});

Google Maps

Google Maps is included via the angular-google-maps plugin available on the internet. However, I had to make some modifications to enhance the map, such as a weather layer, click traversal, etc. The Angular directive “google-maps” creates a DOM element inside the directive and hands that element to Google Maps for map rendering.

// Create our model
var _m = new MapModel(angular.extend(opts, {
      container: element[0],
      center: new google.maps.LatLng(scope.center.latitude, scope.center.longitude),
      draggable: attrs.draggable == "true",
      zoom: scope.zoom
}));

The scope is initialized with the default zoom and lat/long we provide. We then update the lat/long upon getting values from the mobagg call, and the $watch listeners update the map with the latest center lat/long values.

The address resolution in Google Maps is amazing functionality and can provide an almost exact address.

(new google.maps.Geocoder()).geocode({latLng: latLng}, function(resp) {
        if (resp[0]) {
              var bits = [];
              for (var i = 0, I = resp[0].address_components.length; i < I; ++i) {
                    var component = resp[0].address_components[i];
                    if ($scope.contains(component.types, 'political')) {
                        bits.push(component.long_name);
                    }
                }
                $scope.places = bits;
                $scope.address = resp[0].formatted_address;
                $scope.$digest();
        }
});

Google Charts

Google Charts are provided by the angular-google-chart plugin available on the internet. Configuring the chart is slightly complicated because the data is passed to the Charts API on each ngRepeat iteration. Hence, we have to manually update a few attributes for each chart’s data, and we have a separate ChartDataController which populates each one.

<div ng-controller="ChartDataController" ng-show="isHourlyDataAvailable">
    <div google-chart chart="chart" style="{{chart.cssStyle}}"/>
</div>

Weather Alerts

The alerts are shown in the top bar at the block level and use Bootstrap’s alert functionality.

<div ng-controller="AlertController">
    <alert ng-repeat="alert in alerts" type="alert.type" close="closeAlert($index)">{{alert.msg}}</alert>
</div>
....
angular.module('wxforecast').controller('AlertController', function($scope, $http) {
    $scope.alerts = [];

    $scope.$watch('weatherAlerts', function(newValue, oldValue) {
        $scope.alerts = [];
        angular.forEach($scope.weatherAlerts, function(weatherAlert) {
            $scope.alerts.push({'type' : (weatherAlert.severity == 1 ? 'error' : 'warning'), 'msg' : weatherAlert.description, 'closeable' : false});
        });
    });

    $scope.closeAlert = function(index) {
        $scope.alerts.splice(index, 1);
    };
});

Conclusion

So, we have just seen how to integrate several APIs in an Angular page, make it more responsive, and end up with cleaner code. Along the way we also saw a few code examples of how to do certain things, like creating an Angular alerts bar, a Google Chart, etc.

Improving Icons for Site Performance

December 20, 2012

Once upon a time (before 2007 and Steve Souders’ rise to fame), designers crafted unique icon sets for the Web and published them as individual, stand-alone images. Developers then added these images to Web pages, one icon per IMG tag. As the number of unique icons on a page increased, the page got slower: it took longer both to download the assets and to render and display the whole page. Part of the slowness was due to browsers not being able to request more than 2-4 items simultaneously from a single hostname, and part was due to slower Internet connection speeds. It didn’t help that most sites were not yet using CDNs to bring content closer to the end user, and didn’t yet cache images properly or effectively.

The first step taken by sites to address the issue was to create single images containing all of the icons, rather than one image per icon. By using CSS to display only a slice of the image containing the appropriate icon in the background of an HTML element, it meant that only one image needed to be downloaded no matter how many unique icons appeared on the page. A single image containing many constituent sub-images was termed a “sprite,” after the video game programming technique of placing many views of a character into a single graphic file. Since all commonly-used browsers now supported CSS positionable backgrounds, this technique was viable. However, there were some serious drawbacks:

  • icons could not be moved around in the image (or re-designed if the size changed) without having to modify CSS as well
  • large buffers needed to be placed around images depending on how much of it needed to be displayed
  • transparent icons were initially limited to 8-bit GIF images because some browsers did not support 24-bit PNGs with alpha transparency (remember the ugly IE 6 Web Behaviors hack with memory leaks?)
  • different variant sprites were needed if the icons were a different color or size
  • sprites tended to get really wide as the height was sometimes fixed, so new icons had to be added to the end of the image

Still, there were big performance gains to be had when used properly, and this was the only solution available at the time. We used this technique sporadically beginning in 2007, and everywhere as a matter of policy by 2010.

The next promising technology to be supported by most browsers was inline (or embedded) images. This technique serializes binary resources like images into a base64-encoded string, which can then be referenced directly in place of a URL. The string generated by this technique can be used both in CSS, and directly in an image tag:

<img width="109" height="184" src="data:image/png;base64,iVBORw0KGgoAAA...">

By using this technique, we gained the advantages of sprites with fewer server requests.
On the other hand, inline images are more difficult to manage and maintain, and if not done right, the size of CSS files can grow dramatically. Ideally, image references in CSS are resolved and converted to inline images at build time. We used this technique in 2011 when we built our new mobile web site targeted to smartphones, since opening new connections is so expensive (in terms of network latency) over cell radios and mobile browsers download fewer simultaneous resources in parallel.
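A build-time conversion step like the one described can be sketched in Node (a hypothetical helper, not our actual build tool): read the image bytes and serialize them into a base64 data URI that can replace a url(...) reference in CSS.

```javascript
// Serialize raw image bytes into a base64 data URI.
function toDataUri(bytes, mimeType) {
  return 'data:' + mimeType + ';base64,' + Buffer.from(bytes).toString('base64');
}

// In a real build step you would read the file from disk,
// e.g. fs.readFileSync('icon.png'). Here, the first four
// bytes of the PNG magic number stand in for an image:
var uri = toDataUri(Buffer.from([0x89, 0x50, 0x4e, 0x47]), 'image/png');
// uri begins with "data:image/png;base64,"
```

The resulting string drops straight into a CSS background-image or an IMG src attribute, as in the tag shown above.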

One other fascinating approach we’ve studied is inline vector graphics via SVG in IMG tags. By using vector-based graphics, we could scale imagery like icons to almost any size without loss in quality. The drawbacks are that the colors couldn’t be altered by CSS, and that older IE browsers don’t support it. In fact, even polyfills would require us to maintain two separate sets of imagery.

What are the alternatives to using sprites? One technology we’ve found is Icon Fonts. This is a cross-browser solution supported by all of the major browsers on all platforms. It involves creating a font file in several different binary formats that contains the icons you need to use. Each browser only loads the font file in the format that it supports. This gives us the ability to scale the icons since fonts are scalable. It also allows us to style the fonts using standard CSS.

So, how does the file size of Icon Fonts compare to a similar sprite-based solution? Consider the icons in the following image from our recently-launched Video Player page.
[Image: sample icon font output]
There are nine icons in use. We have two sizes, one for desktop users and a larger one for tablet users. We also have two states for each icon, normal and hover. This requires us to place 36 separate icon images into a sprite. By keeping things a single color and compressing the image, we can get the file size down to around 8 KB. The largest icon font file is the WOFF format, at 3 KB.

To use icon fonts instead of a sprite, you still need to edit both the page (to apply the right class), and the CSS file (to create the selector rules). However, you no longer need to calculate positioning since the icon is now mapped to a character instead of x and y coordinates. It is easy to add new icons to the font since you just map the new icon to a new character. You are not likely to ever need to change the character an icon is mapped to.

To implement icon fonts, we followed the directions outlined at IcoMoon. It allows us to upload our own icons or use those already provided. It maps the icons to a special section of Unicode that is not used by any existing alphabet, so there is no chance that the icon would accidentally show up naturally in any translation of our site. For accessibility, the icons also have a corresponding label that explains the purpose of the icon’s hyperlink. One additional word of caution here: don’t forget to modify your Web server configuration to serve the font files with the proper MIME types, and to add the Access-Control-Allow-Origin header to enable cross domain font file requests.

Using icon fonts for a scalable, customizable solution seems to meet all of our needs.
It works on all supported browsers. It can be easily edited without changing existing markup or CSS. The only issues that keep this from being a perfect solution are that it requires four different files to be edited and published, and that it doesn’t allow icons of more than one color. Hopefully SVG will mature enough to be thoroughly supported, or some other solution will come along like true vector-based images.
