
Of Sitemaps and SEO

March 5, 2015


Pre-SEO

In a world where search drives site traffic, internet search giants like Google deliver relevant content at a rate of about 40,000 search queries per second. Even with so much internet traffic, especially news and events traffic, being driven by search, weather.com had not focused on the benefits of SEO or a broader SEO strategy. It boiled down to the fact that there were more important projects, and with such a large footprint and deep domain expertise, SEO was not a priority. Before late 2013, weather.com did not have any sitemaps for search engines. The focus changed as we realized we were losing traffic to competitors and, over time, our search rankings began to slip. It became a priority for The Weather Channel/weather.com to analyze where we could improve our SEO strategy. One area of focus was to develop a sitemap generator that would allow us to handle our disparate types of data and traffic, keep news relevant, and keep The Weather Channel atop the organic rankings of the major search engines.

SEO Focused

We had multiple concerns at the outset when approaching the task of creating sitemaps:

  • What tool(s) would be the best to use?
  • How often would they need to be updated?
  • How large were they going to be, and would that affect decisions?
  • How best to perform maintenance and testing?

After careful consideration of the questions above, weather.com decided to go with NodeJS and Grunt for sitemap file generation. First, this fit perfectly with the developer skill sets at weather.com: the development team is JavaScript-heavy, with plenty of people to maintain the system after the initial build. Grunt also allows for multiple tasks, which meant we could run small tasks geared at news and video; these smaller tasks could add new data to the sitemap at 15-minute intervals. Initially we did not know how large our sitemap footprint would be. Grunt is a perfect system for these types of automated tasks, and the Grunt plugin/NPM community already contains so many great open source packages to leverage that the majority of the work was done before the job started. After setting up a few simple cron jobs, the application tasks can run on their own or be initiated on demand.

After a quick first pass, and some fleshing out by others, it was generating sitemaps for ~40 web properties. It took hours to run, and the uncompressed sitemaps took up ~1TB of storage (~120GB compressed). Not too bad for a pilot program of this type. This sitemap generator ran consistently for our website through most of 2014.

In late 2013 the company decided to redesign and rebuild weather.com with more advanced technologies, Angular and Drupal. The project was called Reboot. The overarching goals of Reboot were to enhance the experience and increase performance: information that is relevant to the user is front and center, and the site is cleaner and more responsive, with an emphasis on information and data. When the Reboot project reached a point where we needed to generate quicker, more targeted sitemaps, we revisited the original code base with a new team. The obvious choice was to use Grunt to separate out all our tasks. The previous system queried the database asynchronously, held every item in memory until complete, and piped the result through gzip, which is inefficient. In our current system each job has its own Grunt task that is run in sequence, and we have found a substantial increase in performance. Our previous monthly task used to take hours to complete and produced a whopping 120GB of compressed output; today's monthly process takes less than 20 minutes to run. Plus, we added tasks that improve overall testing and output.

Each generator first cleans out its files using grunt-contrib-clean (https://github.com/gruntjs/grunt-contrib-clean). This module makes it easy to clear out any path or file to start fresh each time the build runs. To keep each clean process specific to its task we created multiple clean targets:

clean: {
    build: {
        src: configFile.default_options.paths
    },
    temp: {
        src: 'temp'
    },
    fifteenmin: {
        src: ['dist/**/news/**.*', 'dist/**/video/**.*']
    },
    daily: {
        src: ['dist/**/static/**.*', 'dist/**/collections/**.*']
    },
    monthly: {
        src: ['dist/**/**.*']
    },
    urltest: {
        src: ['temp/url_tests/*.json']
    },
    dist: {
        src: appConfig.outputFolder
    },
    index: {
        src: ['dist/**/sitemap.xml']
    }
},

The build processes follow, running the specific sitemap tasks. In the case of weather.com, each type of sitemap page is generated at a different interval. Every fifteen minutes the news and video builds run, updating the overall sitemap and keeping timely information fresh; these listings are also posted at the top of the main sitemap.xml page. A daily build pulls information for static pages and collections of content added that day. Monthly, the sitemap generator runs the location-based pages. This is the largest task: we match URL patterns, like ‘weather.com/weather/today…’, with multiple location types from around the world, generating over 300 sitemap pages of around 48,000 entries each. This is where the Grunt tasks shine. By splitting up each process we avoid trying to do too much in a single process, which increases overall speed.
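
To illustrate how this splitting can be wired together, here is a hypothetical Gruntfile excerpt. The sitemap-* generation task names are invented for the sketch; only the clean targets correspond to the configuration shown above.

// Hypothetical Gruntfile excerpt (illustrative task names, not the production build)
module.exports = function (grunt) {
    grunt.initConfig({
        // clean: { ... } and compress: { ... } as shown in this post
    });

    grunt.loadNpmTasks('grunt-contrib-clean');
    grunt.loadNpmTasks('grunt-contrib-compress');

    // Each interval owns a composite task: clear out only its own files,
    // then regenerate them. Running `grunt fifteenmin` from cron, or by hand,
    // executes the steps in sequence.
    grunt.registerTask('fifteenmin', ['clean:fifteenmin', 'sitemap-news', 'sitemap-video']);
    grunt.registerTask('daily', ['clean:daily', 'sitemap-static', 'sitemap-collections']);
    grunt.registerTask('monthly', ['clean:monthly', 'sitemap-locations']);
};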

After the custom build tasks have run and we’ve created our .xml sitemap files, we gzip them all using grunt-contrib-compress (https://github.com/gruntjs/grunt-contrib-compress).

compress: {
    main: {
        options: {
            mode: 'gzip',
            replace: true
        },
        files: [{
            expand: true,
            src: ['dist/*/*/*.xml'],
            dest: '',
            ext: '.xml.gz'
        }]
    }
},

The compression pushes the file sizes down to a much more manageable level, making the transfer process faster.

Following the gzip compression task, our build creates an index file containing links to all the other sitemap files. This is done by scanning the folders for files created by the individual build tasks (https://www.npmjs.org/package/grunt-fileindex).
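
As a rough illustration of the idea (this is not grunt-fileindex’s actual configuration), a plain Node sketch that scans the output folder and writes a sitemap index could look like the following; the paths and base URL are assumptions:

// Hypothetical sketch: write sitemap.xml as an index of the generated files.
var fs = require('fs');
var path = require('path');

var outDir = 'dist';                              // assumed output folder
var baseUrl = 'http://www.weather.com/sitemaps/'; // assumed public base URL

// recursively collect every file under dir
function walk(dir) {
    return fs.readdirSync(dir).reduce(function (files, name) {
        var full = path.join(dir, name);
        return fs.statSync(full).isDirectory()
            ? files.concat(walk(full))
            : files.concat(full);
    }, []);
}

var entries = walk(outDir)
    .filter(function (f) { return /\.xml\.gz$/.test(f); })
    .map(function (f) {
        var loc = baseUrl + path.relative(outDir, f).replace(/\\/g, '/');
        return '  <sitemap><loc>' + loc + '</loc></sitemap>';
    });

fs.writeFileSync(path.join(outDir, 'sitemap.xml'),
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    entries.join('\n') + '\n</sitemapindex>\n');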

During the build processes the script takes a random sample of the URLs returned. After all the files have been created, a final Grunt task pings the sample URLs to see whether they exist and how long they take to load. Any failures are logged as errors.
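
A minimal sketch of that verification step might look like this; the sample URL is illustrative, and the real task runs inside Grunt rather than as a bare script:

// Hypothetical sketch: request each sampled URL, time it, and log failures.
var http = require('http');

var sampleUrls = [
    'http://www.weather.com/weather/today/30339' // illustrative URL only
];

sampleUrls.forEach(function (url) {
    var start = Date.now();
    http.get(url, function (res) {
        res.resume(); // drain the body so the socket is released
        var ms = Date.now() - start;
        if (res.statusCode >= 400) {
            console.error('ERROR ' + res.statusCode + ' ' + url + ' (' + ms + 'ms)');
        } else {
            console.log('OK ' + res.statusCode + ' ' + url + ' (' + ms + 'ms)');
        }
    }).on('error', function (err) {
        console.error('ERROR ' + url + ': ' + err.message);
    });
});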

Modularizing the sitemap processes shortens the creation cycle. It keeps relevant, newsworthy information updated at a 15-minute rate, allows fine tuning, and saves time by not reprocessing unchanged information. The initial issue of file size has been mostly eliminated, and there are now few of the in-memory problems we had with the initial sitemap project. Updates and testing are much easier now. Updates are simpler: we can update each sitemap process individually, as well as the prerequisite processes for each generated sitemap. Testing is now built in: where the original sitemap generator relied on a QA test run long after the build, or on results from Google’s Webmaster Tools, today we get results during the process and are notified when there is an error. For speed we limited the URL tests to a sample set; besides, with a link list as long as weather.com’s footprint, we expect only a limited number of errors and 404s.

All in all, the learning process of sitemap creation taught us to think specifically about each task at hand and to limit the scope of each job. We also learned a great deal about task automation with Grunt and NodeJS. Compared to the pre-SEO days we are seeing much improvement in our SEO, with our relevant content showing up higher in search engine results.

Future plans for this project are to open source the parts of the system that are not weather.com specific and to provide a road map for other developers and site owners on how to generate SEO-friendly sitemaps for small-to-large-footprint sites, keeping tasks specific to what is needed for immediacy while supporting larger maintenance tasks. It is also a proof of concept for automating a sitemap for a system that does not have sitemap generation built in, or for a system with a complex set of URLs and update timeframes.

Our current sitemaps

Responsive Image Letterboxing

May 28, 2014

Introduction

This document shows how to do responsive letterboxing of images. Letterboxing is needed when the image to be displayed on a webpage may not adhere to the expected ratio. For example, if a webpage expects a 16:9 image but the image loaded is 200×300, it could break the entire layout. What we want to achieve is: scale to 100% like an image of the expected ratio would, but keep the height under control and letterbox the image on the left/right. Something like below: the yellow area denotes the expected image ratio, while the received image fits into the container without distorting its own original ratio.

[Screenshot: a yellow container marking the expected 16:9 area, with the narrower received image letterboxed in its center]

Responsive Letterboxing

With responsiveness coming into the picture, we cannot give our containers a hardcoded width and height. A typical responsive pattern is for an element to take 100% (or any percentage) of its parent element, and so on. Hence, the first step in being responsive is a flexible width. What about height? The image height will always grow or shrink based on the width. This brings us to our golden steps for getting a responsive image with letterboxing:
1) Figuring out the current width of the parent container.
2) Determine mathematically, the “would-be” height of a “perfect” image (say 16:9) when placed inside such a container and stretched to 100%. The formula is: (width * 9) / 16
3) Set the above calculated number as max height on the container element.
4) To make it responsive, listen for window resize and repeat steps 1, 2 and 3.

Please find the demo here.

JS/CSS Required Changes

Setting max height:

var imgUrl = $('.url-input').val(), $container = $(".image-container");
function setMaxHeight() {
     // expected height of a 16:9 image: (width * 9) / 16 = 0.5625 * width
     var containerWidth = $container.width(), expectedImageHeight = 0.5625 * containerWidth;
     $container.css('max-height', expectedImageHeight + "px");
}

Listen for window resize and set max height again:

$(window).on('resize', function() {
     setMaxHeight();
});

Styles needed to display per our letterboxing requirements:

.image-container {
    max-width: 980px;
    width: 100%;
    height: 100%;
    background-color: #ffff00;
}

.image-container img {
    height: 100%;
    margin-left: auto;
    margin-right: auto;
    display: block;
}

Conclusion

That is all, folks. With a little help from JS, we can design a responsive website with any image ratios. Again, please find the demo link here.

May 5, 2014

I host many different Drupal sites on the same box, and so that I have lots of flexibility in my Drupal platforms, I generally use a different webroot for each site (only one site per webroot).

Adding cron tasks via drush gets pretty cumbersome with each new site I create. Normally I would create a shell script along the lines of:

cd /var/www/web1/htdocs
drush cron
cd /var/www/web2/htdocs
drush cron
cd /var/www/web3/htdocs
drush cron


but that gets old pretty quickly. Here’s my answer: a single script that runs them all. It loops through all dirs in /var/www/*/htdocs (my hosting directory structure pattern), checks sites/default/settings.php in each to see if it’s a Drupal installation, and executes sudo -u <username> drush cron in each dir.

On my box, each web root has its own user and group, hence the sudo for file permissions. I added the following to /etc/sudoers to support this:

 ALL ALL=(ALL) SETENV: NOPASSWD: /usr/bin/drush --quiet cron

and the following to my crontab (vixie cron), as I put this script in ~/bin:

 */15 * * * * $HOME/bin/drupal-run-cron

Additionally, I wanted lots of logging when running it by hand, so the script checks whether it is being run interactively and sets a variable to decide whether to give more debugging information.

#!/bin/sh
# drupal-run-cron
# process all of the cron jobs for all of the drupal web sites on this box
# written by Joseph Cheek, joseph@cheek.com, 30 Apr 2014
# released into the public domain.

# set DEBUG if i'm running by hand
tty -s && DEBUG=1 || DEBUG=

# create a temp file to save drush output
# use $TMPDIR if set, otherwise /tmp
[ -z $TMPDIR ] && TMPDIR=/tmp
TMPFILE=$(mktemp $TMPDIR/$(basename $0).XXXXXX)

# rotate through all of my web dirs
for a in /var/www/*/htdocs; do

# save web site name to tmp file; debug, show on stdout too
echo $(basename ${a/htdocs/}): > $TMPFILE
[ $DEBUG ] && cat $TMPFILE

# change to the relevant dir
cd $a

# if it's a drupal dir, find the owner of the web site, run drush cron,
# and save output
[ -e sites/default/settings.php ] &&
WR_OWNER=$(ls -l sites/default/settings.php | cut -d ' ' -f 3) &&
echo '(as '$WR_OWNER')' >> $TMPFILE &&
sudo -u $WR_OWNER COLUMNS=80 drush --quiet cron >> $TMPFILE 2>&1

# add a few newlines to make the output more readable
echo -e \\n >> $TMPFILE

# debug? show the output, minus the web site name that's already been shown
[ $DEBUG ] && tail -n +2 $TMPFILE

# not debug? show the entire output if any errors or warnings were shown
[ ! $DEBUG ] && egrep -qs '(warning|error)' $TMPFILE && cat $TMPFILE

# delete the temp file
rm -f $TMPFILE

done

Kudos to drush.ws for some helpful information.

Chrome 34 and Responsive Images

April 22, 2014

With the release of Chrome 34, Google now supports the srcset attribute of the IMG tag. This is one necessary part of responsive images, but is nowhere near all the functionality you would need. After playing around with it, I’ve found some unexpected behavior. I’m not sure if it’s a bug or a feature, but I reported it anyway, just in case.

The general idea is to use the standard IMG tag, but instruct the browser to load an alternate image if the device has a DPR (window.devicePixelRatio) > 1, and if the srcset attribute lists a URL for a matching PDD (pixel density descriptor). Here’s an example where I want the browser to load a gray 155×114 image if srcset is not supported, a blue 155×114 image if srcset is supported but the DPR is 1, and a red 310×228 image at 155×114 size if srcset is supported and the DPR is 2. Even though I’m not explicitly specifying the dimensions of the image, all three variants are displayed at a width of 155 and a height of 114.

<img alt="Sample One" src="http://dummyimage.com/155x114/eee/000.gif&text=155+x+114"
srcset="http://dummyimage.com/155x114/009/fff.gif&text=155+x+114 1x,
http://dummyimage.com/310x228/f00/000.gif&text=310+x+228 2x">

So far, so good. So what happens if I add a green 620×456 image with a PDD of 4? I would expect any device with a DPR of 4 to display the green image and a device with a DPR of 2 to continue to display the red image.

<img alt="Sample Two" src="http://dummyimage.com/155x114/eee/000.gif&text=155+x+114"
srcset="http://dummyimage.com/155x114/009/fff.gif&text=155+x+114 1x,
http://dummyimage.com/310x228/f00/000.gif&text=310+x+228 2x,
http://dummyimage.com/620x456/0f0/fff.gif&text=620+x+456 4x">

I don’t have a device with a 4 DPR screen, but my retina device with a 2 DPR continues to display the red image.

So what should happen if there isn’t an exact match between a device’s DPR and the tag’s PDD? If I’m reading the specification correctly, I would expect a 2 DPR device to fall back to the 1 PDD src and display the blue image.

<img alt="Sample Three" src="http://dummyimage.com/155x114/eee/000.gif&text=155+x+114"
srcset="http://dummyimage.com/155x114/009/fff.gif&text=155+x+114 1x,
http://dummyimage.com/620x456/0f0/fff.gif&text=620+x+456 4x">

However, that is not the result I got. Instead, Chrome displays the 4 PDD green image at the 155×114 size.

After a series of tests, I can only conclude that the browser is using a DPR > 1 as a trigger to use any srcset URL for a PDD !== 1, and displaying it scaled down proportionately to the PDD value. I’m not sure if this is a bug, or if this was the intent of the specification, but it is certainly not what I expected.

UPDATE: Chrome developers responded to my bug report and kindly pointed out two bits of information. One, the specification calls for loading the next higher PDD image if an exact match isn’t available. Two, the image that is displayed will be scaled according to the specified PDD value. This accurately describes the behavior I was seeing, so it is definitely not a bug.
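
To make the corrected understanding concrete, here is a small sketch (illustrative JavaScript, not browser source) of the selection rule the update describes: pick the smallest PDD greater than or equal to the device’s DPR, fall back to the largest available, and scale the displayed image down by the chosen PDD.

// candidates: parsed from srcset, e.g. [{ url: 'blue.gif', pdd: 1 }, { url: 'green.gif', pdd: 4 }]
function pickCandidate(candidates, dpr) {
    var sorted = candidates.slice().sort(function (a, b) { return a.pdd - b.pdd; });
    for (var i = 0; i < sorted.length; i++) {
        if (sorted[i].pdd >= dpr) { return sorted[i]; } // next higher (or exact) PDD
    }
    return sorted[sorted.length - 1]; // no higher PDD available: use the largest
}

// Sample Three on a 2-DPR device: the 4x green image wins and is displayed
// at its natural size divided by 4, i.e. 620/4 = 155 pixels wide.
pickCandidate([{ url: 'blue.gif', pdd: 1 }, { url: 'green.gif', pdd: 4 }], 2);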

on the Edge

February 19, 2014

REST is a pretty simple architecture and, for the majority of the moving parts within it, well understood. Its nature aligns nicely with the stateless HTTP protocol and with limited server-side resources. When it comes to simple web resources, REST is the standard, and it looks like everyone agrees with that. A few problems and questions arise, though, when one attempts to apply this simple principle to a complex web document.

Current web documents are assembled from multiple resources, and it is not uncommon to find an HTML page that forces the browser to execute hundreds of requests. We tend to say that this is bad architecture or that the page needs optimization, but the reality is that a useful document or interface is simply a complex structure of many, many resources.

To improve client-side performance we combine, minify and compress every resource we can; Google even multiplexes TCP connections in a shiny new protocol. Still, separation of resources forces request fragmentation.

In most cases, fragmentation is considered a small price to pay for nice cacheability and an order-of-magnitude gain in server-side performance. Still, there are requirements like SEO, authentication, inter-resource dependencies and so on where fragmentation reaches unmaintainable levels. Usually in such situations we instinctively reach for the well-known server-side solution: monolithic documents dynamically generated by an ever-growing server farm. This might patch the problem at hand, but it breaks REST and, more painfully, busts the cache.

Interestingly enough, the solution is in the REST definition itself. To cite the Wikipedia article: “… Layered system: A client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way. …” We can combine, verify and swap resources on the fly without invoking any server-side logic, or at least any application-server logic.

Edge Side Includes (ESI) can solve most request fragmentation and provide elegant yet scalable solutions to search engine optimization, inter-resource dependencies and simple authentication.

Let’s examine a use case where we have a simple resource, an HTML document, but our business logic demands that it be available only to clients with a proper API key. Historically we would set up some server-side authentication, which would work, but as the number of requests increases so does our server load. Not only that, but we would create a bottleneck in our architecture, because URLs like that are completely uncacheable, so our total response time will increase and client-side performance will degrade.

If we implement ESI (Akamai’s flavor here), we can create a document that contains something like:

<esi:eval src="/check-key?api=$(QUERY_STRING{api})" dca="none"/>
<esi:choose>
 <esi:when test="$(checkResult) == 'OK'">
   <esi:include src="/locked/our-not-so-secured-doc.html"/>
 </esi:when>
 <esi:otherwise>
   <esi:include src="/bad-api-key.html" ttl="365d" />
 </esi:otherwise>
</esi:choose>

This snippet is self-explanatory, but the idea behind it is a powerful one.

With ESI we keep REST with its simplicity, server-side scalability and client-side performance. SEO with _escaped_fragment_ should be a breeze, and simple per-asset auth shouldn’t kill the client.

One might ask whether the edge server will become just another application server (a.k.a. a bottleneck), but the simplicity of the logic available should rule out “apps on the edge”.

The one real problem with ESI is that implementations are proprietary to the CDNs and there is no real specification or standard. I suspect that as REST becomes the standard architecture for web and mobile, the specifications will solidify and open source implementations will appear.

Cache JSON(P) calls client side

June 10, 2013

Prerequisites

This article assumes the reader knows the basics of AngularJS. It shows how the cache logic can be written in JavaScript, though the UI rendering is done using AngularJS. A non-Angular reader can still read through the article and pick up the logic bits from the code. I leave it up to you.

Introduction

In the real world, we have a CMS (Content Management System) to assemble our pages from modules, and each module functions independently. The modules are developed by independent developers and the page is assembled by, probably, a different user altogether. AJAX calls are always meant to enhance the user experience, but given that each module functions independently, there tends to be duplication of AJAX calls on the page, probably by different modules. This post shows a way to cache such AJAX calls with the help of jQuery promises and a JavaScript object.

jQuery Promises

As the name suggests, a jQuery Promise is a literal promise made by jQuery that callbacks attached to the object will be invoked after its completion. The object is just like any JavaScript object and can be passed around like a ball to any method you want, any number of times you want. For more, read here.
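
A minimal illustration of why this matters for caching (the endpoint is hypothetical): one AJAX call is made, and the same promise serves any number of consumers, even ones that attach their callbacks after the response has already arrived.

// One request; the returned jqXHR promise is shared.
var promise = $.ajax('/data/forecast.json'); // hypothetical endpoint

promise.done(function (data) { console.log('module one:', data); });

// ...later, possibly from a different module on the page. If the request has
// already completed, this callback fires immediately with the same response.
promise.done(function (data) { console.log('module two:', data); });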

Details

Now that you have an idea of what we are going to do, let me take you through each step of the process.

Constructing a deferred object cache

I will try not to include AngularJS code in the sample, but in some places it is unavoidable. Assuming that “command” is part of the URL and “params” is the parameter key-value map, here is a snapshot of constructing a cache map.

var cacheStorage = {};

function getCacheKey(command, params) {
    var paramStr = command + '-';
    if(params) {
        var keys = [];
        for(var key in params) keys.push(key);
        var sortedKeys = keys.sort();
        for(var count=0; count < sortedKeys.length; count++) {
            var sKey = sortedKeys[count];
            paramStr += (sKey + '-' + params[sKey] + (count < sortedKeys.length-1 ? '-' : ''));
        }
    }
    return paramStr;
};

var ret = {
    get : function(command, params) {
        var paramKey = getCacheKey(command, params);
        var cachedObj;
        if(paramKey.length > 0) {
            cachedObj = cacheStorage[paramKey];
        }
        if($rootScope.debug) {
            $log.log(paramKey + " => " + (cachedObj ? 'hit' : 'undefined'));
        }
        return cachedObj;
    },

    put : function(command, params, deferredObj) {
        var paramKey = getCacheKey(command, params);
        if(paramKey.length > 0) {
            cacheStorage[paramKey] = deferredObj;
        }
    }
};

return ret;

Explanation: If you know Angular, you probably know about $log and $rootScope. If not, just assume these are variables injected by the Angular API. The cache forms a key and saves the object in the cache map under that key. We sort the params before forming the key because we do not want to duplicate the same object just because the user passed the params in a different order.
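
For example, because of the sort, these two calls produce the same key and therefore share one cache slot (the command and params here are just illustrative):

getCacheKey('obs', { locid: '30339', units: 'e' }); // "obs-locid-30339-units-e"
getCacheKey('obs', { units: 'e', locid: '30339' }); // same key, same cache slot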

Data Source Client

Now that our cache is ready, we need to implement a client which uses this cache and can be an interface for all the modules on the page. The requirement for the client is to provide a generic interface for all calls to a particular website, because we wrote the cache store for a single domain. If multiple domains are involved, it is only a matter of editing the cache storage so the key includes the domain name or the complete URL.

var url_defaults = { key: $rootScope.key, cb: "JSON_CALLBACK" };

function doDS2Cmd( command, params ) {
    if(!params || !params.locid) {
        if(!params) { params={};}
        params.locid = $routeParams.locId;
    }
    var cachedObj = dsCacheStore.get(command, params);
    if(cachedObj) {
        return {
            'deferredObj' : cachedObj,
            'fromCache' : true
        };
    }
    var url = $rootScope.wxdata_server + "/" + command + "/" + params.locid + ".js";
    var deferredObj = $http.jsonp(url, { params: angular.extend( {}, url_defaults, params ) } );

    deferredObj.success(function (data, status) {
            if($rootScope.debug) {
                $log.log(command + ": " + JSON.stringify(data));
            }
        })
        .error(function (data, status) {
            $log.error("error $http failed with " + status + " for " + url);
        });

    dsCacheStore.put(command, params, deferredObj);
    return {
        'deferredObj' : deferredObj,
        'fromCache' : false
    };
};

var ret = {
    executeCommand : function(command, params) {
        return doDS2Cmd(command, params);
    }
};

return ret;

Explanation: We provide an interface with just one public method, executeCommand, which executes a JSONP call. The client first tries to read from the cache; if nothing is found, it creates a promise object via var deferredObj = $http.jsonp(url, { params: angular.extend( {}, url_defaults, params ) } );. Consider this the jQuery equivalent of $.ajax(). The promise is then stored in the cache. Next time, when we get a hit from the cache, we get the promise object back. Since you always get a promise object, your module can always call .success on the promise every time it executes. If the call has already completed, your .success callback is called immediately; otherwise it waits. Here is the trick: since you are not creating a new promise, the AJAX call is NOT made. Instead, it works on the existing promise object and gets the response from the promise, however many times you want.

Sample Module Usage

Here is an example of how to call from a module.

var commandOutput = ds2Client.executeCommand(callObj.call, callObj.params);
var fromCache = commandOutput.fromCache;
commandOutput.deferredObj.success(function(response) {
     .....
});

NOTE: The JSONP call is used for demo purposes. Other protocols also work; you just have to change $http.jsonp to $http.get.

Here is a complete demo (the link works only from the TWC network). For out-of-network readers, see a non-TWC demo here: non-twc demo

Conclusion

What did we just do? We learned a bit about jQuery promises; how to implement a cache store that stores jQuery promises for a given URL and params; how to implement a client API which makes use of the cache store and provides a public interface for AJAX calls; and how to write a module that makes use of the client.

WXForecast Prototype

June 4, 2013

Prerequisites

The following post assumes the reader has basic knowledge of AngularJS and has worked through the basics. If not, I would suggest you read through what Angular is and its basic tags here: Egghead IO or the Angular site.

Introduction

In this post, we will be implementing a complex weather reporting view with multiple functionalities like Google Maps integration, reverse geocoding using Google APIs, etc. The scope of this document is to show how to integrate all these APIs together in an Angular page, not to fully explain the internals of each API.

Details

What do we need to achieve a data representation as shown (link to demo)? We need hourly weather, daily weather, narration, sunrise, sunset, Google Charts, Google Maps, Google geo APIs, type-ahead, etc.

Aggregated data

$resource is a way to get a JSON data object with a simple syntax like object.getMethod(). However, $http is even more flexible with its promise methods, and $resource is more useful when performing CRUD operations. The advantage of using a resource is that an operation inside $resource first returns an empty object, then fills it in with the AJAX output once the call completes. This is specifically useful when you assign the resource output directly to a UI model. However, it is not particularly useful when you have to do some operations on top of the AJAX call. I have used $resource in this demo just to show its usage.

wxModule.factory("mobagg", ['$http', '$routeParams', '$resource', function($http, $routeParams, $resource) {
    return $resource("http://wxdata.weather.com/wxdata/mobile/mobagg/:locID.js",
        {
            cb:'JSON_CALLBACK', locID:'@id'
        },
        {
            getAggregatedInfo: {method:'JSONP', params:{"key" : "2227ef4c-dfa4-11e0-80d5-0022198344f4", "hours" : "48"}, isArray: true}
        }
    );
}]);

The same can be written in $http using

function doDS2Cmd( cmd, params ) {
    var url = $rootScope.wxdata_server + cmd + "/" + params.locid + ".js";
    return $http.jsonp(url, { params: angular.extend( {}, url_defaults, params ) } )
      .success(function (data, status) {
        if($rootScope.debug) {
          $log.log(cmd + ": " + JSON.stringify(data));
        }
      })
      .error(function (data, status) {
        $log.error("error $http failed with " + status + " for " + url);
      });
}

// params contain key, locid.
var deferredObj = doDS2Cmd( 'mobagg', params );

The output of the call looks like below:

[Screenshot: sample JSON output of the mobagg call]

Routes

The routes are going to be either location-key based or lat/long based:

var wxForecastModule = angular.module('wxforecast', ['ngResource', 'ui', 'ui.bootstrap', 'google-maps', 'googlechart.directives']).config(['$routeProvider','$locationProvider', function($routeProvider, $locationProvider) {
    $routeProvider.
        when('/:locId', {templateUrl: 'partials/skeleton.html'}).
        when('/:lat/:lng', {templateUrl: 'partials/skeleton.html'}).
        otherwise({redirectTo: '/30339'});
}]);

Page Content Partial

<div class="container" ng-controller="ForecastController">
    <h3 class="page-header">WX Forecast Prototype</h3>
    <div class="span12 nomargin-left">
        <div ng-controller="AlertController">
            <alert ng-repeat="alert in alerts" type="alert.type" close="closeAlert($index)">{{alert.msg}}</alert>
        </div>
    </div>
    <div class="well span7 nomargin-left">
        <div>Map with Weather Layer.</div>
        <google-map center="center" draggable="true" zoom="zoom" markers="markers" mark-click="true" fit="false" latitude="latitude" longitude="longitude" class="angular-google-map ng-isolate-scope ng-scope" style="position: relative; background-color: rgb(229, 227, 223); overflow: hidden; -webkit-transform: translateZ(0);"></google-map>
        <div ng-show="address">Exact Location Assumed: {{address}}</div>
        <span ng-repeat="place in places">{{place}}<span ng-hide="$last"> &gt; </span></span>
    </div>
    <div class="span4">
        <div ng-controller="TypeaheadController" class="ta">
            <input type="text" ng-model="selected" typeahead="location as location.displayName for location in searchDS2($viewValue)" ng-change="update()" ui-keypress="{enter:'directLoad($event)'}" placeholder="Search location or enter zip..." />
        </div>
        <a><span tooltip-html-unsafe="{{tooltipString}}"><img src="img/locicon.png" width="40px" height="40px" ng-click="getCurrentLocation()" style="padding-bottom: 10px; float: right;"  /></span></a>
    </div>
    <div class="span4">
        <div class="forecastimg" ng-show="nowWxIcon"><img ng-src="http://s.imwx.com/v.20120328.084208/img/wxicon/120/{{nowWxIcon}}.png" height="180" width="180" alt="Rain Shower" class="wx-weather-icon"></div>
        <div class="header">{{hiradObs.temp}}<sup>&deg;<span class="wx-unit">F</span></sup></div>
    </div>
    <div class="span11">
        <table>
            <tbody ng-repeat="dailyForecast in dailyForecasts" class="span10 modal-header wxrow" ng-init="dayText = ['Today', 'Tomorrow']" ng-click="dailyForecast.isCollapsed = !dailyForecast.isCollapsed">
                <tr>
                    <td class="span2"><span ng-show="$index > 1">{{getDate($index) | date:'MMM dd'}}</span><span ng-show="$index <= 1">{{dayText[$index]}}</span></td>
                    <td class="span7">{{dailyForecast.narration.phrase}}</td>
                    <td class="span2"><div ng-show="dailyForecast.maxTemp">{{dailyForecast.maxTemp}}<sup>&deg;F</sup> <span class="icon-arrow-up"></span></div> <div>{{dailyForecast.minTemp}}<sup>&deg;F</sup> <span class="icon-arrow-down"></span></div></td>
                </tr>
                <tr>
                    <td colspan="3">
                        <div collapse="dailyForecast.isCollapsed">
                            <div class="well wxdetails">
                                <div ng-controller="ChartDataController" ng-show="isHourlyDataAvailable">
                                    <div google-chart chart="chart" style="{{chart.cssStyle}}"/>
                                </div>
                                <div>
                                    <div class="span4">Sunrise: {{getDateFromEpoch(dailyForecast.sunData.rise) | date:'hh:mm a'}}</div>
                                    <div class="span4">Sunrise: {{getDateFromEpoch(dailyForecast.sunData.set) | date:'hh:mm a'}}</div>
                                </div>
                            </div>
                        </div>
                    </td>
                </tr>
            </tbody>
        </table>
    </div>
</div>

As you can see from the partial, the page is quite a collection: Google Maps, a Google Chart for each daily row, sunrise/sunset data, narration data, etc. The population JavaScript is quite simple: we get the mobagg response and map it to the appropriate models.

mobagg.getAggregatedInfo({locID : locId}, function(aggdata) {
            if(aggdata && aggdata[0]) {
                $scope.aggInfo = aggdata[0];
                $scope.weatherAlerts = $scope.aggInfo.WeatherAlerts;
                $scope.hiradObs = $scope.aggInfo.HiradObservation;
                ....
            }
});

Google Maps

Google Maps is included via the angular-google-maps plugin available on the internet. However, I had to make some modifications to enhance the map, such as the weather layer, click traversal, etc. The Angular directive “google-map” creates a DOM element inside the directive and hands that element to Google Maps for map rendering.

// Create our model
var _m = new MapModel(angular.extend(opts, {
      container: element[0],
      center: new google.maps.LatLng(scope.center.latitude, scope.center.longitude),
      draggable: attrs.draggable == "true",
      zoom: scope.zoom
}));

The scope is initialized with the default zoom and lat/long we provide. However, we update the lat/long upon getting values from the mobagg call. The $watch listeners then update the map with the latest center values.
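
A minimal sketch of what that watch might look like inside the directive’s link function; here `map` stands in for the underlying google.maps.Map instance, which the real plugin wraps in its MapModel:

// Re-center the map whenever the controller updates scope.center
// (e.g. after the mobagg response supplies the real lat/long).
scope.$watch('center', function (newCenter) {
    if (newCenter && newCenter.latitude && newCenter.longitude) {
        map.setCenter(new google.maps.LatLng(newCenter.latitude, newCenter.longitude));
    }
}, true); // deep watch: center is an object, not a primitive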

Address resolution in Google Maps is amazing functionality and can provide you with an almost exact address.

(new google.maps.Geocoder()).geocode({latLng: latLng}, function(resp) {
        if (resp[0]) {
              var bits = [];
              for (var i = 0, I = resp[0].address_components.length; i < I; ++i) {
                    var component = resp[0].address_components[i];
                    if ($scope.contains(component.types, 'political')) {
                        bits.push(component.long_name);
                    }
                }
                $scope.places = bits;
                $scope.address = resp[0].formatted_address;
                $scope.$digest();
        }
});

Google Charts

Google Charts are provided by the angular-google-chart plugin available on the internet. Configuring the chart is a slightly involved process, as the data is passed to the charts API on each ngRepeat iteration. Hence we have to manually update a few attributes for each chart’s data, and we have a separate ChartDataController that populates each one; a sketch follows the markup below.

<div ng-controller="ChartDataController" ng-show="isHourlyDataAvailable">
    <div google-chart chart="chart" style="{{chart.cssStyle}}"/>
</div>
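
A sketch of what such a controller might look like, assuming the day’s hourly temperatures are available on the parent scope from ng-repeat; the field names (hourlyForecasts, hour, temp) are assumptions for the sketch, not the production data model:

angular.module('wxforecast').controller('ChartDataController', function($scope) {
    // dailyForecast comes from the enclosing ng-repeat scope
    var hourly = $scope.dailyForecast.hourlyForecasts || [];
    $scope.isHourlyDataAvailable = hourly.length > 0;

    // Build the config object consumed by the google-chart directive.
    $scope.chart = {
        type: 'LineChart',
        cssStyle: 'height:200px; width:100%;',
        data: {
            cols: [
                { id: 'hour', label: 'Hour', type: 'string' },
                { id: 'temp', label: 'Temp (F)', type: 'number' }
            ],
            rows: hourly.map(function(h) {
                return { c: [{ v: h.hour }, { v: h.temp }] };
            })
        },
        options: { legend: 'none' }
    };
});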

Weather Alerts

The alerts are shown in the top bar at the block level and use Bootstrap’s alert functionality.

<div ng-controller="AlertController">
    <alert ng-repeat="alert in alerts" type="alert.type" close="closeAlert($index)">{{alert.msg}}</alert>
</div>
....
angular.module('wxforecast').controller('AlertController', function($scope, $http) {
    $scope.alerts = [];

    $scope.$watch('weatherAlerts', function(newValue, oldValue) {
        $scope.alerts = [];
        angular.forEach($scope.weatherAlerts, function(weatherAlert) {
            $scope.alerts.push({'type' : (weatherAlert.severity == 1 ? 'error' : 'warning'), 'msg' : weatherAlert.description, 'closeable' : false});
        });
    });

    $scope.closeAlert = function(index) {
        $scope.alerts.splice(index, 1);
    };
});

Conclusion

So, we have just seen how to integrate several APIs in an Angular page, keep it responsive, and end up with cleaner code. We have also seen a few code examples along the way for things like creating an Angular alerts bar, a Google Chart, etc.
