Mar 07

Versatile RESTful APIs Beyond XML

An article I wrote has just been published over at InfoQ. It’s called Versatile RESTful APIs Beyond XML and shows how easy it can be to extend Rails’ RESTful behaviour to input and output resources not only as XML but also as JSON and Microformatted HTML.

The article builds on some posts on this blog, such as Intercepting Microformats In Rails Input, but offers a bit more context. The timing of the article fits nicely with a post on the microformats-rest list about Rails, REST and microformats, so hopefully we’ll see more discussion of these concepts over the coming weeks.

Mar 07

Rails Geo Plugins: acts_as_geocodable

acts_as_geocodable (blog entry, repository) is the newest kid on the rails geo plugin block. It actually consists of two parts: a gem called graticule, which handles the actual geocoding and the interaction with external services, and the plugin itself, which offers extensions to your models.

I like that separation. Keeping the generalised code in a gem and the rails-specific hooks in a plugin makes a lot of sense: it's much easier to use the core code in non-rails ruby apps, and having a single gem that supports multiple services allows for built-in failover should the preferred geocoder be unavailable.

Much of the functionality of the plugin is already integrated into my application, but not with quite so many options. In such cases I really enjoy installing plugins; there’s something very satisfying about going through my application deleting code.

The plugin adds two tables to your database. The first, geocodes, holds longitudes/latitudes for given addresses, while the other, geocodings, polymorphically links those geocodes to your existing models. In my case, this meant re-geocoding all the locations already in my database, but since I’m operating on a fairly small data set, that was a pretty simple case of iterating across them all and re-saving them. For those operating with very large databases, you may want to write a more sophisticated migration to handle that.
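For the record, the re-save loop was nothing fancy; it was something along these lines in script/console or a one-off rake task (Location is my model, and the plugin's callbacks do the real work, so treat this as a sketch rather than a general recipe):

```ruby
# Re-save every location so that acts_as_geocodable's save callbacks
# geocode each record and populate the new geocodes/geocodings tables.
Location.find(:all).each { |location| location.save }
```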

The trickiest thing was re-coding my search queries to use the new database. acts_as_geocodable offers a number of neat methods for running queries such as (from their documentation)

  event.distance_to "49423"

  Event.find(:all, :within => 50, :origin => "97232")

  Event.find(:nearest, :origin => "Portland, OR")

But I wanted a way to build more sophisticated searches so I could, say, limit by title and order by distance. It turns out that’s pretty easy too:

Location.find(:all, :origin => 'Grand Rapids MI',
	:conditions => ['title LIKE ?', 'My Title'], :order => 'distance')

The one place where I had a problem was when trying to use the last of the examples.

Location.find(:nearest, :origin => 'Portland, OR')

blew up with:

ActiveRecord::StatementInvalid: Mysql::Error: Incorrect parameter count in the call to native function 'RADIANS': SELECT locations.*, geocodes.*,
            (ACOS( SIN(RADIANS()) * SIN(RADIANS(geocodes.latitude)) + COS(RADIANS()) * COS(RADIANS(geocodes.latitude)) * COS(RADIANS(geocodes.longitude) - RADIANS()) ) * 3963.1676)  AS
            distance FROM locations  JOIN geocodings ON
              locations.id = geocodings.geocodable_id AND
                geocodings.geocodable_type = 'Location'
              JOIN geocodes ON geocodings.geocode_id = geocodes.id  ORDER BY distance ASC LIMIT 1

but when I used a full address (my home) in the same query, I got an appropriate result. Note the empty RADIANS() calls in that SQL: it looks as though, if the plugin fails to get a pair of co-ordinates for the specified origin, it runs the query anyway without them, and the exception results.
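Until that's handled upstream, a small wrapper keeps the exception away from users. This is a hypothetical helper of my own, not part of the plugin; ActiveRecord::StatementInvalid is simply the class raised in the error above:

```ruby
# Return nil rather than blowing up when the origin can't be geocoded.
def nearest_location_to(origin)
  Location.find(:nearest, :origin => origin)
rescue ActiveRecord::StatementInvalid
  nil
end
```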

I also found some problems when trying to use the plugin with locations outside North America, but that is a limitation of the geocoding services and not of the gem or plugin themselves. Hasten the day when enough data is open that global geocoding services can become a reality.

Working with acts_as_geocodable has so far been a very straightforward experience and has allowed me to rid my code of some pieces I’d always meant to refactor out. It’d be good to see the error I ran into handled more neatly, and perhaps an obvious API to take advantage of the failover options presented by graticule, but the plugin is still early in its life and shows a lot of promise.

Mar 07

Governmental Pipes

I’ve refrained from blogging much about Yahoo! Pipes, mainly because everyone else seemed to be. It’s definitely an interesting development, and shows how far we’ve come with open data, but also how far we still are from that really making an obvious impact for non-geeks.

Two of the more interesting pieces on the use of Pipes that I’ve seen so far are two blog entries that Tim McGhee pointed out on the govtrack list. He’s done some work using Pipes to repurpose various feeds about government activity, and they’re worth a look. Check out: Managing the volume of content from Congress and Geek Out: Mashing Yahoo! Pipes and the Congressional Record over on his blogs.

Feb 07

Relax over REST

Mark Nottingham has a good post running through a few topics on which people get needlessly caught when designing RESTful applications. If you’re new to working on RESTful application design (as many rails developers are) it’s worth checking out to save yourself needless anguish.

Thankfully for Rails developers at least some of the issues he identifies will be a little simpler than they might be for people designing systems from scratch. In particular, while there are a few URL design choices (numeric IDs, other parameters, or a hybrid? nested vs. flat?) the conventions are good and changing isn’t all that hard.

And while some sort of schema definition is important to ensure that server and client can be sure they’re speaking the same language, for those using tools like ActiveResource which take full advantage of dynamic languages, there’s a little more flexibility than there may have been before.

Feb 07

More on OpenID

It seems DHH is hopping on the OpenID bandwagon, and that the next 37signals app will allow openid-based authentication. He’s talking about releasing his code as a plugin, so maybe I won’t need to find the time.

For those following the OpenID buzz, Simon Willison’s cool things you can build with OpenID is well worth reading to begin to get a sense of the new opportunities opened up once we have unique IDs for users that map between sites. And the comments on this post at Tim Bray’s site may help people with some lingering questions.

Update: DHH has committed the code as a plugin, and Dan Webb has posted another tutorial that’s well worth a look.

Feb 07

Jumping On

Bandwagon logo

Bandwagon is a soon-to-be-launched service to help people back up their iTunes libraries. It provides online services (and, by the looks of it, desktop tools) to manage and store the backups.

They’re also offering free accounts to bloggers linking to their site, and I’d really like to try the service, so here’s my post.

I’m a little sceptical that it’ll be practical to do online backups of our main iTunes library, seeing as it’s just steamed past 175GB and our DSL connection isn’t at the high end. But I’d love to be proven wrong…

Feb 07

Input formats and content types in Rails 1.2

One feature of recent releases of Rails I hadn’t spotted before is the ability to define your own parameter parsing based on content type. I’m working on an application that will employ a RESTful API and that I hope will take its input either as standard HTTP parameters, microformatted HTML, XML, or JSON.

I don’t really want to have to write custom code within the controllers to interpret the input based on content type, so I started looking for how rails parses XML input and came across the following in the actionpack changelog:

    # Assign a new param parser to a new content type
    ActionController::Base.param_parsers['application/atom+xml'] = Proc.new do |data|
      node = REXML::Document.new(data)
      { node.root.name => node.root }
    end

    # Assign the default XmlSimple to a new content type
    ActionController::Base.param_parsers['application/backpack+xml'] = :xml_simple

Looking at the actual source code, it appears it’s implemented slightly differently, with the Mime::Type object used as the key rather than the text content type. Since the JSON content type is already defined (with a reference to the object in Mime::JSON), since JSON can (usually) be parsed as YAML, and since the :yaml symbol is a shortcut to a YAML parser, handling it transparently is almost as simple as adding:

ActionController::Base.param_parsers[Mime::JSON] = :yaml

or if we wanted to be a bit more explicit:

ActionController::Base.param_parsers[Mime::JSON] = Proc.new do |data|
  YAML.load(data)
end

to environment.rb. Building these APIs is even easier than I’d thought!
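That “usually” is worth unpacking: the :yaml shortcut works because a JSON object with a space after each colon also happens to be valid YAML. A quick check in plain Ruby, no Rails required (the payload here is just an invented example):

```ruby
require 'yaml'

# The raw POST body a client might send as application/json.
json = '{"person": {"name": "Eve", "age": 30}}'

# This is effectively what the :yaml param parser does with that body.
params = YAML.load(json)

puts params.inspect
```

The flip side is that some JSON exists that YAML parsers can’t quite swallow; compact output without spaces after the colons has been a known culprit, hence the “usually”.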

Sep 06

Bus routes on Rails

Following on from my previous entry about scraping bus route data from The Rapid’s website, and to begin to demonstrate the possibilities it opens up, I’ve set up a simple web service to provide route and stop data. It’s based on the new REST style from Edge Rails, and routes are scoped by city to allow for future expansion. To get data on Route 1, GET:


To get a list of the stops within 1.5 miles of a given longitude and latitude, GET:


Using Edge Rails, setting up the application was remarkably simple. Three models, three controllers, appropriate use of respond_to blocks, and the right entries in config/routes.rb:

map.resources :cities do |cities|
  cities.resources :stops
  cities.resources :routes
end

This was the first time I’d used nested routes, so it took a few minutes to work out the correct syntax for the link_to calls. When using nested routes like those above, you must pass first the ID of the city and then the ID of the stop or route.
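For anyone hitting the same wall, the calls end up looking something like this (the helper names assume the nested city/route/stop resources above, so treat them as illustrative rather than definitive):

```ruby
# In a view: the city comes first, then the nested record.
link_to route.name, city_route_path(@city, route)
link_to stop.name,  city_stop_url(@city, stop)
```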

I’m not making any guarantees about the long term availability of the service, but if anyone wants to make use of it, let me know and we can probably work something out. I’ll probably be making use of it myself.

Aug 06

Embedding PHP with Smart Pill

It’s always an interesting challenge to take a system you are familiar with and try to use it in an entirely new way or context. That’s what I’ve been getting with PHP of late. As more and more of my web development work moves to Rails, I’ve had the chance to work on PHP embedded within Filemaker Pro as I’ve tested and explored Scodigo’s new Smart Pill plugin.

For those of us used to programming in full-fledged languages, writing scripts and functions in Filemaker can be quite a challenge. Writing and debugging all that recursive code is a time-consuming process, and communicating with external processes isn’t really worth the work without plugins. Smart Pill changes all that by opening up the entire PHP engine (5.1.4, including many extensions) for use within Filemaker.

I can’t imagine the plugin is going to tempt many web developers over to Filemaker, but for those who are integrating with existing Filemaker systems, or who need an office database system but want to be able to make use of web services or any of the other functionality PHP allows, this could be a huge improvement.

You can see a screencast of the plugin in action, including a number of examples I worked on, over at the Filemaker Magazine site.

Feb 06

Exploring Greenbelt with Last.FM

My particular focus this year as a member of the Greenbelt web team is on finding ways to better integrate festival related content with the wider web, and then working out how to use the festival’s website as a hub for all of that information. It started out with the collage that we built using flickr photos, del.icio.us links, and blog entries around the festival last year, and the next step (the first longer term one) is integration with last.fm.

The integration is pretty simple. We have a Greenbelt group set up on last.fm that we’re encouraging festivalgoers to join. That immediately brings with it all sorts of benefits, like discussion boards, journals, and a custom radio station, but we’re then making use of last.fm’s web services to suck the listening information into the Greenbelt database once a week and produce our own chart page.
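The chart-sucking step is mostly a matter of fetching last.fm’s XML and walking it. Here’s the shape of the parsing in plain Ruby; the XML below is a simplified stand-in for the real feed, so the element names are assumptions rather than last.fm’s exact schema:

```ruby
require 'rexml/document'

# A simplified stand-in for a weekly artist chart response.
xml = <<-XML
<weeklyartistchart group="Greenbelt">
  <artist><name>Sufjan Stevens</name><playcount>42</playcount></artist>
  <artist><name>Half-handed Cloud</name><playcount>37</playcount></artist>
</weeklyartistchart>
XML

doc = REXML::Document.new(xml)

# Pull each artist's name and playcount into a plain array of hashes,
# ready to be matched against the artists table in our database.
chart = REXML::XPath.match(doc, '//artist').map do |artist|
  { :name      => artist.elements['name'].text,
    :playcount => artist.elements['playcount'].text.to_i }
end

chart.each { |entry| puts "#{entry[:name]}: #{entry[:playcount]}" }
```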

By pulling the data into our own database we’re able to do some matching between the artists festivalgoers have been listening to and the artists in our database. As we build that database out with new bookings and old archived information, it will get richer, and people’s listening habits will become not only a way to learn more about the community, but a gateway into our collective history.

There are some kinks to work out. Too many of us keep listening to artists with non-latin characters in their names, and it took me a while to get round to ensuring that was handled nicely. And we probably need a fallback so that if last.fm haven’t updated their charts by a given time, we check again, and provide people with a way to access alternative charts. But it’s yet another demonstration of how easy it is to make one site richer using another’s metadata.