
Bookmarklet to download complete Google+ albums to Picasa #

PicasaWeb albums have an option to download the entire album to Picasa, which comes in quite handy when you'd like to have a complete archive of the pictures taken at an event. However, PicasaWeb URLs now redirect to the album's view in Google+, which doesn't expose the equivalent command. There is a workaround (append noredirect=1 to the PicasaWeb URL), but doing that by hand each time is rather involved. As an easier alternative, here's a bookmarklet:

Download to Picasa

If you invoke it on a Google+ photo album page, it'll try to launch Picasa and download that album (unless the owner has disabled that option).

The pretty-printed version of the bookmarklet script is:

(function(){
  var ALBUM_URL_RE =
      new RegExp('https://plus\\.google\\.com/photos/(\\d+)/albums/(\\d+)[^?]*(\\?.+)?');
  var match = ALBUM_URL_RE.exec(location.href);
  if (match) {
    location.href = 'picasa://downloadfeed/?url=' +
        'https%3A%2F%2Fpicasaweb.google.com%2Fdata%2Ffeed%2Fback_compat%2Fuser%2F' +
        match[1] + '%2Falbumid%2F' + match[2] + '%3F' +
        (match[3] ? encodeURIComponent(match[3].substring(1) + '&') : '') +
        'kind%3Dphoto%26alt%3Drss%26imgdl%3D1';
  } else {
    alert('Oops, not on a Google+ album page?');
  }
})();
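
For example, for a hypothetical album at https://plus.google.com/photos/123/albums/456 (made-up user and album IDs, no query parameters), the bookmarklet navigates to:

picasa://downloadfeed/?url=https%3A%2F%2Fpicasaweb.google.com%2Fdata%2Ffeed%2Fback_compat%2Fuser%2F123%2Falbumid%2F456%3Fkind%3Dphoto%26alt%3Drss%26imgdl%3D1

That is, the URL-encoded back_compat PicasaWeb feed for the album, with imgdl=1 presumably being what triggers the image downloading.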

The picasa://downloadfeed URL scheme was observed by watching, in the Network tab of Chrome's DevTools, what happens when "Download to Picasa" is selected on a PicasaWeb page. The attempt at preserving query parameters is so that the authkey token and the like are passed along (it's unclear whether that actually does anything, though).

Google Reader Play Bookmarklet #

It occurred to me that it'd be pretty easy to make a bookmarklet for the recently-launched Google Reader Play:

PlayThis!

All it does is take the current page's feed and display it in the Play UI. You may find this useful when discovering a new photo-heavy site (or anything else with a feed, like a Flickr user page), or when you want to share, star or like an item from a site you're not subscribed to (you can also use the regular Reader subscribe bookmarklet for that).
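
The bookmarklet itself is a one-liner along these lines (a sketch rather than the exact code; in particular, the fragment format used to hand the page's URL to Play is a guess on my part):

javascript:location.href='http://www.google.com/reader/play/#url/'+encodeURIComponent(location.href)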

P.S. If you're reading this in a feed/social content reader, you'll most likely have to view the original post, as the javascript: URL on the bookmarklet link will no doubt get sanitized away.

Autosaving Form Data #

Due to my penchant for beta browsers and/or my clumsy fingers, I have on more than one occasion lost text I had been laboring over in a form. Given that the modus operandi of most blogging and wiki applications is to have the user dump his/her thoughts into a <textarea>, this problem is presumably getting more and more common. The usual solution is to do all input in a more capable application or, at the very least, to periodically do a select-all-copy on your text. Unfortunately this is all rather tedious and clumsy. OmniWeb is the only browser that attempts to do something about this, but I happen to use Firefox most of the day, so that doesn't help me.

A Firefox extension would be one way to approach this, but I believe I have come up with a more browser-agnostic solution. Simply drag these two bookmarklets to your toolbar:

Autosave and Load

Clicking the first favelet will save all form data in the currently visible page (and continue to do so automatically every 30 seconds). Conversely, the second one will restore saved data for the current page.
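
To give a flavor of how the saving half works, here's a simplified sketch (the identifiers and the exact serialization are illustrative only; the real favelet handles all form fields, not just textareas):

(function() {
  function saveForms() {
    // gather up the current <textarea> contents...
    var areas = document.getElementsByTagName('textarea');
    var chunks = [];
    for (var i = 0; i < areas.length; i++) {
      chunks.push(encodeURIComponent(areas[i].value));
    }
    // ...and stash them in a cookie keyed by the page's path ('|' is a safe
    // separator, since encodeURIComponent escapes it inside values)
    document.cookie = 'PAS_' + encodeURIComponent(location.pathname) + '=' +
        chunks.join('|') + ';path=/';
  }
  saveForms();
  setInterval(saveForms, 30 * 1000); // keep saving every 30 seconds
})();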

It sounds perfect, and it almost is. The primary caveat is that cookies are used for storage, which limits us to ~4K of data. If you're going to be writing your NaNoWriMo opus in a <textarea>, you may need to look elsewhere. One would think that segmenting the form data into 4K chunks would work, but unfortunately most web servers (Apache included) refuse to accept Cookie request headers longer than 4K, even if they are made up of multiple cookies. Solutions to this (besides the obvious workaround of not using cookies to begin with) are welcome.

"Autosave" must be invoked at least once per editing session, which is also not quite ideal. If you are able to edit the HTML behind that page's form, you can insert the following snippet into its source and have autosaving happen at loading time:

<script type="text/javascript">PAS_mode='save'</script>
<script src="http://persistent.info/autosave/favelet.js" type="text/javascript"></script>

This is a hosted favelet, and thus should automatically update itself if/when I find bugs (or work around the cookie length issue). This also means that it will not work in Safari (as of version 1.2). Mozilla and MSIE behave well, though getting it to operate in both took some work. For example, both browsers change JavaScript behavior based on DOCTYPE and/or MIME type. Fun stuff.

Pseudo-Local W3C Validator Favelet #

My to-do list has had an "install a local copy of the W3C validator" item on it for quite a while, and when I came across an article detailing how to do just that, I thought I was all set. However, my excitement faded shortly after I saw the steps required for the installation: a CVS checkout, replacing some files with Mac OS X-specific ones, Apache config file editing, two libraries to download and install, and fourteen Perl modules to set up. I resigned myself to an hour or two of drudgery and went through the list. I eventually stumbled when trying to set up the OpenSP library: I didn't feel like installing Fink just for this one thing, and my attempt at building it by hand didn't quite work out (the libtool that was included wasn't the right one for OS X).

Rather than force myself to go through with the rest, I decided that perhaps an alternative approach was worth investigating. Instead of running my local (behind-the-firewall) documents through my own validator, I could transfer the file to another server, and then point the regular W3C validator at its (publicly-visible) temporary URL. Doing this in the form of a favelet/bookmarklet seemed ideal, since it would provide one-click access and be more portable than a shell script. This favelet would then invoke a CGI script on my server; a hybrid design in the style of my feed subscription favelet.

The first thing that must be done is to get the current page's source code. Initially, an approach based on the innerHTML DOM property seemed reasonable. However, it turned out that this property is dynamically generated based on the current DOM tree, and thus not necessarily reflective of the original source. Furthermore, it's hard to get at the outermost processing instructions in XHTML documents, so the source wouldn't be complete anyway. Therefore, I decided to use an XMLHttpRequest to re-fetch the page and then get its source via the responseText property. Unfortunately at this stage Internet Explorer support had to be dropped, since its equivalent ActiveX object didn't seem to want to run from within a favelet (clicking on the identical javascript: link in a webpage worked fine).
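
In code, the re-fetching step boils down to something like this (a sketch of the approach, not the favelet's exact source):

var request = new XMLHttpRequest();
request.open('GET', location.href, false); // synchronous, so responseText is ready immediately
request.send(null);
var source = request.responseText;         // the page's original markup, processing instructions included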

With the source thus in hand, I had to find a way to get it on the server. The XMLHttpRequest object also supports the PUT HTTP method, but apparently Safari only supports GET and POST. In any case, the object's use is restricted for security reasons, so it would've been difficult to make any requests to a server different from the one hosting the page that was to be validated. However, the other, more old-school way of communicating with servers, via a form, was still available. Therefore the favelet creates a form object on the fly, sets the value of a hidden item to the source, and then submits it to the CGI script. The script saves the source to a temporary file and then hands it to the validator.
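
The hand-off step looks roughly like the following (the script URL and field name here are hypothetical):

var form = document.createElement('form');
form.method = 'POST';
form.action = 'http://example.com/cgi-bin/validate'; // stand-in for the real CGI script's URL
var field = document.createElement('input');
field.type = 'hidden';
field.name = 'source'; // illustrative field name
field.value = source;  // the responseText fetched above
form.appendChild(field);
document.body.appendChild(form);
form.submit();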

The validator favelet is thus ready to be dragged to your toolbar. The original, formatted and commented source code is also available, as is the server-side script that receives its data and passes it to the W3C validator. The development process of this favelet was made much more pleasant due to the generator that can be used to transform human-readable code into something that's ready to be pasted in a javascript: URL.

Full disclosure: For some reason, perhaps because it was 1 AM, it didn't occur to me to use the POST method to submit the source. Instead I devised a (convoluted) method that would take the source, divvy it up into ~2K chunks, and then create a series of iframes that would have as their src attribute the CGI script that took the current chunk as its query string (i.e. via the GET method). Since there was no guarantee that all of the chunks would arrive in order, I had to keep track of them and eventually join them to reconstruct the original source (à la IP packet fragmentation). You'd think that I would have realized the folly (i.e. difficulty in proportion to benefit) of this approach early on, but no, I pursued it until it worked 95% of the time (modulo some timing issues in Firefox). Only when I was researching this entry did I realize that the form/POST approach was much faster (with the iframe approach, each chunk required a new HTTP connection and a fork/exec on the server), and ended up implementing it in 15 minutes with half the code. Chalk one up to learning from your mistakes (hopefully).

Bookmarklet/Favelet Generator #

I don't know what prolific authors of favelets/bookmarklets do when they code them, but I find the process rather annoying. Having to strip out all newlines, and preferably all extraneous whitespace (for the sake of shorter URLs), gets tedious when revising a script that's more than a few lines long.

As a result, I've come up with the following Perl script that transmogrifies a readable JavaScript source file into a single compressed line, ready to be used as a favelet:

#!/usr/bin/perl -w

use strict;

my $bookmarklet = 'javascript:';
my $inComment = 0;

while (<>)
{
  chomp;
  s/^\s*(.*?)\s*$/$1/;        # whitespace preceding/succeeding a line (non-greedy, so trailing spaces are actually stripped)
  s/([^:])\/\/.*$/$1/;        # single-line comments ([^:] is to ignore double slashes in URLs)
  s/^\/\/.*$//;               # whole-line single-line comments
  s/\s*([;=(<>!:,+])\s*/$1/g; # whitespace around operators
  s/"/'/g;                    # prevent double quotes from terminating a href value early
  
  # multi-line comments
  if ($inComment)
  {
    if (s/^.*\*\/\s*//) # comment is ending
    {
      $inComment = 0;
    }
  }
  elsif (s/\s*\/\*.*$//) # comment is beginning
  {
    $inComment = 1;
    $bookmarklet .= $_; # we have to append what we have so far, since
                        # the append below won't get triggered
  }
  
  $bookmarklet .= $_ if (!$inComment);
}

print $bookmarklet;

print STDERR "Bookmarklet length: " . length($bookmarklet) . "\n";

The contents of your .js file should obviously serve as the standard input of the script, and for maximum efficiency (on Mac OS X) its output should be piped to pbcopy so that it can be pasted into a browser's location bar for easy testing. The length of the favelet is also printed, since some browsers impose a maximum URL length.
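
As a quick worked example, feeding the script this source:

var x = 1; // a counter
if (x > 0)
{
  alert('hi');
}

produces the following single line on standard output:

javascript:var x=1;if(x>0){alert('hi');}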

Not quite a favelet IDE, but it certainly makes life easier.

NetNewsWire Subscription Favelet #

As I was pondering the implications of the Safari RSS announcement (my thoughts pretty much mirror Brent's), I realized that a web browser/feed aggregator combo does have one thing going for it. Subscribing to a site right now requires at best a drag-and-drop of the site's URL into NetNewsWire, and at worst looking through the page (or its source) for the (possibly orange) RSS/XML/Atom icon. The blue logo that Safari RSS adds to the location bar makes it much more obvious when a site has a feed, and this is functionality that NNW doesn't have. It could perhaps be added to current versions of Safari with an InputManager extension à la PithHelmet or Saft. Clicking on it would subscribe to the site's feed using NetNewsWire, or whatever aggregator was registered as the handler for the feed:// protocol. However, this requires a level of Cocoa-based hacking that I'm not familiar with and am not prepared to learn just yet.

As a hack-ish alternative, a favelet could be used to at least allow one-click subscribing to a site. Such favelets/bookmarklets already exist, but they rely on RSS Auto-Discovery. The problem is that it's a relatively brittle approach: it fails completely if a site doesn't use auto-discovery, even if the site has a feed. Mark Pilgrim's Feed Finder tries much harder to find the RSS or Atom feeds that relate to a page, while doing elegant things like respecting a site's robots.txt file. Re-implementing its functionality in JavaScript didn't seem like an appealing option, so instead I used a hybrid approach. This simple CGI script lives on my server and calls Mark's tool (I am a total Python newbie, thus there may be better ways of achieving this):

import os, feedfinder

if os.environ.has_key('QUERY_STRING'):
  uri = os.environ['QUERY_STRING']
  try:
    feeds = feedfinder.getFeeds(uri)
    if len(feeds) > 0:
      print "Location: feed://%s\n\n" % (feeds[0])
    else:
      print 'Content-type: text/html\n\nNo feed(s) found at the given URL.\n' 
  except IOError:
    print 'Content-type: text/html\n\nCould not access given URL.\n'
else:
  print 'Content-type: text/html\n\nNo URL was given.\n'

The Subscriber favelet then simply invokes it with the current page's URL. If a feed is found, the redirect within the CGI script triggers the feed:// protocol handler thus invoking the aggregator. Such a favelet has the advantage of being easily updatable; if the auto-discovery standard were to change or if Mark figures out even more clever ways of determining if a site has a feed, then all its uses would immediately benefit.
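
Since the script uses QUERY_STRING verbatim, the favelet itself can be as simple as something like this (the script's location here is hypothetical):

javascript:location.href='http://example.com/cgi-bin/subscribe.py?'+location.href

(The page's URL is passed unencoded because the script reads the raw query string; pages whose URLs themselves contain query strings would need more care.)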

Google Search Terms Highlighting for All Browsers #

One of the neat features of the Google Toolbar and of Google's cached pages is that they highlight the search terms that brought you there. Unfortunately the former is only available for MSIE, while the latter can destroy formatting, prevent scripts from working and present (slightly) outdated content. Inspired by the Highlighter Favelet (as pointed to by Bill Bumgarner), I've put together a slightly enhanced version that checks for a Google referrer and, if one is present, highlights the search terms.

The short version: drag the Highlighter to your browser toolbar and click it after visiting a Google result (or any other time to specify other words to highlight).

The long version (expanded, more readable code follows): It uses the same color scheme as Google. I have preserved the scrolling to the first item found that the original favelet implemented, but it seems to overshoot (in Safari). I have only tested it in Safari and Firefox, but it is simple enough that it ought to work in all JavaScript/DOM-supporting browsers. Quoted terms aren't handled properly (e.g. "Mihai Parparita" will be treated as two words, each with a quote attached on the left/right side). It doesn't skip over common words that Google ignores (e.g. "is", "the"). Words are replaced blindly, so scripts may still stop working, button names may have HTML in them, etc. All of these are fixable problems, except that coding favelets beyond a certain length gets very annoying.

var items = new Array();

if (document.referrer.indexOf('google') != -1 &&
    document.referrer.indexOf('q=') != -1)
{
  var queryTermsRegExp = new RegExp('q=([^&]+)');
  if (queryTermsRegExp.test(document.referrer))
  {
    // the terms arrive URL-encoded, with '+' standing in for spaces
    items = unescape(RegExp.$1).split('+');
  }
}

if (items.length == 0)
{
  var searchWords = prompt('Enter one or more words to highlight:', '');
  if (searchWords)
  {
    items = searchWords.split(' ');
  }
}

var colors = new Array('#ffff66', '#a0ffff', '#99ff99', '#ff9999', '#ff66ff',
                       '#880000', '#00aa00', '#886800', '#004699', '#990099');

var b = document.body.innerHTML;

for (var i=0; i < items.length; i++)
{
  var replacementRegEx = new RegExp('(' + items[i] + ')','gi');
  
  b = b.replace(replacementRegEx,'<span style=\'background-color:' +
                                 colors[i % colors.length] +
                                 ';\'>$1</span>');
}

var scrollToRegEx = new RegExp('(' + items[0] + ')','i');
b = b.replace(scrollToRegEx, '<span id=\'scrollToHilight\'>$1</span>');

void(document.body.innerHTML = b);

var scrollToTarget = document.getElementById('scrollToHilight');
if (scrollToTarget)
{
  window.scrollTo(0, scrollToTarget.offsetTop);
}