My JavaScript book is out! Don't miss the opportunity to upgrade your beginner or average dev skills.

Wednesday, February 20, 2013

My Personal Github Flow

I've created many repositories in my programming history, starting from the good old Google Code, passing through Mercurial and svn, and ending up using the awesome GitHub on a daily basis.
There is something I've spotted every time I've created a new repo: I needed a way to do always the same thing ... but better each time!
The very latest case is the callerOf utility, something that small and already demanding the usual/common stuff, such as:
  • a meaningful and organized structure, instead of files in the wild
  • an easy way to test for browsers and often nodejs too
  • a simple build process per target, able to combine them all at light speed
  • a linter for those projects that could be widely adopted and linters are so annoying ... where was I ... right ...
  • thanks to the same folder structure, an already prepared .gitignore, together with the .npmignore
  • a LICENSE.txt file, in my personal experiments and libraries always MIT Style
  • a Makefile able to help me combine all these tasks
  • last, but not least, an almost fully prepared, and basic, package.json file with main info to publish
  • optionally, the usage of .travis.yml for the awesome Travis CI service

About Travis

Today I've made a donation to help those guys maintain the project. The email you get as soon as something goes wrong is a great way to be notified about problems. I've worked in many enterprise environments where this is the default, most basic configuration: be instantly notified, so you can fix ASAP or revert instantly. A free service doing this all the time for all those Open Source projects cannot be ignored, and should indeed be applauded by Developers.
All major programming languages are supported, plus it's not that difficult to configure and it's based, if node.js is supported, on the simple npm test command: awesome!
I won't tell you how much I've donated 'cause it does not matter, as long as you donate something to these good fellas, right?
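As a sketch, a minimal .travis.yml for a node.js project of that era could look like this (the version numbers are just an example):
language: node_js
node_js:
  - "0.8"
  - "0.6"
With nothing else specified, Travis CI simply runs npm install followed by npm test, which is why a meaningful test entry in package.json is all it takes.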

I Might Not Need Travis, But ...

The basic way I've configured my gitstrap (this is the silly name I've chosen for the repo with the most basic structure) forces me to build and run tests before being able to push. OK, OK ... it's not that I cannot push without building and running tests; it could simply be the README.md that was edited, and not necessarily code. But to know if my code/library is working, I necessarily need to make and, in that case, be sure that everything is green.
In a few words, the moment I push some code I am pretty sure Travis CI will still be green, but if something goes wrong with one of the node.js versions, the test, the server, whatever, really, I'll still be notified and able to react.
The classic scenario is a pull request proposed without testing: maybe it looks good, but only Travis will tell you if it really does, right? ... oh well, feel free to enforce whoever is able to edit, even online, to run tests ... if you manage :)

What I Think Is Essential

In my case these are the most basic dependencies, able to make my workflow freaking fast and robust enough too.
  • wru testing framework for node.js and the web via make test, already configured in the latter case inside a handy index.html, eventually published via gh-pages through make pages
  • polpetta, in order to automate the inclusion of the same test for both web and node.js via a ctrl+click and the make web shortcut
  • JSHint to eventually enforce the usage of a linter, through the make hint shortcut
  • UglifyJS
All the above projects can simply be included in the current folder via make dependencies, and are not distributed via npm since these are not really part of the project, except the test one, where a tiny overhead of 130Kb for the wru testing library isn't really a problem for anyone, right? :)

This Is gitstrap

Really a sort of github boilerplate for JS related projects, something already organized and ready to go, something you can simply:
curl -s https://raw.github.com/WebReflection/gitstrap/master/new >~/gitstrap && bash ~/gitstrap && rm ~/gitstrap
Following instructions here, or if you prefer a manual installation:
git clone git://github.com/WebReflection/gitstrap.git project-name
cd project-name
rm -rf .git
make dependencies
git init
git add .
git commit -m "gitstrap in"
git remote add origin [email protected]:yourname/project-name.git
git push -u origin master
After this, don't forget to update Makefile and package.json with the right name, especially if you are planning to push to npm, as well as the README.md.
I might decide to automate this procedure too pretty soon and any extra task or contribution will be more than welcome. Right now I think this is enough.

About Files And Folders

Right, this is where I explain what the heck that structure is ... let's start from the top:

build/

This folder will contain all versions of the same project, if the project wants to be compatible with node.js, the web or a generic JS engine, AMD loaders based on the define logic, and stuff.

Yes, A Different Automated File!

After all the discussion on "how should a file look to be compatible with all the mess out there", I've realized that exporting JS is really a matter of env, so if everything else is more or less the same, why on earth pollute all possible projects with that exporting nonsense? You need AMD? You get AMD ... You need node.js? You get node.js ... and the same goes for a generic env, it's really that easy!
Please note that all builds generate a .max version of the file, those I've just linked, and a minified version too, with the exception of node.js, since I don't believe in packed server side code that much :-)

index.html

This is used to generate the test page in gh-pages, so that once the pages have been generated, the test folder will contain those tests, launched through the index.html file.

Makefile

The most important thing to edit here is the name of the project and the main file, or the list of files to use. These can be different per build, if needed, or just the same for all of them. You decide!

LICENSE.txt and other files

These have been explained already :-) Modify the name in the license, modify the license too, if necessary, and that's pretty much it. Edit where necessary to go with your own project.

src/

Here is where your source files should go. If the build should have different targets, for example node or amd, I think it's good to prefix or suffix the files with these names. The main file used as an example is what the project will export, just an empty object.

template/

This might look weird but it's actually what will be used, as files, before and after each build. These files could be empty too, it does not really matter, but it's handy to have them to easily generate AMD or node.js exports, or generic closures, wrapped before and after the generic code reused across targets.
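As a sketch of the idea, and these file names are hypothetical rather than necessarily gitstrap's actual ones, an AMD build could be produced by a template pair like this:
// template/amd.before: opens the wrapper
define(function () {
  "use strict";

// ... the shared src/ files get concatenated here ...

// template/amd.after: closes the wrapper exporting the result
  return main;
});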

test/

Here's the big deal: the .test.js file will be in charge of running wru against whatever test is present in the folder, as long as there is a counterpart in src too. This should make it easier to test in isolation. Bear in mind node.js needs to require(), but the browser can load things in pieces, so both the built version and other files will be included and tested, if tests are in place.
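As a rough sketch of what such a test could look like, hedging on the exact wru API of the time and assuming a main object exported by the source:
wru.test([{
  name: "main is exported",
  test: function () {
    wru.assert("it is an object", typeof main === "object");
  }
}]);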

As Summary

The aim of this repo is to make at least my life easier, and you can see already in all my GitHub repos that the structure is like this: every project is a partial clone of the other, with tiny improvements over the Makefile, and some automation maybe not necessary anymore, as it was for this good old builder, once in python, then in node.js, and finally obsolete in the latter repos :)
Have fun with your extra ultra cool best library ever!

Sunday, February 10, 2013

Jokes A Part ...

... it's really that easy to start from scratch or abandon everything, but that's not, by any meaning, an evolution; it's rather a reboot.
Unfortunately, written software cannot reboot that easily, and we all know that, except for those few cases where it's really needed, and we call that refactoring!
Refactoring is needed when everything is not under reasonable control or performance anymore; refactoring puts everything on hold until it's completed ... you know that ;)

Focus On Reality

We really should never lose focus on what we are really trying to do, really trying to improve, and for whom, if needed, beside our own thoughts.
If the rest of the world is doing something in a certain way, we have really few chances to change that way quickly and easily just because we decided that way is wrong, right?
We need to be able to propose the best change able to improve that de-facto reality, rather than thinking that we are able to improve everything by simply imposing our own superior reality ... right? The moment we blindly impose our own meaning of "best way ever", without even analyzing what's good out there, we are doing everything wrong, IMHO!

Graceful Enhance ... Everything!

Really, I think this is generally speaking the best way to go on and, probably, the only way to go too, since everything else has historically failed already so ... why try again?
Understanding developers' needs inside their libraries too, and not only the patterns they used, is, for example, a good starting point.
I am expecting this to be a sort of JS improvements constitution for the most used programming language in the world, according to the biggest open source community, at least on GitHub ...

For A Better JS Future

  1. do not break what has been widely adopted already, unless that's really bad in terms of security
  2. try to stick with the already available and standardized syntax, allowing partial or full polyfills for a graceful migration of OSes, Environments, Browsers, Engines, whatever!
  3. involve as many developers as possible (public surveys over internal surveys) rather than delivering already-made internal decisions based on internal polls nobody out there has ever heard about
Three points, since everything else is reasonable already and done in a good way ... still!

Why Is This Important

I think these points are more or less everything I wished for while following the es-discuss mailing list, really ... from time to time, I have experienced these situations, absolutely unexpected:
  1. ES4 failed because it was breaking the web; we have transpilers now, so everyone should use them instead of JS because of new unsupported syntax (transpilers break the web!)
  2. if that library does that, and everyone likes that, and that library is not the old Prototype.js or another one nobody here heard about, that library is wrong and that behavior should be different
  3. we don't want internal/private polls saying that what the rest of the world thinks is needed is wrong; we can have a much bigger audience through public surveys.
The latter one was the most frustrating experience, personally speaking, trying to follow and contribute in that mailing list with parallel, private things behind the scenes. I could not stand by, since either you are public, the ML being public and telling the world you are, or you are not, and in that case you can have all the pointless, useless, irrelevant polls you can think of, without bothering the rest of the world with your results!

About That, I Apologize, because I know that specific case had, again, the best intentions, but my point is that surveys should be public too, because if 3 developers cannot represent the entire community, neither can 300 behind the same company, or just a couple. There are many more of us out there, and I'd love to see the possibility to participate every time a decision about an API should be made!

Thank you for listening!

JavaScript Modules, Maybe

So, you might know already, but ES guys are talking these days about modules, and things, as usual, went out of control since everyone wants their own best module version, ever!

Current Status

Synchronous, asynchronous, AMD, require() ... apparently each of these is right for some use case, but wrong for some other.
It looks like JS cannot do synchronous modules ... wait, what?
Why can't browsers, since browsers have had a synchronous require since the very beginning? <script> tag anybody?
This was me after reading a few times that JS cannot do sync. Turns out, sync script is in the HTML specs, not the JS one ... but wasn't this about JS indeed, where JS in the browser never had this real problem and is simply envious of the node.js module loader simplicity?

How About Facing Reality

... whereas in every other language, requiring dependencies has always been synchronous, because nobody ever cared about that latency, right? And has anyone even bothered using asynchronous file reading to include modules?
Not even node.js does that, the most async-centric env I personally know!
In summary, since this is a browser only problem, what I would expect is the ability of browser engines to pause the client in a non blocking way until the file has been loaded. You know what I mean? F# does that, so it's not an impossible reality ...
How cool would that be and how "free from browsers limits" the specification of the next module loader in a programming language would be?
The answer seems to be that JavaScript is not an HTML/W3C matter, but it is limited because of HTML/W3C implementors, those browsers ...

It Doesn't Matter, Had Module!

Developers are concerned that TC39 might not have real use cases, and the funny part is that even an evangelist of everything you know about the web, such as @paul_irish, has himself some concerns about TC39 choices in terms of real-world cases.
Meanwhile, AMD does not seem to be the answer, nor the preferred choice, but regardless we have these scenarios:
  • those who write for the web and go AMD
  • those who write for node.js and go require
  • those who will probably add all this crap regardless, and with all due respect for the dev who wrote that with best intentions, even if testing only on web or only node.js (and again, the point is not about the snippet but the fact it should be everywhere in the JS world, you got what I mean, I am sure!)
In a few words, Domenic's suspicions are already a reality: nobody is even caring about, or following, what's going on in this discussion!
Lazy developers will simply realize at some point that nothing works anymore and will go on strike, blaming the corrupted system, the conspiracy against the World Wide Web, the fact nobody told them it was going to disappear or change even after 5 years of warnings about deprecations in console, etc etc ... right?
Wrong, they'll just use what worked for them 'till that day without problems, and they will still think that you should not break what's already in, which is one of my favorite parts about ES5, the best update ever: if only every browser was there already, it would be a better JS world for everyone, wouldn't it?

Decoupling Import From Loading

How insane would that be? A semantic syntax that works on any platform, no matter how the platform loads stuff, the build process behind it, or the fact you might wrap things this way and do what you think is best, even improving your static analysis upfront ... right?
So here's a beta repository called remodule, something you might want to git clone git://github.com/WebReflection/remodule.git to eventually run node node_modules/wru/node/program.js test/remodule.js and see that all tests are passing already.

Wait, What

So, that project is about having an import-like syntax available even in ES3:
// ES.next modules syntax
import {a} from "something"

// remodule
imports("a").from("something");
You are following, right?
// ES.next modules export
module.exports.a = "whatever";

// remodule
modules("something", {
  a: "whatever"
});
So, kinda, yes: the missing part of all this mess is that a module should be able to register itself, if we would like to statically analyze it and make the logic work everywhere, in both sync and async environments, right?
// ES.next modules export
module.exports = {
  what: "ever"
};

// remodule
modules("exports", {
  what: "ever"
});

// remodule backward compatible
modules("exports", module.exports = {
  what: "ever"
});
The latter example is about loading that file with the current require or without it ... unfortunately the modules function should already be there, but that's easy to fix, right?
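A minimal sketch of such a fix, assuming all we need is a pass-through modules() when remodule is not around (this guard is hypothetical, not part of the library):
if (typeof modules === "undefined") {
  modules = function (name, exported) {
    // no registry in this environment: hand back the exported object untouched
    return exported;
  };
}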

What Else

modules("test", {
  a: "this is a",
  b: "this is b",
  c: "this is c"
});
With the above code, we might be able to export the test module, regardless of its position in the filesystem or the package manager, and do funny things such as:
var test = imports('*').from('test');
JSON.stringify(test);
// {"a":"this is a","b":"this is b","c":"this is c"}
Cool? We just imported whatever the module exported itself ... there's really nothing to worry about; as @benvie might point out, the this is safe too: it is the exported object.
var a = imports('a').from('test');
a; // "this is a"
Well, it's straightforward to get that we can import just one thing from a module, right? Behind the scenes, the module must be imported once, and never again, but we can grab a property as needed instead of the whole thing, right? ... and more!
var arr = imports('a', 'c').from('test');
// same as
var arr = imports(['a', 'c']).from('test');

// same as, once destructuring assignment is in ...
var [a, c] = imports('a', 'c').from('test');
We can specify multiple properties out of a single module ... cool, uh? And more!
var aliased = imports({
  a: 'A',
  c: 'b'
}).from('test');

// in a real world scenario ...
var $ = imports({
  jQuery: '$'
}).from('jquery').$;
Yep, aliases are there too, so that you can actually test everything for real in this file.

So ...

What I've done there is not even in charge of loading anything synchronously or asynchronously ... I mean, it should be your build process in charge of making things work and instantly available once needed, right? I mean, the Web/DOM part, I get it, modal spinners all over instead of a frozen tab, so annoying for the user ... but why has nobody simply come out with a library in charge of this? Why will ES6 modules break browsers, using all those reserved words unusable in older browsers, break node.js logic, being incompatible with the exported module, and probably never be adopted in fact by anybody?
I am still dreaming about web improvements where stuff that gets out is what's really needed and most likely already used out there. Improved? Better? Sure! Pointless? No, thank you!

Thursday, February 07, 2013

JavaScript EventTarget

This is about the W3C EventTarget interface, something standard on the DOM side, but still confusing on the JavaScript one, where EventEmitter in node.js, or many other kinds of constructors, are simply simulating what has been there for years, standardized across all browser engines.

Now In JavaScript Too

Correct, I have implemented, written, and re-written this shit so many times that I have decided to "unofficialize" the already well described interface.

So, here's the repository, something you can install in node via npm install event-target, checking the examples on how to use it. Cool? I hope so :-)

The only thing that does not make much sense in a non DOM environment is the capture extra argument, something ignored by 90% of the web, something that can be really useful with the DOM, but I could not think of any concrete utility for it in a pure JS world. Cool thing is: anyone can extend, wrap, or improve this EventTarget library: enjoy!
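A hedged sketch of the kind of usage to expect, borrowing the method names straight from the W3C interface (the repository examples are the authoritative reference, and the constructor export shape is assumed here):
var EventTarget = require("event-target");
var target = new EventTarget();
target.addEventListener("data", function (e) {
  console.log(e.type); // "data"
});
target.dispatchEvent({type: "data"});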

Sunday, February 03, 2013

Opera Mobile Is The Best Browser!

I really do not understand why the Web keeps ignoring this browser, which is able to provide the best browsing experience even out of old hardware!
It's not me saying that; there are all the tests you might want to double check or try by yourself.
  • Opera Mobile VS Chrome Mobile, a browser available only on the latest hardware, and Safari Mobile for iOS 6, and we all know how good the hardware is here. Here's the result, with Opera Mobile scoring more than anything else
  • Hardware Acceleration, something possible on canvas you might want to test in this old prototype of a map, the same I have presented at Front Trends in 2010
  • multi touches, so that interaction with more modern, gesture based UIs could work without problems. Here's the tesla experiment

In Android 2.X Too

This is the part I love the most about this browser ... I mean, if you assume you have best hardware ever under the hood it's easy to be cool, right?
I am looking at you, Chrome and Safari Mobile, and I am leaving Firefox Mobile out of this challenge since, unfortunately, it never competed against stock browsers in terms of performance. They are getting better, and have to, with Firefox OS, but on Android 2.3 ... not sure where they are :(
I have a Galaxy Ace, an Android 2.3 phone, that scores 406 plus 12 bonus points with Opera Mobile.
Basically, if all web apps out there supported Opera Mobile, people would not need to spend more to update their hardware, because there is a browser that is kicking every other browser's ass in terms of performance!

Symbian Too

Correct, good old NOKIA phones could be up to date without problems simply by using Opera Mobile. No need to spend that much to get a Windows Phone there: if the problem is the browser, you can have touches and multi touches, plus an extreme performance boost, simply by downloading and using Opera Mobile by default: as easy as that!

Definition Of Best Browser

A browser that is able to bring to the user every possible modern feature, without requiring HardWare or Operating Systems updates. This would be, in my opinion, the best browser in the world, the missing piece we all have and somehow keep ignoring, in this web scenario.

Why Is That

I'm starting to think the Opera Mobile team has really bad marketing support. I cannot believe my stock browser scores 200 against 406 in Opera Mobile, and there's no usage percentage in global stats about this browser, only about the Opera Mini version, a completely different beast?
What the hell is going on? Why aren't we all developing for this browser too? It's also the easiest to test, since it's available on many platforms ... so, in summary: any HTML5 product that does not support Opera Mobile is kinda lame, and if it does not support Opera Mobile at its best, using all the features that are available, usually twice as many as the default stock browser's and in a really performant way, we should rethink our priorities. Once again, this browser is available on multiple platforms, so it should be the preferred target, rather than the least considered one. This usually means profit too, so ... I am just saying, and thank you for listening!

Friday, February 01, 2013

The Difficult Road To Vine Via Web

One of the coolest and most rumored apps of these days looked so fun, and conceptually simple, that I could not resist challenging myself to try to reproduce it via HTML5 and all the possible experimental things I know are working these days for both desktop and mobile.

Wine

This is the name I have chosen for this experiment, and this is the very first warning: it does not work as it should, it's not the equivalent, it cannot substitute the native App. Too bad, but probably the reason the vine team didn't even try to propose such a broken experience.

Well, Something Is Working!

Guess who's this idiot with a Koala hat in a living room:



That's correct, "it's a me", through the wine experimental project and a Chrome browser. But let's talk a little bit about the technologies I have used, OK? If you want to know how to install the environment and play with the project, once again, there's the repository explaining how to :-)

How Does It Work

Well, you launch the server, you connect to that page, you press the video for up to 6 seconds. If you release your finger or your pointer, it stops recording. When the top bar is filled up, each frame will be rendered as an image and sent to the server, together with the audio, where some magic happens and the result is a video in mp4, ogv, and webm formats, plus a nice fallback as an animated gif, so that everybody can see those 6 seconds again. Nice? Now, time to talk about all the problems I had during its development ...
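Generically speaking, the frame-grabbing part relies on drawing the live video into a canvas and serializing it; a minimal sketch of the idea, not wine's actual code:
// grab one frame from a playing <video> as a PNG data URI
function grabFrame(video) {
  var canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d").drawImage(video, 0, 0);
  // the resulting string can then be POSTed to the server
  return canvas.toDataURL("image/png");
}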

Everything You Know About getUserMedia() Is A Lie

This has been the biggest headache during the creation of this prototype.
All the articles I have read, including this excellent one on HTML5 Rocks, which marks the article as valid for Opera and Firefox too, do not work ... really, as simple as that: that stuff does not work!

The Right Way To Attach A Stream

In my proudly created spaghetti code I ended up with a Frankenstein monster such as:
function attachStream(media, stream) {
  try {
    // Canary likes it like this
    media.src = window.URL.createObjectURL(stream);
  } catch(_) {
    // FF and Opera prefer this
    // I actually prefer this too
    media.src = stream;
  }
  try {
    // FF prefers this
    // I think it should not be needed if the video is autoplay
    // ... never mind
    media.play();
  } catch(_) {}
}
So, the most advanced browser is apparently behind schedule, because Firefox Nightly and Opera Next just refuse to work through the URL.createObjectURL() approach.
However, I find the Firefox behavior nonsense because of the required play(), completely against the logic behind the autoplay video attribute.

Bad News Is ...

my code ain't gonna work for long either; things are changing, so keep reading and smile ^_^

AudioContext and AudioStream Is Nowhere!

Correct, another myth of these HTML5 days is the audio stream. Nightly is able to expose it inside the stream, but I could not manage to retrieve it and handle buffers in and out. Nightly also has another really annoying problem, the feedback loop of the microphone recording the audio output itself ... a noise you'll spot if you don't manually mute your computer speakers.
I had to video.muted = true in order to avoid such a disturbing noise, something present in Canary too, although if the volume is not at 100% it is much harder to reach that point. Canary seems to be more clever here! Opera does not seem to work with this audio stuff either.
The best one seems to be Apple Safari browser: nothing works but they have the best documentation!

Surely It's Me

I might have done something wrong, but if browser vendors keep implementing and changing standards behind the scenes, how can this be the developers' fault?
Here's an example: I asked how come getUserMedia() is so much NOT available?
The answer was this one!
Because Microsoft recently scuttled ongoing standardization efforts with a surprisingly valid counterproposal I'm afraid.

DAFUQ?!

I love all the efforts from Microsoft, Firefox OS, Chrome OS, if any, and all the others trying to propose standards ... but I don't understand any vendor trying to kill a reasonable one until the best proposal ever gets rolled out ... I mean, can I haz that meow?

Until Things Are Usable

I feel like somebody is having fun screwing standards from time to time, and every abandoned proposal that looks good to developers has the same destiny:
  • stick forever in some library because in those days, that was the behavior
  • make the new proposal less powerful regardless, since it came out of a hybrid, not perfect one, that everyone probably already adopted, as it is for localStorage and WebSQL, things that just work while developers need to do more, things still there, just randomly there
  • fragmentation on the Desktop, fragmentation on mobile more than ever, where updates basically do not exist. On mobile, we keep changing Hardware generations, not software!
    iOS here is a partially lovely exception, able to update longer, but my iPad 1 is stuck behind iOS 5.X, you know what I mean ... right?
I really feel Christian Heilmann when he says that what matters is reachability and everything else is a futile discussion.
I probably have the same feelings, better summarized, as a web developer, in these personal thoughts:
It does not matter if it's the touchstart or pointerEventDown event, guys; what's important is that it f*#$!(in works when a person puts a finger on their device screen, a device this person bought thinking it is touch-able ... YES, THE BROWSER TOO!
Also, people don't, and shouldn't, ever care about software: that's our problem, and it should never be people's limitation with the hardware and the software they like, they use, they need, they want ... but we keep smiling, right? ^_^

The Web Has Never Been This Broken

And this is the beautiful lie behind HTML5: it's a utopia that never worked in reality!
Articles and examples that work only in this or that browser: the thing we have complained about for years with IE, thinking "dude, if you don't know how to create a site, don't write it works only for IE, you lamer"!
Problem is, we are not going anywhere even on mobile where, guess what, platform fragmentation is growing much more than the desktop one. On Desktop we have 3 OS Families: Windows, OSX, and generic Linux distros (I feel you Gentoo, Fedora, Redhat, Ubuntu, Debian, Kubuntu guys ... sorry to group you there).
On mobile, we have newcomers all over: iOS, Android 2.2/2.3/3.0/3.1/4.0/4.0.1/4.1/4.2, and here you have the coolest device ever, and Firefox OS, plus that sneaky Opera Mobile, I mean Mobile, not Mini, the best, fastest, most updated browser ever for both Symbian and Android 2.X!

Wasn't This About Wine?

Right ... you are right, I'll stop here, wining about the fact that indeed, Opera Mobile is the only mobile browser able to work, even on Android 2.3; regardless of the low performance, it just looks and feels OK there, so if you want to try this, you can try with Opera Mobile and enjoy the project.
Chrome Mobile doesn't getUserMedia(), neither does FirefoxOS, nor anything else I could try (come on, you are not trying with a Windows Phone, right? They killed the current standard proposing something else ... cooler, but more to wait for!).
So, the end of this story is that I have created a project whose aim is to simulate a native App, and I miserably failed. Not because performance was not good: once again it works via Opera Mobile on my Galaxy Ace, an Android 2.3 smartphone, really simple, really functional, really usable thanks to a decent battery life due to low hardware specs. So ... again, it's not a performance issue, and you can test it; it's more about mistakes, rush, and wrongly accepted proposals from those that are deciding standards ... for good, sure, but if WebSQL was universally available, cross browser/platform speaking, how much more could we, web developers, have done?
Think about it, that could have been the best thing ever to build a No-SQL concept on top of, but we never got past something like this about IndexedDB:
Because this technology's specification has not stabilized, check the compatibility table for the proper prefixes to use in various browsers. Also note that the syntax and behavior of an experimental technology is subject to change in future version of browsers as the spec changes.
And this after at least 2 years ... now ask yourself honestly, if the current getUserMedia() was already available cross browser, how many creative things could have been created already?

At Least These!

  • web based alarm systems: video can be captured into canvas, canvas can scan images, canvas can detect suspicious movements comparing diffs between the previous image and the current one in a place that is supposed to be quiet (see the sketch after this list)
  • no need to call the specialist that will install the expensive hardware, the cable, the camera, and everything else: if we can program quadcopters via node.js, JavaScript is good and fast enough to monitor the house, the garage, the entrance, and tell you, wherever you are in the world, what's going on; plus, if you need it, it can send you pictures while quadcopting around :D
  • create a Skype like application without needing Skype at all ... OK, Skype offers an amazing service and we cannot even think to compete on web, but still ...
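About that first point, a naive motion detector is just a per-pixel diff between two frames; a minimal sketch, assuming both frames are same-sized ImageData objects grabbed via getImageData():
// returns true when two frames differ beyond a threshold
function motionDetected(previous, current, threshold) {
  var changed = 0, i;
  for (i = 0; i < current.data.length; i += 4) {
    // compare red channels only, as a cheap luminance proxy
    if (Math.abs(current.data[i] - previous.data[i]) > 32) changed++;
  }
  return changed > threshold;
}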

End Of The Rant

I am pretty sure John Resig, who's been using Vine since the very beginning, would actually agree with this Web situation ... or maybe not, since his story-rock API main goal was to uniform all this mess ... but should we keep relying on third party APIs rather than on the awesome, ultra skilled, exceptional people in charge of the future of the Web?

Anyway, Wine Works With...

So once you have installed everything, all you have to do is start polpetta in that folder and connect to that address through these browsers: Chrome Canary, Firefox Nightly, Opera Next, Opera Mobile, or any other browser you think should support this app, and it will not ... keep smiling!!!! ^_^

Thanks for your understanding; I have been developing for the mobile web since 2009, since Android 1.5, and the thing is: it never got truly better, it just kept changing and fragmenting!
That's probably the reason I love my job, and the constant challenge it offers on a daily basis, but I'd like to do more there ...

Wednesday, January 30, 2013

Resurrecting The With Statement

You might think this must be a joke; well no, this is about a partial lie behind the "use strict"; directive.

Roots Of The Hack

// global
"use strict";

function strict() {
  "use strict"; // top of the function

  return {
    // invoked inline
    withStrict: function(){
      return this; // undefined
    }(),

    // invoked inline too
    withoutStrict: Function("return this")()
  };

}

// the test
var really = strict();

really.withStrict;    // undefined
really.withoutStrict; // global, BOOOOM!

The Good News

I have been complaining forever about the fact that use strict makes it impossible to retrieve the real global object, while ensuring nobody in the closure redefined window or global by accident, so that code is more reliable.
Well, now we have the possibility to retrieve it again when it's needed, for security reasons or to be sure it is the right one.
// a classic code for Rhino, node, and Web
var G = typeof window !== "undefined" ? window : global;
// then we need to use G

// with this hack
var global = Function("return this")();
// that's it, is the window or the global object

The Funny News: With Statement Is Back

So, we are able to deactivate the "use strict" directive in the global scope, right?
How about bringing back something that would throw an error otherwise in a strict context as with(){} is?
"use strict";
Function("with({test:123}){ alert(test) }")();
// 123
It Works!!! Awesome, we can use a with statement, as long as it is executed through Function which, differently from eval, evaluates in the global scope.

With Great Power Comes Great Shenanigans

The number one reason for abandoning the with(){} statement is its ambiguity, together with the ability to pollute the global scope by mistake.
However, there were a few things impossible to represent without that statement, and a few of them have been proposed as the monocle mustache behavior.
array.{
    pop()
    pop()
    pop()
};

path.{
    moveTo(10, 10)
    stroke("red")
    fill("blue")
    ellipse(50, 50)
};

this.{
    foo = 17
    bar = "hello"
    baz = true
};

A Mustache Like With statement

The latter snippet is not able to pollute the global context, neither does it change the context, plus it can interact with the outer scope. OK, it is not possible to implement the latter one automagically, but we can still avoid context and global context pollution, ensuring a proper this value, and throwing errors if some variable does not belong to the mustached object.
How? By reactivating the "use strict"; directive inside the non strict code: how crazy is that?
function With(o) {
  // needs a block, a function
  // can simulate that properly
  return function (f) {
    // deactivate during evaluation the strict directive
    return Function(
      // it is possible to use the with statement now
      "with(this){return(" + ("" + f).replace(
        // but we want to reactivate strict env inside
        "{", "{'use strict';"
      // avoid global context pollution
      // forcing a different this
      ) + ").call(this)}"
    ).call(o);
  };
}
So, let's see it compared with the previous examples, right?
With(array)(function(){
    pop()
    pop()
    pop()
});

With(path)(function(){
    moveTo(10, 10)
    stroke("red")
    fill("blue")
    ellipse(50, 50)
});

With(this)(function(){
    foo = 17
    bar = "hello"
    baz = true
});
Does it work nested too ? Yes!
With({test:{key:"value"}})(function(){
  alert(test); // [object Object]
  With(test)(function(){
    alert(key); // "value"
  });

  // change the property
  test = 456;

  // by accident pollute the global scope
  not_defined = "oops?"; // throws an error ^_^

});
After that, removing the error at the end, the original object would have the number 456 as its test property.
In a few words, we can have a secured with(){} statement behavior, without the possibility to hurt the generic surrounding scope anyhow, except for those dead browsers without the strict directive, of course :D

Performance, Use Cases, etc

Yes, I believe the performance problem we know about that statement is still there, but with fewer problems to take care of, due to the strict behavior and a global environment. I would actually say that performance could be optimized with this technique, because no scope and context are implicit or modifiable anyhow, but I am not the right person to tell you what the hell happens inside a JS engine in that case :D
Use cases might be test related, DOM related, since there things are slow in any case, or quick API prototyping, due to the implicit return this nature of the hack: you decide :-)

Last Improvement

If you would like to adopt the technique but want to be able to bring other local variables into that mustached block, you can use this version of the same function:
// The Strictly Monocle With Statement
function With(o,a) {
  return function(f) {
    return Function(
      "with(this){return(" + ("" + f).replace(
        "{", "{'use strict';"
      ) + ").apply(this,arguments)}"
    ).apply(o,a);
  };
}
With the latest piece of code we can bring into that function whatever we need, in this way:
With(
  document.body, // the implicit context
  [ // arguments to pass
    jQuery,  // jQuery
    window._ // lo-dash
  ]
)(function($, _){
  // let the magic happen
});

// or simply
With({},[1, 2])(function(a, b){
  alert([a, b]); // 1,2
});
:) Thanks for reading!

Tuesday, January 29, 2013

experimental.js

This is a tiny post about a tiny utility, something Modernizr like, but actually much simplified, at about 330 bytes minzipped.
Of course the power is kinda limited, but for most common things, such as requestAnimationFrame or CSS transition, it should be more than enough.

Basic Example

You can find more in the experimental.js repository, but here are a couple of examples:
// check if present and use it
if (experimental(window, "requestAnimationFrame",
  true // optional third flag to assign the found property
)) {
  // in this case attached directly to the global
  // so we can just use it all over
  requestAnimationFrame(callback);
} else {
  setTimeout(callback, 10);
}
Another example, discovering the right string for transition:
var TRANSITION = experimental(body.style, "transition");
alert(TRANSITION);
// mozTransition
// webkitTransition
// oTransition
// msTransition
// or just transition
If you are wondering about pure CSS, it's easy to add a tiny extra step such as:
function toCSSProperty(JS) {
  return JS
    .replace(
      /([a-z])([A-Z])/g,
      function (m, $1, $2) {
        return $1 + "-" + $2;
      }
    )
    .replace(
      /^(?:ms|moz|o|webkit|khtml)/,
      function (m) {
        return "-" + m;
      }
    )
    .toLowerCase()
  ;
}
var CSS_TRANSITION = toCSSProperty(
  TRANSITION
);
As we can see, it's easy to grab new features following the almost de-facto rule of checking properties, with prefixes too, in objects.
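The core of that rule is tiny enough to sketch; here's a minimal, hypothetical version of prefix-aware property detection, not the actual experimental.js source:
function findExperimental(object, name) {
  var prefixes = ["", "webkit", "moz", "o", "ms", "khtml"],
      Name = name.charAt(0).toUpperCase() + name.slice(1),
      i, property;
  for (i = 0; i < prefixes.length; i++) {
    property = prefixes[i] ? prefixes[i] + Name : name;
    // return the first name, prefixed or not, found in the object
    if (property in object) return property;
  }
}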
Bear in mind this is not as powerful as Modernizr, nor does it offer any yep/nope mechanism to download libraries based on feature tests, got it? :)

Online Test

I have created an idiotic page whose aim is to test experimental.js against the most common objects; here's an example.

Sunday, January 20, 2013

redefine.js - A Simplified ES5 Approach

If you think Object.defineProperty() is not widely adopted, here's the list of browsers that support it, together with Object.create() and Object.defineProperties().
  • Desktop
    • Chrome 7+
    • Firefox 4+
    • Safari 5+
    • Opera 12+
    • IE 9+
  • Mobile
    • Android 2.2+, 3+, and 4+, stock browser
    • Android 4.1+ Chrome
    • Android and Symbian Opera Mobile
    • Android Dolphin Browser
    • Android Firefox
    • iOS 4+, 5+, and 6+, Safari Mobile
    • iOS Chrome Browser
    • Windows Phone 7+ IE9 Mobile
    • Windows 8+ RT and Pro
    • webOS stock Webkit browser
    • Chrome OS
    • Firefox OS
    • I believe Blackberry supports them without problems with their advanced browser
    • I believe updated Symbian too but I could not test this
  • Server
    • node.js
    • Rhino/Ringo
    • JSC
    • BESEN and others I could not test too
Except where I have stated differently, I have manually tested everything in this list, but you can try by yourself in kangax's es5 compat table.
Please note that even if Object.freeze() and others might not be supported, create, defineProperty, and defineProperties are, as is the case, for example, in the Android 2.2 stock browser.
In summary, unless you are so unfortunate that you have to support that 8% (and dropping) of IE8 market share, there is really no excuse to keep ignoring JavaScript ES5 Descriptors.
Update: the build process now updates tests automatically in the repository web page.

Descriptors Are Powerful

There are many things we can improve using ES5 descriptors, including new patterns we never even thought were possible, since most of them might not be implementable in many other programming languages.
This is, for example, the case of the inherited getter replaced on demand with a direct property access, a pattern discussed in The Power Of Getters post, a pattern described by jonz as:
Right now this syntax seems like obfuscation but the patterns it supports are what I've always wanted, I wonder if it will ever become familiar.
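For context, here's a minimal sketch of that pattern, with hypothetical names: an inherited getter that, on first access, redefines itself as a plain own property.
function MyClass() {}
function compute() { return 123; } // stands in for some costly operation
Object.defineProperty(MyClass.prototype, "expensive", {
  configurable: true,
  get: function () {
    var value = compute();
    // swap the inherited getter for a direct own property
    Object.defineProperty(this, "expensive", {value: value});
    return value;
  }
});
var instance = new MyClass();
instance.expensive; // 123, computed via the getter
instance.expensive; // 123, now a direct property access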

Descriptors Are Weak Too

Not only might the syntax look completely unfamiliar compared with everything that has been written in JavaScript until now, but the current specifications suffer from inheritance problems too. Consider this piece of malicious code:
Object.prototype.enumerable = true;
Object.prototype.configurable = true;
And guess what: every single property defined without specifying those descriptor properties, both false by default, will be enumerable and deletable, so for/in loops and trustability will both be compromised.
Even worse, if Object.prototype.writable = true comes into the game, every attempt to define a getter or a setter will miserably fail.
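To see the problem in action, here's a quick demonstration of the pollution, assuming an otherwise clean environment:
Object.prototype.enumerable = true;
Object.prototype.configurable = true;
var o = Object.defineProperty({}, "hidden", {value: 123});
o.propertyIsEnumerable("hidden"); // true: the descriptor inherited enumerable
delete o.hidden;                  // true: it inherited configurable too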
I don't think we are planning to write this kind of code to ensure desired defaults, right?
Object.defineProperties(
  SomeClass.prototype, {
  prop1: {
    enumerable: false,
    writable: false,
    configurable: false,
    value: "prop1"
  },
  method1: {
    enumerable: false,
    writable: false,
    configurable: false,
    value: function () {
      return this.prop1;
    }
  }
});
Not only are we trapped in any case the moment we would like to assign a getter, since we cannot have both writable and get, even if inherited, but the above code has always been written in JS as:
SomeClass.prototype = {
  prop1: "prop1",
  method1: function () {
    return this.prop1;
  }
};
OK, this way will force us to use Object#hasOwnProperty() in every loop, and it does not guarantee that those properties won't change in the prototype, but how about having the best of both worlds?

redefine.js To The Rescue

The goal of this tiny library, whose size once minzipped is about 650 bytes, is to use the power of ES5 descriptors in an easier, memory safe, and more robust approach.
redefine(
  SomeClass.prototype, {
  prop1: "prop1",
  method1: function () {
    return this.prop1;
  }
});
That's pretty much it: an ES3-like syntax with ES5 descriptors, and the ability to group definitions by descriptor properties.
redefine(
  SomeClass.prototype, {
  prop1: "prop1",
  method1: function () {
    return this.prop1;
  }
}, {
  // we want that prop1 and method1
  // can be changed runtime in the prototype
  writable: true,
  // we also want them to be configurable
  configurable: true
  // if not specified, enumerable is false by default
});
As easy as that: that group of properties will all have that descriptor behavior, everything is safe from the Object.prototype, and there are 50 and counting tests for the whole library that should cover all possibilities with the provided API.

More Power When Needed

What if we need to define a getter inline? How do we avoid ambiguity problems, since values are accepted directly? Like this :)
redefine(
  SomeClass.prototype, {
  get1: redefine.as({
    get: function () {
      return 123;
    }
  }),
  prop1: "prop1",
  method1: function () {
    return this.prop1;
  }
}, {
  writable: true,
  configurable: true
});
That's correct, redefine can understand the developer's intention thanks to a couple of hidden classes able to trivially remove the ambiguity between a generic value and an intended descriptor, only when needed, as an easy way to switch on ES5 power inline.
When we need a descriptor? We set a descriptor!
If the descriptor has properties incompatible with the provided defaults, the latter are ignored and discarded, so we won't have any problems, as happens, for example, when defining that getter with writable:true as the specified default.

Much More In It!

There is a quite exhaustive README.md page, plus a face 2 face in the HOWTO.md one.
Two other handy utilities, redefine.from(proto), a shortcut for Object.create() using descriptors and defaults as extra arguments, and redefine.later(callback), another shortcut able to easily bring the lazy getter replaced as property pattern into every developer's hands, are both described and fully tested, so I do hope you'll appreciate this tiny lib effort and start using it for more robust, easier to read and maintain, ES5 ready and advanced, client and server side projects.
Last, but not least, redefine.js is compatible with libraries such as Underscore.js or Lo-Dash, being a utility, rather than a whole framework.

Friday, January 18, 2013

Have You Met Empty ?

The proper title for this post could have easily been something like JavaScript Inheritance Demystified, or the slightly less boring The Untold Story About JavaScript Objects ... well, thing is, I don't really want to alarm anyone about this story so, as a randomly improvised storyteller, let's move forward and see where it goes, starting from where it all began ...

Elementary, My Dear Watson!

The very first thing we might want to do, in order to make our research, discovery, and piece of programming history useful, is to create a couple of shortcuts that will let us easily analyze the code and the current story:
// retrieve the first inherited prototype
var $proto = Object.getPrototypeOf;

// retrieve all own properties,
// enumerable or not, ES5 stuff !
var $props = Object.getOwnPropertyNames;
Why do we need that? Because almost everything is an instanceof Object in the JavaScript world, which simply means, and we'll see this later on, that Object.prototype is inherited all over!

Well, Almost!

Since Object.prototype is an object, there's no way it can be called as a function, right?
No doubt Object inherits its own prototype at some point, but who added apply, bind, call, constructor, length, name, and that funny toString behavior in the middle?

Not Function !

That's correct, Function has nothing to do with that. Function actually inherited those properties and redefined some behavior, such as name and length.
Is the Object constructor an instanceof the Function one? Yes! Is the Function object an instanceof Object? Yes again, and trust me: there's no real WTF!
Object instanceof Function; // true
Function instanceof Object; // true

The First Object Ever: null

This is the prototype of the prototypes: if we check $proto(Object.prototype) the result will be null indeed.
In summary, the very first object to inherit from, possible in fact as an Object.create(null) instance too, is this reserved static keyword everybody has ever complained about, because it returns "object" instead of "null" under typeof null inspection.
Can we start seeing that, being the mother and the father of everything we use in the JS world, "object" is kinda the best typeof we could possibly expect?

There Are Two Suns!

This is the lie we all believed until now, that Object.prototype is where everything inherits from ... well, brace yourself, a hidden Empty object is part of this game!
Chicken or egg, which came first? You cannot use much philosophy in programming: you have to solve stuff and give priority, or order, to the chaos represented by requirements and/or implementations of specs, right? :D
So here's the chain you have always wondered about:
null <
  Object.prototype <
    Empty <
      Function
      Object
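Using the $proto shortcut defined earlier, we can verify that chain directly:
var Empty = $proto(Function);        // also reachable as Function.prototype
Empty === Function.prototype;        // true
$proto(Object) === Empty;            // true, Object inherits from Empty
$proto(Empty) === Object.prototype;  // true
$proto(Object.prototype);            // null, the end of the chain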
Wait a second ... how can Object.prototype exist before Object is even created?
So here's the secret about JavaScript, the elephant in the room Classic OOP guys keep ignoring: Object.prototype is just an object inheriting from null!
The fact we call it Object.prototype is because that's the only way to reach it via JavaScript, but here's how you could look at the story:
null: the omnipotent entity, the origin of everything,
      the daily Big Bang that expands on each program!

  prototype: like planet Earth,
             the origin of all our user definable problems

    Empty: the supreme emptiness of everything,
           meditation over "dafuq just happened"!

      Function: somebody has to do actually something concrete:
                the slave!

      Object: somebody has to take control of everything:
              the privileged king!
Are we done here? ... uh, wait, I knew that!
"The time has come," the Object said,
"To talk of many things:
Of keys and __proto__ and magic things
Of prototype and kings
And why that Empty is boiling hot
We show that pigs have wings."

The Object.prototype we set!
As prototype itself!
Object.prototype = prototype;


The instanceof Operator

As a shortcut, instanceof is freaking fast compared with any user definable interaction with the code.
In a few words, if you want to know if the ObjA.prototype object is in the prototype chain of objB, you'd better use objB instanceof ObjA rather than ObjA.prototype.isPrototypeOf(objB), I mean ... really, check the performance here!

So, once we get the pattern followed by instanceof, it's easy to do the math here:
Object instanceof Object
// means
Object.prototype.isPrototypeOf(Object)
Remember the hierarchy? Object inherits from Empty, and Empty inherits from prototype so, of course, Object inherits from prototype: it's transitive!
Object instanceof Function
// means
Function.prototype.isPrototypeOf(Object)
Again, Object inherits from Empty, so obviously Function.prototype, which is Empty indeed, is one of the prototypes of Object ^_^

Who Defined The prototype Behavior?

This is the tricky one ... so, prototype is not even invocable, and as such it cannot be used as a constructor. Actually, neither can Empty be used as a constructor ... so new Empty will throw, watch out!
In fact, Empty introduces the [[Call]] paradigm, inherited by all primitive constructors such as Array, Object, Function, Boolean, but it is not inheritable by user defined objects; or better, you need to create a function(){} object to have that special behavior inherited from Empty, and the prototype inherited from Function.
Empty.isPrototypeOf(function(){}); // true

Which prototype Behavior ?

Well, this is the main point about instanceof and isPrototypeOf(): the prototype behavior is defined in the Function object, the only object adding that prototype property and defining its behavior in the program!
$props(Empty)
["apply", "bind", "call", "constructor", "length", "name", "toString"]

$props(Function)
["length", "name", "prototype"]
// could be more in other engines

The Last Troll Ever!

It's clear that Empty introduced the constructor property too, and guess what happened there?
"The time has come," the Empty said,
"To talk of many things:
Of constructor and new toString
Of call, bind, apply and kings
And why that caller is boiling hot
We show that pigs have wings."

The Empty.constructor we set!
The Function we just let
So that Function.prototype(aka:Empty).constructor === Function is true :)
Last, but not least, if you think that, as an "Empty function", Empty is fast, just check this bench and realize that it could be the slowest function ever invoked!
So, here is the story of the root of all the amazing things we can enjoy on a daily basis in this Internet era; could you imagine?

Thursday, January 17, 2013

JS __proto__ Shenanigans

I can't believe it: every time some cool pattern comes into the game, __proto__ makes everything pointless, at least in the trusted meaning of a generic property!

Previously, in Internet Explorer

One of the biggest and most known WTFs in IE is the fact that Object.prototype properties are not enumerable.
While it is possible to define such properties in ES5, this is the classic good old IE only scenario we all remember:
for(var key in {toString:"whatever"}) {
  alert(key); // never in IE
}
As simple as that, all native properties in the main prototype were considered {toString:true}.propertyIsEnumerable("toString") === false because, indeed, these were not enumerated (just in case: enumerable properties are those that should show up in a for/in loop)

The Same Mistake With __proto__ If Not Worse

That's correct, the moment you play with this property name, things start falling apart, the same old IE way, or even worse.
alert(
  {__proto__:"whatever"}.
    propertyIsEnumerable(
      "__proto__"
    ) // this is false !!!
);
Not only enumerability, but also hasOwnProperty(key) is gone!
var o = {};
o.__proto__ = 123;
o.hasOwnProperty("__proto__");
// false !!!

Do Not Use Such Mistake

Even if it's part of new specs, __proto__ is showing off all the anti-pattern problems we all laughed about when it was IE only.
As a property, we should not have any special case capable of these kinds of problems; we should rather all keep using the Object.* static functions, which work internally, rather than at the property name level.
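As a minimal sketch of that suggestion, creation and introspection can go through the static functions without ever touching the magic property name:
// instead of child = {__proto__: parent} ...
var parent = {hello: "world"};
var child = Object.create(parent);
Object.getPrototypeOf(child) === parent; // true
child.hasOwnProperty("hello");           // false, inherited as expected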
So now you know ;)

Monday, January 14, 2013

5 Reasons To Avoid Closure Compiler In Advanced Mode

I believe we all agree that Google Closure Compiler is one of the most advanced tools we have for JavaScript development:
  • fast parsing with pretty printed or minified output with warnings about possible problems or errors
  • type checks analysis before it was cool, for example via java -jar compiler.jar --jscomp_warning=checkTypes --compilation_level=SIMPLE_OPTIMIZATIONS ~/code/file.js
  • creation of source map for an easier debugging experience with the real, minified, code
  • best raw compression ratio, dead code elimination, and more with @compilation_level ADVANCED_OPTIMIZATIONS
I also believe the latter point is a trap for developers and projects: unable to scale, and capable of causing many headaches and shenanigans.
If you want to know why, here's a list of the top 5 facts I always talk about when some developer comes to me telling me that the solution to all problems is advanced optimizations.

1. Incompatible With ES5

Every current version of a browser is compatible with ES5 features, but the tool that is supposed to help us is able to change the program, creating security problems or misbehaving logic in our code.

No "use strict";

This is the very first problem. It's not so much about the with() statement and eval(), rather about the possibility to pollute the global scope by mistake, so that any namespace could be destroyed by accident.
function setName(name) {
  "use strict";
  this.name = name;
}

var obj = {setName: setName};
obj.setName("name");
The above code will produce the warning dangerous use of the global this object at line 7 character 0: this.name = name; which is correct only because
  1. it does not understand how the function is used
  2. the "use strict"; directive has been removed
If you think that's OK, you have no idea how many libraries out there might use the window.name property, the one that never changes in a tab's lifecycle; you should consider that name could have been any runtime or predefined property used by your library or any dependency your library has. ... is this not enough? Let's see more then :)

Update on "use strict";

As @slicknet pointed out, it is possible to support the "use strict" directive via the --language=ECMASCRIPT5_STRICT command line argument.
This flag comes, in any case, with extra related bugs.
However, I didn't know that flag and this is cool, but "use strict"; in the JavaScript world is already a directive, so that flag should not be necessary at all. We are in an era where ES3-only browsers are a minority, and tools should be updated accordingly, IMHO.
This is also one of the reasons modules are not friendly with Closure Compiler Advanced; portability and security are still under attack. Please read on!

Object.defineProperty(shenanigans)

Since string literals are the only thing the compiler cannot change, guess what the output is here:
var program = Object.defineProperty(
  {},
  "theUntouchablePropertyName",
  {
    writable: true,
    configurable: true,
    enumerable: true,
    value: 123
  }
);
alert(program.theUntouchablePropertyName);
You can test almost all these snippets via the Closure Compiler Service to double check, if you want. Anyway, here is what that becomes:
var a=Object.defineProperty({},
"theUntouchablePropertyName",
// **STATIC WHITE LIST**
{writable:!0,configurable:!0,enumerable:!0,value:123});
alert(a.a); // <== You see this ?
If you are thinking "sure, you have to access that property as program["theUntouchablePropertyName"]" bear with me 'cause I am getting there ...
The above snippet has several problems. The very first one is that, with massive usage of inline ES5 descriptors, there's no final size gain, because of the static list of property names that Closure Compiler won't change.
DOM classes and properties cannot be touched, together with event properties and basically everything defined by the W3C, plus all globals and non-standard features such as window.opera, which is preserved, and who knows what else. The elephant in the room, though, is that prefixed properties are not supported at all!
function doAmazingStuff(obj) {
  if (obj.type === "custom:event") {
    obj.target.onwebkitTransitionEnd();
  }
}

doAmazingStuff({});
Above snippet will result in this one:
var a={};"custom:event"===a.type&&a.target.a();

Why This Is A Problem

You might say that, as with everything else, the property should be accessed dynamically. But how come properties such as writable and enumerable are preserved, while anything coming from ES6 or a more recent proposal might not be supported, and so compromised? What if some browser has a special case that Closure Compiler is not aware of, in a reality where every browser might have some special case in its core?
Are we really willing to write inconsistent-looking code such as
var descriptor = {
  configurable: true,
  "inheritable": false
};
to avoid problems? Uh wait ... I know where you come from ...

2. Requires A Different Syntax

When I tweeted about another-js, somebody complained that developers will get really bored writing obj.get("key") and obj.set("key", value) all the time ... and yes, I might agree it is much more typing for large applications. While that project is about the ability to observe everything that's going on, it is not required at all to follow that convention: when we know an object should not be observable, we can access properties directly to get or set values, as we have always done.
In the Closure Compiler Advanced world we are forced to follow a convention about how we access properties: dynamically rather than statically.
// JavaScript
obj.newProp = obj.oldProp + 123;

// Closure Compiler Advanced (CCA)
obj["newProp"] = obj["oldProp"] + 123;
Holy crap, I have already spent at least twice the time writing the second line ... and that's because I am planning to keep this old function working:
function doMoarWithDescriptors(d) {
  for (var key in d) {
    if (key === "newProp") {
      // would never happen after CCA
      // if not changing the code
    }
  }
}
So not only do we need to type more, with less help from autocomplete, since strings are strings and not every IDE supports dynamic access, but every time we want to be sure a key is the one we expect, in a state machine or a more common switch(), we need to refactor, with all the consequences we know refactoring has in big, as well as smaller, projects!
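As a hypothetical sketch of that hazard: once ADVANCED mode renames obj.newProp to something like obj.a, a string-based dispatch like this can never match again, unless every case is rewritten too (handleNew and handleOld are placeholder names for whatever your state machine does):
function handleNew(v) { return "fresh " + v; }
function handleOld(v) { return "legacy " + v; }

function dispatch(key, value) {
  // after ADVANCED munging, these string cases
  // no longer correspond to the renamed properties
  switch (key) {
    case "newProp": return handleNew(value);
    case "oldProp": return handleOld(value);
    default:        return "unknown";
  }
}

var obj = {newProp: 1, oldProp: 2};
for (var key in obj) console.log(dispatch(key, obj[key]));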
... and the best part is ...

3. More Things To Be Worried About

The funniest nonsense about Closure Compiler Advanced is that, while everyone is trying to change JavaScript because they believe it is not good as it is and has too many quirks in the specifications, Closure Compiler Advanced adds even more syntax problems to worry about ... isn't this lovely insane?

Danger Indeed !

The unique id used to describe problems caused by Closure Compiler Advanced is #danger, because disasters might happen to everything if you don't follow those rules ... but wait a minute ... shouldn't dev tools help, rather than create more problems to be aware of? Ask yourself ... you are probably still struggling to understand JavaScript coercion, but you had to learn that obj["prop"] is a better way to access a property ... you know what I mean, right?
Even worse, something described in that solution list is the most common use case in modern JavaScript, and guess what ...

4. It Does Not Scale With RequireJS/AMD Modules

True story, bro, and the reasons are many!

Requires Extra Steps In The Loop

This is not a big deal per se, but you must be able to create code that can be put together and then split apart again. While this might sound easy, the elimination of dead code, and an understanding of the code deep enough that closures, namespaces, and just about everything could be inlined, might be a problem. How do you know which piece of code can be split out and still work, for sure, as it did before? There's no way to run unit tests proving that part still works, unless you compile all tests together and are able to split them apart too. How much extra effort, and how many possible troublemaker variables in the middle, for a flow that should be as linear and easy as possible?

Broken Source Map

That's correct: it is not only about splitting the code back when you want to preserve a lazy module loading approach; the source map, the coolest feature ever, won't match a thing anymore, because it is generated for the whole block of all modules and not for the single one, so rows and columns will be screwed ^_^

Worse Performance

In order to export a module and make it compatible with Closure Compiler Advanced after the build, which could be the case if you don't want to put the module source online, or you upload to a CDN only its already minified version, either you create the exports.js version of the file, which means assuming every user bases their build process on Closure Compiler, or you need to manually fix the exports so that methods and properties won't be changed.
// original module for CCA
module["exports"] = {
  "dontTouchThis": genericValue
};
If you are the one using Closure Compiler to create your own script, and you use Advanced Optimizations, the above code will result in module.exports={dontTouchThis:genericValue};
Cool, uh? The only problem is that if you pass that code through Closure Compiler again you are screwed, because the result will be module.a={b:genericValue};.
Accordingly, you have to change back the property that should never be touched, resulting in a minified version that is module["exports"]={"dontTouchThis":genericValue};
Not every browser optimizes that kind of property access at runtime the way Chrome does: you can see mobile browsers in particular, IE, and others going 3X or more faster with direct property access rather than square brackets plus a string. I know, this could be optimized by browsers but ... until now, it is not, and I don't really want to write two different ways to access properties all the time.
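A rough micro-benchmark sketch of the difference; numbers vary wildly per engine, so treat it as an experiment, not a fact:
var obj = {dontTouchThis: 123}, i, result;

console.time("dot access");
for (i = 0; i < 1e7; i++) result = obj.dontTouchThis;
console.timeEnd("dot access");

console.time("brackets + string");
for (i = 0; i < 1e7; i++) result = obj["dontTouchThis"];
console.timeEnd("brackets + string");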

5. A Compromised Security

An automation is an automation, and the current status of software AI is far from perfect or superior, period.
This means that sentences like Crockford's about JSLint, regardless of the excellent interview, should be taken metaphorically and nothing more:
JSLint is smarter than I am at JavaScript and it is probably smarter than you, too. So as painful as it is: Follow its advice. It is trying to help you.
If you think the same about Closure Compiler in Advanced mode, here is the easy answer: this tool has one task to do, and that task is not to understand the code the way developers do.
For example, if you want to avoid having your whole namespace removed by mistake from the global scope, and you don't have ES5 possibilities, as broken as these are in the Closure Compiler world, you will certainly declare your namespace as:
// if already defined in the global scope,
// do not redefine it!
// handy for libraries required in different
// other libraries
var myNamespace = myNamespace || {
  utility: {
    // the magic
  }
};
As you know, var defined variables cannot be deleted.
delete myNamespace;
delete window.myNamespace;
delete this.myNamespace;
// it doesn't matter
typeof myNamespace; // "object" !!!
This is because there's no way someone can delete that namespace by accident ... how? Here is an example:
// let's pretend this is a global utility
window["clearProperties"] = function () {
  var self = this;
  for(var key in self) {
    delete self[key];
  }
};
This will compile into the next snippet, currently without warnings:
window.clearProperties=function(){for(var a in this)delete this[a]};
Right ... I have to tell you something ... first of all:

That Is A Bug

Being software, Closure Compiler can have bugs like any other software you create or know.
Since, as I have said, it is not really smarter than us, Closure Compiler created in this case a piece of code that, if you pass it to the compiler again, will throw errors all over the place.
JSC_USED_GLOBAL_THIS: dangerous use of the global this object at line 1 character 59
window.clearProperties=function(){for(var a in this)delete this[a]};
In case you are wondering what the initial bug is: using the next piece of code, without the var self = this; indirection upfront, does cause warnings:
window["clearProperties"] = function () {
  // GCC not that smart here
  // it does not understand addressed `this`
  // warnings only in this case
  for(var key in this) {
    delete this[key];
  }
};
So, let me recap: Closure Compiler generates code you should not write ... got it?
That's how much advice from any automated code transformation is worth ...

That Namespace Can Be Deleted

So here is my main point: you don't want the namespace to be deletable, but if you write just the very first snippet, with a variable declaration, the output will most likely be this one:
Your code compiled down to 0 bytes!
And no warnings at all ... great ... so how about another boring syntax change to learn such:
var myNamespace =
  window.myNamespace =
  myNamespace ||
  {
    utility: {
      // the magic
    }
  }
;
No way: that crap compiles to var b=window.a=b||{b:{}}; with one warning, which means we have to double-check the warning to be sure it is safe; plus we have double global namespace pollution, with both a and b ... awesome ... now ask yourself: should I waste my time investigating Closure Compiler Advanced problems, rather than focusing on software development?

Closure Advanced Is Just Bad

I don't know why the advanced optimization option became so popular, but I've personally never used it and never suggested it.
The amount of extra crap caused by this obtrusive option is massive; you don't even want to think about it!

Myths About Size

One reason I've been thinking about is that somebody said this option is able to make miracles within the code. Apart from the fact that I've never considered a broken environment a worthy miracle, if the miracle is the elimination of dead code, so that even if you want to hook in at runtime you are screwed ... well, enjoy!
However, that code is there for a reason 90% of the time, and if you have something like 30% of code that should not be there once minified, ask yourself why you are using Closure Compiler rather than cleaning up your bloody messed up and pointless code: profit!

The Size That Should Matter

Let's be realistic ... if you have 1MB of JavaScript, the problem is not in those few bytes you could save by minifying properties. You are also probably not considering that these two snippets are equivalent:
// how you write for CCA
window["namespace"] = {
  "dontChangeSinceExported": {}
};

// how is JS
var namespace = {
  dontChangeSinceExported: {}
};
The funny part is that after minification the second, normal, natural, fully JS approach, wins!
// after CCA
window.namespace={dontChangeSinceExported:{}};

// after SIMPLE_OPTIMIZATION
var namespace={dontChangeSinceExported:{}};
3 bytes less and a safer namespace ... hell yeah!!!

As Summary

Closure Compiler is really the best tool I know for many reasons, including the ability to emit warnings if you don't respect its JavaDoc-like annotations, but the ADVANCED_OPTIMIZATIONS flag has so many problems that I wonder if it is really worth adopting. Simply minified code, produced by a minifier able to generate source maps, understand type checks, and keep your logic, as a developer, always available, is a full win that, together with gzip compression, will never be the **real** bottleneck.
Dead code elimination is a good hint for a tool, not for a minifier able to screw up logic and code. I'd love to see dead code elimination detached from the minification logic, or done upfront with the ability to understand what really is dead code and drop it, rather than letting an automation decide that was OK to do.
My 2 cents

A Tiny Update

One thing I forgot to mention is that, AFAIK, Closure Compiler Advanced supports JavaDoc-style documentation by default, but this is not capable of telling us, inside the generated HTML, whether a property should be accessed as an exported one, via obj["property"], or as a mungeable one, as obj.property could be.

Saturday, January 12, 2013

A Simplified Universal Module Definition

There are several UMD ways to define a module with zero dependencies, but I believe they carry quite an overhead, so here is an alternative:
// please read next example too
// which seems to be better for AMD tools
(this.define || Object)(
  this.exportName = exportValue
);
Update with a nice hint by @kentaromiura: the fallback side of define could be just a function, and not a new Function as initially proposed.
Another update, suggested by @unscriptable, should not break AMD tools, nor CommonJS, nor a global Rhino or window, while still leaking a couple of properties that should be reserved for module stuff anyhow.
var
  define = define || Object,
  exports = this.exports || this // *
;
define(
  exports.module = {}
);
Nice one. And if you are wondering why this.exports is needed: it is because RingoJS apparently does not follow the de-facto standard where this is the module exports object, as it is in other CommonJS implementations.

Even Better

If you don't want to pollute the scope at all:
(this.define || Object)(
  (this.exports || this).module = {}
);
now all patterns in one for easy portability, cool?

What Is `this`

In node, this is always the exports object, if the module has been loaded through require, so this.name = value is no different from exports.name = value.
Outside the node.js world, this will be the global object, or any augmented environment or namespace if the file is loaded as text and evaluated apart.
This automatically makes the module mock-ready.
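A tiny sketch to verify that claim in both environments (whoami.js is a hypothetical file name):
// whoami.js
console.log(
  typeof exports != "undefined" ?
    this === exports : // true when require()d in node
    this === window    // true in a plain browser script
);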

Less Fairytales, Please

This trend of double-checking that the define function has a truthy amd property is hilarious. The whole purpose of going modular is to avoid global object pollution, so if you have in your code any script that creates an insignificant define variable, function, or property in the global scope, drop that jurassic script, or rename that function in that script, instead of checking that property all over.
Also, please consider that a property could be part of any object, so that check gives us nothing ... right? (but I wouldn't name a property `amd` ... sure, just as I wouldn't name a global function define ...)

A Basic Example

// what.js
(this.define || function(){})(
this.what = {
  ever: function () {
    console.log("Hello");
  }
});

// node.js
var what = require('./what.js').what;
what.ever(); // Hello

// browser
<script src="what.js"></script>
<script>
what.ever(); // Hello
</script>

// AMD
require(['what.js'], function (what) {
  what.ever(); // Hello
});
All this works with closures too, if necessary, so you don't pollute the global scope in non-node environments.
(this.define || function(){})(
this.what = function(){
  var Hello = "Hello";
  return {
    ever: function () {
      console.log(Hello);
    }
  };
}(/* **INSTANTLY INVOKED** */));
In summary, if you want to create a tiny library, utility, or module, and you don't have/want dependencies, this approach has a minimum overhead of 39 bytes without compression, versus 158. Now multiply this by each file you have there, and also try to remember my suggested approach; I bet you have it memorized already ... now recite the other one, with its if/elses :P (a common variant of it is sketched below)
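For contrast, here is a minimal sketch of one common if/else UMD variant this pattern replaces (not the only one in the wild, and reusing the what module name from above):
(function (root, factory) {
  if (typeof define === "function" && define.amd) {
    define(factory);            // AMD
  } else if (typeof exports === "object") {
    module.exports = factory(); // CommonJS
  } else {
    root.what = factory();      // browser global
  }
}(this, function () {
  return {ever: function () { console.log("Hello"); }};
}));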

But ... Global Scope Polluted With Define ?

I believe this is another fairytale. If you understand how AMD works, you realize that the moment you execute code in the closure, the module has been loaded correctly and passed as the argument, which means, basically: who cares what that argument is outside that scope.
That said, when it comes to globals, name clash possibilities are really low. The classic $ function or the _ one are two examples, but again: if you use AMD you receive the right argument, while if you don't use AMD and you just inject the script, you do want that property in the global scope. And again, if you don't use a module loader, it's your problem if you have both underscore and lodash in your global scope. Just drop one of them ... and enjoy this simplified UMD :)

Sunday, January 06, 2013

The Power Of Getters

While examples used in this post are implemented in JavaScript, concepts discussed here about getters are, in my experience, universally valid.
No matter if we are programming client or server side, getters can be full of wins, and if you think getters are a bad practice because of performance, keep reading: you might realize getters are a good practice for performance too.
In summary, this entry is about getters and a few patterns you might not know or have ever thought about. But yeah, it is a long one, so ... grab a coffee, open a console if you want to test some snippets, and enjoy!
Update
JavaScript allows inline runtime overloading of inherited getters, so that properties can be redefined as such when and if necessary.
This is not possible, or not so easy, in Java or other classical OOP languages.
This update is for those thinking this topic has already been discussed somewhere else and there's nothing more to learn ... well, they might be wrong! ;)

What

Generally speaking, a getter is a method invoked behind the scenes, transparently. It does not require arguments and it looks exactly like any other property.
Historically represented by the __defineGetter__ Object method in older browsers (except IE), ES5 lets us use the more elegant and powerful Object.defineProperty method, while older IE could use, when and if necessary, VBScript madness. However, consider that today all mobile and desktop browsers support getters, as do server-side JS implementations, as shown in this table. Here is the most basic getter example:
var o = Object.defineProperty(
  {},    // a generic object
  "key", // a generic property name
  // the method invoked every time
  {get: function () {
    console.log('it works!');
    return 123;
  }}
);

// will log 'it works!'
o.key; // 123

// this will throw an error
o.key();
// getters are not methods!
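For completeness, the same getter can also be expressed with the ES5 object literal syntax:
var o = {
  get key() {
    console.log('it works!');
    return 123;
  }
};

o.key; // logs 'it works!', then 123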
I know, pretty boring so far and nothing new so ... let's move on!

Why

Getters are usually behind special behaviors, such as read-only non-constant properties, as HTMLElement#firstChild could be, or (reaction|mutation)able properties such as Array#length.
// DOM
document
  .body
  // equivalent of
  // function () {
  //   return document
  //     .getElementsByTagName(
  //       "body"
  //      )[0];
  // }
  .firstChild
  // equivalent of
  // function () {
  //   return this
  //     .getElementsByTagName(
  //       "*"
  //      )[0];
  // }
;

// Array#length
var a = [1, 2, 3];
a.length; // getter: 3
a.length = 1; // setter
a; // [1]
a.length; // getter: 1

A First Look At Performance

If we perform an expensive operation, like the one described to obtain the body, every single time, of course performance cannot be that good. The thing is, the engine might perform it every time because it must stay reliable when we ask for the body again, this being just a node inside the documentElement that can be replaced, like any other node, at any time.
However, even the engine could be optimized when it comes to widely used accessors, as firstChild could be, and this is what we can do as well with our own defined getters (and if you are wondering how to speed up document.body access, well ... just use var body = document.body; at the top of your closure, if you are sure no other script will ever replace that node, which is 99% of use cases, I guess ... drop that script otherwise :D)
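A tiny sketch of that caching advice, under the stated assumption that nothing ever replaces the node:
(function () {
  // cached once: later accesses are plain local reads,
  // no engine lookup through the document
  var body = document.body;

  body.appendChild(document.createElement("div"));
  var first = body.firstChild; // getter, but on a cached node
}());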

A DOM Element Example

Use cases for getters are many, but for the sake of explaining this topic I have chosen a classic DOM simulation example. Here is the very basic constructor:
// A basic DOM Element
function Element() {
  this.children = [];
}
That is quite a common constructor shape for many other use cases too, right? What if I told you there is already something inefficient in that simple constructor?

Already Better With Getters

So here is the thing: when we create an object, it might have many properties that are objects, arrays, or any sort of instance, right? Now ask yourself: am I going to use all those objects by default, or instantly?
I believe the answer will be, most of the time: NO!
function Element() {}
// lazy boosted getter
Object.defineProperty(
  // per each instance
  Element.prototype,
  // when children property is accessed
  "children", {
  get: function () {
    // redefine it with the array
    // dropping the inherited getter
    return Object.defineProperty(
      // and setting it as own property
      this, "children", {value: []}
    ).children;
  }
});

// example
var el = new Element;
// later on, when/if necessary
el.children.push(otherElement);
We can see the above benchmark results here. In the real world, the boost per instance creation, and the lazy initialization of many object properties, make the benchmark even more meaningful.
Moreover, what jsperf never shows is the lower amount of RAM used when adopting this getter-based pattern. It is true that we have bigger overhead in the code itself, but unless every instance uses those properties, the number of objects to handle is reduced N times per instance creation, and this is a win for Garbage Collector operations too.

Recycling The Pattern

OK, that looks like a lot of overhead for such a common pattern, when it comes to properties as objects, so how could we reuse it? Here is an example:
function defineLazyAccessor(
  proto,        // the generic prototype
  name,         // the generic property name
  getNewValue,  // the callback that returns the value
  notEnumerable // optional non-enumerability
) {
  var descriptor = Object.create(null);
  descriptor.enumerable = !notEnumerable;
  Object.defineProperty(proto, name, {
    enumerable: !notEnumerable,
    get: function () {
      descriptor.value = getNewValue();
      return Object.defineProperty(
        this, name, descriptor
      )[name];
    }
  });
}

// so that we can obtain the same via
defineLazyAccessor(
  Element.prototype,
  "children",
  // the new value per each instance
  function () {
    return [];
  }
);
The callback-provided value is a compromise on raw performance, but worth it. An extra call per property, happening once, should never be a problem, while RAM, GC operations, and initialization per instance, especially when many instances are created, could be quite a bottleneck.
Now, back to the main constructor :)

The Element Behavior

For this post's sake we would like to simulate appendChild(childNode) plus firstChild and lastChild. Theoretically, the method itself could be the best place to implement this behavior, something like this:
Element.prototype.appendChild = function (el) {
  this.children.push(el);
  this.firstChild = this.children[0];
  // to make the code meaningful with the logic
  // implemented later on ... this is:
  this.lastChild = this.children[
    this.children.length - 1
  ];
  // instead of this.lastChild = el;
  return el;
};
The above snippet is compared with another one, which we'll see later on, in this benchmark.

Faster But Unreliable

Yes, it is faster, but what happens if someone uses another method, such as replaceChild(), passing, for example, a document fragment, so that the number of children changes? And what if the other method changes the firstChild or the lastChild?
In a few words, inline property assignments are not an option in this case, so let's try to understand what we should do in order to obtain those properties, and clean them up easily from other methods.

An Improved defineLazyAccessor()

If we want to be able to reconfigure a property or reuse the inherited getter, the function we have seen before needs some change:
var defineLazyAccessor = function() {
  var
    O = Object,
    defineProperty = O.defineProperty,
    // be sure no properties can be inherited
    // reused descriptor for prototypes
    dProto = O.create(null),
    // reused descriptor for properties
    dThis = O.create(null)
  ;
  // must be able to be removed
  dThis.configurable = true;
  return function defineLazyAccessor(
    proto, name, getNewValue, notEnumerable
  ) {
    dProto.enumerable = !notEnumerable;
    dProto.get = function () {
      dThis.enumerable = !notEnumerable;
      dThis.value = getNewValue.call(this);
      return defineProperty(this, name, dThis)[name];
    };
    defineProperty(proto, name, dProto);
  };
}();
At this point we are able to define firstChild and lastChild, and remove them every time we appendChild():
// firstChild
defineLazyAccessor(
  Element.prototype,
  "firstChild",
  function () {
    return this.children[0];
  }
);

// lastChild
defineLazyAccessor(
  Element.prototype,
  "lastChild",
  function () {
    return this.children[
      this.children.length - 1
    ];
  }
);

// the method to appendChild
Element.prototype.appendChild = function(el) {
  // these properties might be different
  // if these were not defined or no children
  // were present
  delete this.firstChild;
  // and surely the last one is different
  // after we push the element
  delete this.lastChild;

  // current logic for this method
  this.children.push(el);
  return el;
};

Optimize ... But What?

It is really important to understand what we are trying to optimize here, which is not the appendChild(el) method, but repeated access to firstChild and lastChild, assuming every single method, as well as the rest of the surrounding code, will somehow use these properties.
Accordingly, we want to be sure that these are dynamic, but also assigned once and never again until some change is performed. This benchmark shows the performance gap between the always-getter version and the currently suggested optimization. It must be said that V8 does an excellent job optimizing repeated getters, but we also need to consider that daily code is, I believe, much more complex than what I am showing here.

Avoid Boring Patterns

The repeated delete thingy is already annoying and we have only two properties. An easy utility could be this one:
function cleanUp(self) {
  for(var
    // could be created somewhere else once
    name = [
      "lastChild",
      "firstChild" // and so on
    ],
    i = name.length; i--;
    delete self[name[i]]
  );
  return self;
}
We could use above function in this way:
Element.prototype.appendChild = function(el) {
  cleanUp(this).children.push(el);
  return el;
};

Still Boring ...

We could also automate the creation of the cleanUp() function, also making the definition of all these lazy accessors simpler. So, how about this?
function defineLazyAccessors(proto, descriptors) {
  for (var
    key, curr, length,
    keys = Object.keys(descriptors),
    i = 0; i < keys.length;
  ) {
    curr = descriptors[
      key = keys[i++]
    ];
    defineLazyAccessor(
      proto,
      key,
      curr.get,
      !curr.enumerable
    );
    if (curr.preserve) keys.splice(--i, 1);
  }
  length = keys.length;
  return function cleanUp(self) {
    self || (self = this);
    for(i = 0; i < length; delete self[keys[i++]]);
    return self;
  }
}

var cleanUp = defineLazyAccessors(
  Element.prototype, {
  children: {
    preserve: true,
    enumerable: true,
    get: function () {
      return [];
    }
  },
  firstChild: {
    get: function () {
      return this.children[0];
    }
  },
  lastChild: {
    get: function() {
      return this.children[
        this.children.length - 1
      ];
    }
  }
});
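With the generated cleanUp in place, appendChild stays as small as before; a minimal sketch, reusing the names defined above:
Element.prototype.appendChild = function (el) {
  // firstChild/lastChild are dropped,
  // children is preserved (preserve: true)
  cleanUp(this).children.push(el);
  return el;
};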

Benchmark All Together

OK, it's time to test what we have optimized so far. The test tries to simulate an environment where the most common operations are Element instance creation and firstChild and lastChild access:
function benchIt(Element) {
  // 200 instances
  for (var i = 0; i < 200; i++) {
    var el = new Element;
    // 5 appendChild of new Element per instance
    for (var j = 0; j < 5; j++) {
      el.appendChild(new Element);
    }
    // 100 firstChild and lastChild access
    for (j = 0; j < 100; j++) {
      result = el.firstChild && el.lastChild;
    }
  }
}
For some reason, and I believe it's a sort of false positive due to the benchmark's nature, Chrome is able to optimize those repeated getters more than expected. It is, however, the only one faster in this bench, but that is kind of irrelevant if you understood the point of this post ... let's summarize it!

Getters Are Powerful Because

  • can be inherited, and manipulated to improve performance when and if necessary
  • can help complex objects initialize one or more heavy properties later on, and only if necessary
  • can be tracked in an easier way, simply by adding a notification mechanism for each time the getter is accessed (see the sketch below)
  • APIs look cleaner and more transparent to their users
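As a minimal sketch of that tracking idea, with a hypothetical log collector:
var log = [];
var tracked = Object.defineProperty({}, "key", {
  get: function () {
    log.push(Date.now()); // notify on each access
    return 123;
  }
});

tracked.key;
tracked.key;
log.length; // 2 accesses recorded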
Enough said? Your comments are welcome :)