
Thursday, June 14, 2012

Ranting About Racing Engines

Update: regardless, this rant is still valid. I have updated my es6-collections, for both the Web and node, following the stricter specs currently available.


This is happening right now, or better, it has been happening since this race started a while ago between alternative browsers and the idea of bringing JavaScript every-bloody-where ...

On Engines Fragmentations

This has happened forever with vehicle engines, and we can all see the difference in terms of both prices and CO2 emissions. Every major car/motorbike engine manufacturer is competing against the others.
N times the amount of money spent, N times the number of slightly different patents, potentially able to make better engine production slower, or farther off, due to patent lifecycles.
Almost zero joined effort ... the same studies over and over with, of course, different solutions. These are always welcome, but rarely a shared effort between teams able to bring the zero-emissions, ecologic, and usable engine we are all dreaming about ( look at all these prototypes with batteries rather than gasoline ... look at the very first and only hybrid diesel engine, etc. etc. ... now ask yourself how long it will take before we can all afford these engines for real ... )

The Web Is Different ... But

Well, at least there are tons of groups trying to bring some Harmony between JavaScript engines.
I believe it is still good that there are different implementations such as V8, Nitro, SpiderMonkey, Chakra, whatever ... but when it comes to the race, all these engines are adopting early versions of specs that are still under definition, and this is bad 'cause ...

The Side Effect

"Works only in Google Chrome" is the very first side effect of this fragmentation, and while earlier/faster adoption of the most recent standards and drafted specs can be considered one step forward, the idea that a Web page works only in a single browser, no matter which one it is, brings us back to the year 2000 ... have we learned anything since?
Chrome engineers are great and extremely fast, and the same goes for all WebKit contributors, as well as SpiderMonkey and all the other engines, but if we find 2345678 different APIs that work inconsistently across all JavaScript engines, I believe we are doing it wrong.
Here are just a few examples:
  • W3C effort to bring more HW access through the Web
  • PhoneGap APIs to expose native access to JavaScript and Web Pages
  • webOS APIs to expose native access to JavaScript Apps
  • Boot 2 Gecko APIs to have HW access through JavaScript
  • ChromeOS APIs to have lower level access
  • Safari Mobile APIs not present in other mobile browsers
  • Opera and Opera Mobile proprietary namespace ( really guys, please drop that window.opera thingy to start with ... )
  • Adobe APIs to interface Flash objects and JavaScript for Air or generic plugin content
  • newest ES6 APIs ... so cool, and so unstable at the same time ...
About that last point: I am unable to push the updated version of my es6-collections, and I'll tell you why ...

The Curious Case Of Harmony Collections


The respective prototypes of Map, Set, and WeakMap are changing all the time ... oh well, this is a common side effect of adopting features that are under discussion on a daily basis and not yet defined anywhere.
In any case, I would like to use the native constructor where available, and this is true for both Firefox (Aurora) and Chrome (Canary with experimental features enabled).
While updating my es6-collections code I realized, after changing tests and implementing the desired behavior, that Aurora has a size() method that does not exist in Canary, and that its Map#set(key, value) does not return the value as it does in Canary.
Who is correct? Who is wrong? It doesn't actually matter: if these methods are frozen in the prototype, I cannot fix anything there, and I need to create another subset with a different name in order to obtain a fixed, consistent version of these constructors across all platforms, most likely penalizing Aurora or Canary by being unable to use the respective native implementation ...
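A hedged feature-test sketch of that decision might look like the following; the exact checks are illustrative assumptions, not anything the spec guaranteed at the time (for the record, engines eventually settled on set() returning the map itself, and on size as an accessor rather than a method):

```javascript
// Illustrative feature test: only trust the native Map if it behaves
// consistently with what we need (the checks below are assumptions)
var nativeMapIsUsable = (function () {
  if (typeof Map !== "function") return false;
  try {
    var m = new Map();
    m.set("key", "value");
    // a basic round-trip must work, whatever set() itself returns
    return m.get("key") === "value" && m.has("key");
  } catch (err) {
    return false;
  }
})();
// if false, fall back to the shimmed constructor instead of the native one
```

The point is exactly the one of this rant: when two dev channels disagree, the only safe move is to test behavior, not presence.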

Why Native

Native is fast, native is good, native is the way to go, and it is the reason we create polyfills: to be able to remove them once our current A-grade target browsers already support this or that API, and it works as expected and as fast as possible.
About working as expected once we deal with a native API ... I don't even want to start that conversation, 'cause there are too many things to consider here ... my take on this is: don't fix browser bugs unless it's IE, which won't update automatically, and only if IE is an interesting target browser for your app.
As easy as that ... if a browser vendor realizes the gravity of a bug, the team will fix it asap ... if we show easy workarounds to a specific bug all over the place, they won't consider that problem a show stopper.
Even better, always file a bloody bug with a proper description, a link to the specs, and a use case that makes sense, so that these guys can properly understand the gravity and what this is about ... OK? A bit more effort from the Web community itself about filing bugs can only bring a better Web for all of us.
In any case, if we cannot trust native behavior across platforms, we need to create yet another boring library able to fix all these things for us ... and this is getting ridiculous, imho ... I don't want to re-fix the browser for every single native call I make from the JavaScript core ... I would just like to use the programming language and its native "things" and focus on something else, something more, something productive!
Today, for a basic web page with a stupid form to submit, we add a minimum of 200Kb of fixes through some sort of "lightweight" library ... don't we? Is this what "JavaScript everywhere" is becoming, day after day? If you don't use a library to fix native stuff, you cannot do much?

Not Only JavaScript

You don't need me to tell you that the CSS world is even more messed up than the JavaScript one ... and the reason is basically the same: fragmentation of rules, and new features using prefixes all over the place, which results in CSS five times bigger, with things repeated all over ... the most disgusting piece of "code" ever in Web history, which ... needs to be fixed with libraries, again? "meh" is the maximum expression I have for this without being vulgar ...

What Can We Do

I have no idea ... and in my specific case, talking about es6-collections, I have to wait and nothing else. I have to wait in order to be sure one behavior is correct while another one is not, and finally penalize browser A or browser B, feature-testing the inconsistency and throwing away native constructors in favor of shimmed ones ... so that my tests, at least the tests, will be consistently green across all supported engines/platforms.
Was it my fault in the first place to propose polyfills for something still that unstable and not properly implemented in these browsers' developer channels? Probably yes, but many others are bringing a few ES.Next things through libraries and polyfills, so be aware that things might screw up without notice from one day to the next due to automatic updates; if you don't follow all these libraries on a daily basis, you might find yourself in a situation where all your code needs to be rewritten ... and I guess none of us would like that situation much ...

Shim The Spec

To avoid misunderstanding: polyfills for already official and approved specs are always welcome. This is the case for ECMAScript 5 or 5.1, for example, where things are not going to change any time soon, while everything related to Harmony and ES6, if a shim is even possible, should be categorized as "experimental, might not work, I knew it was going to break somewhere" ... and this is true even for what I am proposing with es6-collections: it's cool! ... but be careful, even if the shim you are using is not mine.

P.S. the current version has keys and values, plus it does not work with edge cases such as NaN or -0 when it comes to Map keys ... the local version I have is green everywhere and implements the specs properly, so stay tuned if you want a closer-to-ES6 version of these constructors' API, and meanwhile start avoiding the keys and values properties if you are using that code already ( about NaN and -0 I don't bother much: I think using these values as keys is an error in any case ... however, the next version will have a better indexOf able to match NaN and -0 too, so that the specs, those we know today, will be respected, forcing the implementation not to trust native Array#indexOf and its inconsistent results with values that are not reflective ).

Last, but not least, I could not do much more than ask for clarification in the ECMAScript group ... so even filing a bug in this case is kinda pointless, since I don't know what to write there except that another engine does something different about something that's not fully defined yet: Math.pow(MEH, 31)

Friday, June 08, 2012

Asynchronous Storage For All Browsers

I have finally implemented and successfully tested the IndexedDB fallback for Firefox, so that now every browser, old or new, should be able to use this interface, borrowed from the localStorage API but made asynchronous.

Asynchronous Key/Value Pairs

The main purpose of this asyncStorage API is to store large amounts of data as strings, including base64 versions of images or other files.
Usually values are the bottleneck, RAM consumption speaking, while keys are rarely such a big problem.
Accordingly, while keys are retrieved asynchronously, in a non-blocking way, and then kept in memory, the respective values are always retrieved asynchronously, in order not to fill the available RAM of our Web Application.

Database Creation/Connection

Nothing more than ...

asyncStorage.create("my_db_name", function (db, numberOfItems) {
  // do stuff with the asyncStorage
});


Storing An Item

As it is for localStorage, but async

asyncStorage.create("my_db_name", function (db, numberOfItems) {
  db.setItem("a", "first entry", function () {
    // done, item stored
  });
});

Please note that if the item was there already, it's simply replaced with the new value.

Getting An Item

As it is for localStorage, but async

asyncStorage.create("my_db_name", function (db, numberOfItems) {
  db.getItem("a", function (value) {
    // done, value retrieved
  });
});

Please note that if the item was not previously stored, the returned value will be exactly null as it is for localStorage.

Removing An Item

As it is for localStorage, but async

asyncStorage.create("my_db_name", function (db, numberOfItems) {
  db.removeItem("a", function () {
    // done, no key "a" is present anymore,
    // so length is already decreased here
    // and db.getItem("a") will send a null value
  });
});


Removing All Items

As it is for localStorage, but async, and bearing in mind that only values in the specified database will be erased, rather than all of them.

asyncStorage.create("my_db_name", function (db, numberOfItems) {
  db.clear(function () {
    // done, database "my_db_name" is now empty
  });
});


Getting All Items Keys

If this is really what you need to do, bear in mind the API is the same as localStorage's, where indeed there's no way to retrieve all keys other than doing something like:

for (var
  keys = [],
  i = db.length;
  i--;
) {
  keys[i] = db.key(i);
}


Getting All Items

Well, the thing here is that you can store an entire object through JSON, so if you need to save and get back everything, it's kinda pointless to store different keys: just use one.
However, this is how I would do this task:

for (var
  object = {},
  complete = function () {
    alert("Done, all items in the object");
  },
  ongetitem = function (value, key) {
    object[key] = value;
    if (!--j) complete();
  },
  i = db.length,
  j = i;
  i--;
) {
  db.getItem(db.key(i), ongetitem);
}


On Github, Of Course

You can find the full API description in this repository. Please forgive me if the name was initially db.js; I believe the new one, asyncStorage.js, is much more appropriate (also, another guy created a script called db.js, so ... well, I have avoided conflicts with that library).

That's Pretty Much It

And I hope you'll start using this API to avoid blocking mobile and desktop browsers when you store a lot of data ;)

Monday, June 04, 2012

Working With Queues

Programming with queues is basically what we do in any case: it does not matter whether we write code that way, we simply think that way.
This means that our logic flow is generally distributed in tasks where "on case X" we apply "logic/procedure Y" ... isn't it?

The Good'ol GOTO

The goto statement has been historically criticized, as has the switch statement, and in both cases the criticism is about entry and exit points in a generic workflow.
Nowadays, we can say the GOTO is not needed anymore thanks to functions: rather than thinking "when this case occurs, goto this instruction", we call the required function in charge of that specific task, providing arguments or context as we go.
We may then agree that GOTO is not really a must-have, while functions are, with all the power and flexibility we might need, especially in JavaScript.
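As a tiny illustration of that shift (the function names here are made up for the example), the "goto error" mindset becomes a plain function call:

```javascript
// hypothetical example: instead of "goto error", we call the function
// in charge of that task, passing along the context it needs
function handleError(message) {
  return "error: " + message;
}

function double(input) {
  // the old mindset would be: "if input is missing, goto error"
  if (input == null) return handleError("missing input");
  return input * 2;
}
```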

On Block-Scope

JavaScript theoretically has no block-scope concept, at least until the very latest versions of ECMAScript, where blocks can be written in the wild.
Blocks are cool for partially independent operations that should not affect the external scope/logic at all, but if we think more about this, the usage of inline function expressions has replaced the block-scope concept for a while now.
Even better, any function in JS could be considered a sort of equivalent of a block scope, with the advantage that we can re-call the same function as many times as we want, within recursion limits, avoiding the GOTO and still using block scopes.

All Together

What if we use as many functions as we need in order to complete our flow, without compromising the external environment, and being able to re-call segments of our flow when something goes wrong?
This can easily be done with a queue system like the one shown below:
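A minimal sketch of such a queue follows; the next() part matches the tweet-sized Queue from the "A Tweet Sized Queue System" post later in this archive, while the wait() method is my reconstruction from its description in the paragraphs below, not the author's original code:

```javascript
function Queue(args) {
  var current;
  // next() shifts and runs the next "block", returning true if one was there
  args.next = function next() {
    current = args.shift();
    return current ? (current(args), true) : false;
  };
  // wait(condition[, delay]) moves on if condition is truthy; otherwise it
  // re-executes the current block after delay ms (0 if not specified)
  args.wait = function wait(condition, delay) {
    if (condition) args.next();
    else setTimeout(function () { current(args); }, delay || 0);
  };
  setTimeout(args.next, 0);
  return args;
}
```

The queue is just the original Array, enriched with next() and wait(), so it can be spliced, pushed, and unshifted at will, as the examples below do.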


A Few Examples

Here is a very basic example of how to use the above queue system. There are two sequential things to do: wait for some truthy condition, then do something.

Queue([
  function (q) {
    q.wait(document.body);
  },
  function onBodyPresent(q) {
    document.body.innerHTML = "Hello queue!";
  }
]);

The wait() method calls the next "block" to execute, as the next function, if any, only if the condition passed as argument is truthy; otherwise it waits "0 ms", if not specified differently, before the same "wait for" block logic is re-executed.
While this example does not show the real potential of a queue-based programming approach, the next one might.

!function () {
  var
    init = function (q) {
      if (!result) {
        number = Math.abs(prompt("factorial of:")) || 1;
        result = 1;
      }
      q.next();
    },
    verify = function (q) {
      if (1 < number) {
        result *= number--;
        q.unshift.apply(
          q,
          program.slice(
            program.indexOf(init),
            program.indexOf(verify) + 1
          )
        );
      }
      q.next();
    },
    showResult = function (q) {
      alert(result);
      q.next();
    },
    program = Queue([
      init, verify, showResult
    ]).slice(),
    result, number
  ;
}();

The above code is a factorial program: all logic blocks are known in advance, and the queue is constantly re-populated until the condition in the middle is satisfied.
Performance is not the best here, but generally speaking performance is never the concern when queue logic is needed, since queues shine for asynchronous tasks, which are rarely good for real-time programming anyway. This factorial program does not use recursion, is quite easy to understand and debug, and does not blow up RAM usage: the functions are recycled, as is the queue, which will never grow beyond 3 indexes. So just wait ... and the result will appear at some point ;-)

Asynchronous Example

As I have said already, queues are great for asynchronous tasks: they help with indentation (never too many nested functions), with the logical workflow (each function does a little, but does it well), and with readability (functions are named in a semantic way, and minifiers will simply shrink them). Even if this does not look that OOP, I bet that once we start getting used to this approach, things will be easier than ever.

!function () {
  // warning: this code is a proof of concept
  // it won't work as it is ...
  var
    // query selector shortcut
    $ = function (css) {
      return document.querySelector(css);
    },
    // while user and pass fields are empty ...
    login = function (q) {
      q.wait(
        $("#user").value.trim()
        &&
        $("#pass").value.trim()
      );
    },
    // once user and pass are not empty anymore
    verify = function (q) {
      var xhr = new XMLHttpRequest;
      xhr.open("post", "verify.php", true);
      xhr.send([
        "user=" + encodeURIComponent($("#user").value.trim()),
        "pass=" + encodeURIComponent($("#pass").value.trim())
      ].join("&"));
      xhr.onreadystatechange = function () {
        if (xhr.readyState == 4) {
          q.result = xhr.responseText;
          // call next function
          q.next();
        }
      };
    },
    // verify if the user exists
    authorized = function (q) {
      if (q.result === "ok") {
        q.push(ok);
      } else {
        // notify the error plus re-queue logic between
        // login and authorized included
        q.push.apply(q, [error].concat(program.slice(
          program.indexOf(login),
          program.indexOf(authorized) + 1
        )));
      }
      q.next();
    },
    // end of the program
    ok = function (q) {
      alert("Welcome!");
      // could go on with the same queue
      // passing through a different program
    },
    // warning for the user
    // plus resetting fields
    error = function (q) {
      alert("user or pass not recognized");
      $("#user").value = "";
      $("#pass").value = "";
      q.next();
    },
    // clone of the whole program
    // reused later to recycle segments
    program = Queue([
      login, verify, authorized
    ]).slice()
  ;
}();


Where Queues Are Used

Well, almost everywhere ... testing frameworks are usually based on queues, especially those with asynchronous support for different tests. The same goes for JavaScript or CSS loaders, which are based on queues when it comes to ordering.
Any sort of stack/Array not representing data is usually a queue we consume during our logic ... promises are queues too, and the same goes for events: these are all queues.
Should I explain more? Probably yes, but I'd prefer that you have a look at these examples, use the simple Queue function I have written, and create something awesome that makes the logic of your app cleaner and better organized.
Last, but not least, did I mention that logging a queue instantly gives us the logical workflow of what's going on, and accordingly where it's going on? Put console.log(q.slice()) anywhere you need, and you'll see all the named functions you are going to deal with after the one executing at that moment.

Tuesday, May 29, 2012

on "manifestcontentavailable" event


Even native apps have something like a preload when it comes to launching them ... and guess what happens during that time?
Nothing that different from synchronizing content or looking for an update, is it?

Same Could Be For Mobile Web Apps
If you need your manifest to be loaded in order to provide the best UX ever, you might need a mechanism to be sure that everything that has to be there ... oh well, is simply there!

AppCache VS Binary
Well, one handy thing about the Application Cache Manifest for Web Apps is that you might update only a single part of your software.
For example, if you have external libraries, you might decide to serve them apart, as external or independent content from the rest of the app ... and rather than downloading the whole thing again, the user will update only that dependency ... I mean, this is cool, but it is rarely used due to common build processes, where a single changed comma in a single file requires the whole app file to update, you know what I mean?
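As a hedged illustration (file names are made up), the idea is to keep the app in its own small manifest, while rarely-changing dependencies live behind a separate manifest, so a change to app.js invalidates only the small one:

```
CACHE MANIFEST
# app manifest, v1.0.2 -- bump this comment to force an update
# of these files only; the big third-party library is served apart,
# behind its own manifest, so users don't re-download it
index.html
app.js
app.css
```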

My "manifestcontentavailable" event proposal
Nothing really fancy here: simply a handy event that could be useful for any sort of app that would like to have everything available ASAP ... where everything is actually any file in the home page, plus any file present in the manifest.
Conditions considered in this proposal are:
  1. the manifest has been fully downloaded, first visit in your page
  2. the manifest changed somewhere, new files have been replaced locally
  3. the manifest didn't change at all, everything is already available
  4. the user is offline, no need to even bother with new manifests or files
All other cases, where the manifest is corrupted ... well, those aren't a matter for this event, since we all have tools to be sure that the manifest points to existing files and everything is OK, right?

This is basically it ... a loading spinner in charge of entertainment for new users and for those waiting to update their content, and a basically instantly available page once users are offline.
The main point is: users download the whole thing once connected and, theoretically, never again thanks to the manifest mechanism.
The page itself doesn't do much more than show a spinner while the real stuff is being downloaded.
Once this content is available, and except for those JSONP calls to our favorite online service, the experience should be as smooth as possible, without hundreds of spinners in the wild during our Web App interaction ... and I believe this is cool!
The script, easily minifiable into a bunch of bytes, should do the trick with all modern/mobile browsers and in a totally unobtrusive way ... if you listen to that event, you get it; otherwise, nothing else will change.
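Since "manifestcontentavailable" is the proposed, non-standard event name, a usage sketch is necessarily hypothetical; the helper below takes the event target as a parameter purely so the idea can be shown (and tested) outside a browser, where the target would simply be window:

```javascript
// "manifestcontentavailable" is the event name proposed in this post,
// not a standard browser event; the helper shape is an assumption
function whenManifestContentAvailable(target, onReady) {
  target.addEventListener("manifestcontentavailable", function handler() {
    target.removeEventListener("manifestcontentavailable", handler);
    // e.g. hide the loading spinner: every manifest file is local now
    onReady();
  });
}
```

In a page this would be `whenManifestContentAvailable(window, hideSpinner)`, fired once per the conditions listed above.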

Why
I believe we are not getting the full power of this manifest concept so far, and a proper event able to notify what's going on there, easy to set up, use, and configure, could be a must-have for many Web Apps out there. Please feel free to comment here with what you think about this approach, and with any side effects or problems I might not have considered, thanks!

Saturday, May 12, 2012

Rubik's Cube Analogies in IT Development World

Oh well, during my free time without a laptop I came up with this talk about the Rubik's Cube and the development process.

I'll let you enjoy the video, hopefully ..., and leave comments here, if necessary.

The big summary is: there's a lot to learn from the cube, so take this as a hint to learn how to solve it and find even more analogies on your daily basis tasks.



Update
if you ever wondered how I created the cube situation for the talk, I simply used RGBA as a clockwise or anti-clockwise approach ... if you have a solved cube, consider the Alpha as the White, and turn anti-clockwise White, Blue, Green, Red ... you'll have a quite messed-up cube that you can always solve easily via Red, Green, Blue, White, or you can solve it the way you know, the way I did, probably with the 2 shortcuts in the middle that I have ignored for this kind of talk.

Tuesday, April 10, 2012

Graceful Migration From ES3 to ES5

Just the slides I have shown today here @ Nokia for a tech talk; hope you'll enjoy them, cheers

Update
Here you can find the updated version of the Object.forEach proposal: gist 2294934
The main changes are about some inconsistent behavior in Safari; now it should work without problems.
Thanks to @medikoo for his hint.

... rock'n'roll ...

Friday, March 16, 2012

On Obtrusive Polyfills

I'll try to make the point short and simple, starting with the tl;dr version: if you create a polyfill and during feature detection you realize it cannot work, drop it and let other libs deal with it.

Broken Object.defineProperty

This is just one really annoying case. Bear in mind I am not talking about "making perfect polyfills", which is almost impossible most of the time; I am talking about a broken runtime even when you did not really do anything that special.
It was the case with Object.defineProperty shimmed through the es5-shim code ... this library is good, but that method, like many others, is simply broken.
I was trying to add a setter/getter in IE < 9 and shit happened, more specifically a thrown exception happened ... the repository itself explains that this method and others should fail silently ... so it's up to the developer to know about it and avoid trusting these methods ...


This Is Not OK

If I check whether Object.defineProperty is there, and this is all I need to know, I don't want to double-check which version of IE it is, if it is IE at all, nor do I want to check anything extra.
I simply trust that the browser, or some library, fixed the problem for me, and I can use this method as it is.

if ("defineProperty" in Object) {
  // I want to use it, wouldn't check for it otherwise
}

A broken shim doesn't make anyone's life easier, while that simple condition in my code could redirect my logic to a completely different approach.
Since in IE < 9 that method simply does not make sense to exist, then it should not exist, period.

Remove The Broken Shim

I hope, and expect, that the es5-shim guys will simply delete Object.defineProperty, once they realize it's not usable.
I would not entirely compromise Object.create either, but again, if defineProperty() is not there, there is no bloody point in using create() too.
Let other libraries deal with this missing method in their own way, since every single case could be different: someone simply needs to inherit another object, someone else might need the method as it should be.

As Summary

A shim/polyfill that works 90% of the time is OK, especially if the API is covered 100%, even if behind the scenes things are not identical. But a polyfill that lets a piece of code think everything is fine, while the whole app breaks who knows when and why, should not be exposed at all ... or, if you really want to bring similar functionality, it should be exposed prefixed, i.e. Object.$defineProperty(), so that the rest of the code is aware that such a method exists but is limited.
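A hedged sketch of that prefixed approach could look like this; the function body is illustrative only (a real fallback would handle more of the descriptor), and the prefixed name makes the limitation explicit instead of silently shadowing the native API:

```javascript
// value-only fallback exposed under a prefixed name, so callers know
// it is limited: no getters/setters, no enumerable/configurable support
function $defineProperty(obj, key, descriptor) {
  if ("get" in descriptor || "set" in descriptor) {
    // fail loudly instead of pretending accessors work
    throw new Error("accessors are not supported by this fallback");
  }
  obj[key] = descriptor.value;
  return obj;
}
```

Code that finds only `$defineProperty` knows it is dealing with the limited version and can pick a different strategy, which is exactly the point of this post.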

Thanks for understanding

Friday, March 09, 2012

A Tweet Sized Queue System

The Code

This is the 138-byte version:

function Queue(args, f) {
  setTimeout(args.next = function next() {
    return (f = args.shift()) ? !!f(args) || !0 : !1;
  }, 0);
  return args;
}

While this is the 96-byte one:

function Queue(a,b){setTimeout(a.next=function(){return(b=a.shift())?!!b(a)||!0:!1},0);return a}


The Why

In almost every QCon London talk I have heard the word asynchronous. Well, the aim of the tweet-sized function I have just written above is to make things easier.
After implementing builder/JSBuilder.js for wru, so that Python is not needed anymore to build the tiny library, I needed to make builder/build.js usable in a way that asynchronous calls through node won't bother, at least visually, the normal flow of the script.
After that I thought: "why not make a generic Queue function with a Promises-like approach"?

The What

It's quite straightforward: you create a queue, you use the queue. You might eventually pollute the queue or change its order, since the queue is nothing different from a generic Array with an extra method on it: next().
Here is a basic example:

Queue([
  function (queue) {
    // do amazing stuff asynchronously
    setTimeout(function () {
      queue.message = "Hello Queue";
      queue.next();
    }, 500);
  },
  function (queue) {
    // do something else
    alert(queue.message);
    // Hello Queue
    queue.next(); // not necessary at the end
    // does not hurt though
  }
]);


That's Pretty Much It

So, the queue argument is the array created initially to define the ordered list of functions.
This means you can recycle this variable, even adding custom properties, but most important is that you can change the queue via the Array.prototype.splice/unshift/pop/push() methods and decide the next function to execute at runtime, if needed.
The only extra bit is that next() returns true or false depending on whether there was a next function to execute or not.
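Using the 138-byte version from above, that boolean can be observed directly, driving next() by hand instead of waiting for the scheduled timeout:

```javascript
// the Queue function from above, verbatim
function Queue(args, f) {
  setTimeout(args.next = function next() {
    return (f = args.shift()) ? !!f(args) || !0 : !1;
  }, 0);
  return args;
}

var q = Queue([function (queue) { /* first and only task */ }]);
q.next(); // → true: a function was there and has been executed
q.next(); // → false: the queue is drained
```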

Last, but not least, of course you can create new Queue([]) inside queues ... so this is actually more powerful than many other queue systems I have seen so far and ... hey, 96 bytes: enjoy!

Thursday, March 08, 2012

I Heard You Like To Write Less

Update: apparently this proposal is being considered in es-discuss ... not the implicit return so far, but the "redundant" function keyword.
This is the last draft from Brendan Eich:

FunctionDeclaration:
function Identifier ( FormalParameterList_opt ) { FunctionBody }
Identifier ( FormalParameterList_opt ) [no LineTerminator here] { FunctionBody }

ShortFunctionExpression:
Identifier_opt ( FormalParameterList_opt ) [no LineTerminator here] { FunctionBody }
Identifier_opt ( FormalParameterList_opt ) => InitialValue



I'm @qconlondon waiting for the next talk so I decided to take a couple of minutes to blog about this little experiment.
In es-discuss, developers are still talking about the function keyword and whether it's needed or not.

I believe the fact that CoffeeScript has no explicit function keyword is one of the major reasons devs have been attracted to it, so I made it even simpler.

Just Drop The Function

With the current ES 3, 5, or 5.1 syntax I can't really see problems or ambiguity in removing that keyword, but if you find a case that could fail, please let me know.
Of course, if we use a keyword we should be able to understand the difference, as with for, if, or others, and this simple test page is just an experiment: you write some code, you "functionize" it, and if the code does not run, it is shown highlighted in red.
The RegExp is quite simple and straightforward, and parsing time should never be a problem, even for big projects.
The return is implicit, a bit Ruby style, but of course you can choose to return in the middle of a function.
The thing is, the very last statement will be returned, and you can play with the results.

I wonder if anyone would use such syntax on daily JavaScript coding. The parser needs improvements in any case so just think about it and let me know. Cheers

Friday, March 02, 2012

What's localStorage About

I've read even too many articles recently talking about storing anything you want in localStorage ... and I believe this is wrong on so many levels.
First of all, localStorage is not a database, 'cause if you consider it a database, you should have considered document.cookie a database before ...

As Simple As That

document.cookie has been there, problematic as it is, since forever. The limit of this technique has always been about 2Mb on average across different browsers.
Have you ever asked yourself what the hell document.cookie is doing and how come it's instantly available?
Well, it's about storing some stuff in your hard disk or your browser's (SQLite) database, so that every single request will send these key/value pairs through the HTTP request.
The side effect of this technique is that the more you pollute cookies in your site to remember stuff, the slower the interaction will be, especially on mobile, since all of this info has to be sent to the server on each bloody request.

What's localStorage Good For

First of all, don't forget localStorage does synchronous I/O, which means it's blocking, and the bigger it is, the slower it will be to retrieve or store data. Secondly, don't forget that if every script in your page is using it, the chances that its roomy 5Mb will be filled up are higher.
In summary, the moment you start storing files there, you are kinda doing it wrong. You don't know how long it takes to retrieve or store data in localStorage, and the moment you feature-detect this delay, you have already potentially compromised your page/app visualization time.
The localStorage best practice is to keep it small, as small as you have hopefully kept, until now, any document.cookie related task.
Of course, if your code is the only one using localStorage and you have things under control, you may consider using it as a generic key/value pairs DB, but on the Web, where usually more than one framework/library is in place, you can't pollute this storage as much as you like, because it's like polluting the global scope with any sort of crap ... and I hope we agree this is bad, right?
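A hedged sketch of keeping things under control is a tiny prefixed wrapper; the wrapper name and the "my-" prefix are made up for the example, and in a browser you would pass the real localStorage as the storage argument:

```javascript
// minimal prefixed wrapper: one script's keys can't collide with
// (or be mistaken for) another library's entries
function NamespacedStorage(prefix, storage) {
  this.prefix = prefix;
  this.storage = storage; // e.g. localStorage in a browser
}
NamespacedStorage.prototype.setItem = function (key, value) {
  this.storage.setItem(this.prefix + key, value);
};
NamespacedStorage.prototype.getItem = function (key) {
  return this.storage.getItem(this.prefix + key);
};
NamespacedStorage.prototype.removeItem = function (key) {
  this.storage.removeItem(this.prefix + key);
};
```

It does not solve the shared 5Mb quota, of course, but at least every key is clearly owned, which is the polite version of the greedy cleanup loop shown below.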

As Less As You Can Remembering It's Public

True story, bro: if you care about your code's behavior, don't trust, believe in, or use this storage that much. Store what you think is necessary and nothing more; use proper DB interfaces to store more, use the browser cache if you have your server under control, use the manifest if you want to make your app available offline.
If you are thinking of speeding up the initialization of your page, without considering localStorage side effects, by storing your script once because it's too heavy ... well, between the moment you block the I/O to read that script and the moment you evaluate it, you have gained zero performance boost in 90% of the cases.
Keep it small, think about other libraries, never be greedy with localStorage. The worst case scenario is that your web page temporarily kills its own startup time, or the stored data exceeds 5Mb and we are screwed, or other libs in your page are so greedy that they erase everything that's not in their prefixed key namespace in a call like this:

for (var key, i = localStorage.length; i--;) {
    key = localStorage.key(i);
    if (key.slice(0, 3) !== "my-") {
        localStorage.removeItem(key);
    }
}

That's without considering that a single clear() call performed by another library would destroy all the data you rely on.

Bear In Mind

document.cookie should be used as a client/server channel, while localStorage is a client-only matter. Put in cookies what the server should know, and put in localStorage what you would like to remember for the user, always keeping in mind, as explained before, that its data is available to any other lib in your page.
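As a sketch of the client/server channel side, here is a tiny helper that builds a document.cookie assignment; the helper name is an assumption, while the attributes are standard cookie syntax:

```javascript
// build a cookie assignment string: this pair will be sent back
// to the server with every request, so keep it tiny
function cookiePair(name, value, days) {
    var pair = encodeURIComponent(name) + "=" + encodeURIComponent(value);
    if (days) {
        var date = new Date();
        date.setTime(date.getTime() + days * 86400000);
        pair += "; expires=" + date.toUTCString();
    }
    return pair + "; path=/";
}

// browser usage: document.cookie = cookiePair("lang", "en", 7);
```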

So ... now you know ;)

Saturday, February 18, 2012

JavaScript Test Frameworks: more than 30 + 1

After @szafranek's hint and suggestion, wru landed almost at the end of the Wikipedia List of unit testing frameworks page.

If you run this tweet-size, hand-made, imperfect script in the console on that Wikipedia page:

a=q="querySelectorAll",[].map.call(document[q](".mw-headline,.wikitable"),function(v,i){i%2?a=v.textContent:o[a]=v[q]("tr").length},o={}),o

You'll see that JavaScript is third when it comes to the number of test frameworks ... and I'm not sure that's good. Anyway, here is a quick description of mine.

About wru

You can find most info in the github page itself, but essentially wru is a general purpose, environment agnostic JavaScript test framework compatible with both client and server, where client means every bloody browser, and server means Rhino, node.js, and recently phantom.js too.
To be really fair, wru is not exactly a unit test framework, since it does not provide by default anything related to mocks or stubs.
However, wru is so tiny and unobtrusive that any 3rd party library could be integrated without effort into whatever test you want + need.

wru In A Nutshell

Well, the first thing to do with wru is to write code such:

wru.test([
    {
        name: "the test name, for feedback purpose",
        setup: function (obj) {
            // the OPTIONAL setup property performed before the test
            // obj will be a freshly created object
            // with a single test lifecycle ... reusable within the test
        },
        test: function (obj) {
            // obj comes from setup, if any
            wru.assert(true); // a generic truish condition
            setTimeout(wru.async(function () {
                // a generic asynchronous condition
                // where you will inevitably assert something
                wru.assert("this is also true", true);
            }), 1000);

            // the handy possibility to reuse assert itself
            if (wru.assert("this is a truish value", obj)) {
                // do stuff with obj
                // or simply log the object
                wru.log(obj);
            }
        },
        teardown: function (obj) {
            // the OPTIONAL teardown property performed after the test
            // obj comes from the test function, and is the same object
            // passed through setup, if present, and test
        }
    }
]);

After this you are basically ready to go.
Both assert() and async() accept one or two arguments: the optional description of the test, recommended if you want to at least understand what's going on, and where, if something failed, and the second parameter, which is returned in both cases.
Via assert() the condition/expression is evaluated as it is, truish or falsy, and returned, so it can be reused.
Via async() the callback is wrapped in an unobtrusive way, simply to let wru know that something is going to happen, hopefully within a reasonable timeout of 10 seconds.

How To See Results

Once you have your test in place, you can re-use the same code across all supported environments.
The cool part about wru is that templates for all scenarios are automagically created at build time.
You don't even need to npm install wru to use the test, or include it in a page via script tag: you can simply grab a template and copy and paste, or replace during a build process, the test. This makes your life easier than any setup-oriented test framework, doesn't it?

Write Once, Run Everywhere

Isn't this the dream we all have about JavaScript in both browsers and server side environments? As far as I know, wru is the only one that supports all these environments with a unified, cross platform, and easy to remember API (at least until I discovered JS Class via @jcoglan).
The main principles behind wru? KISS and YAGNI, so that everyone can at least start writing tests for what's needed, without any further waste of time on environment, library dependency, or programming language setup.
And what if you want to test platform specific gotchas? Oh well, you can still recycle the whole test and check the number of positive results, as expected, at the end ... well, not all the code we write should work cross platform, but even in this case wru will make you familiar with tests and test logic, so it's a win-win in any case.

Pluggable UT Libraries

You must admit the name of this framework is absolutely crap. I didn't even know how to call it in a manner that no other frameworks would have been affected, so I stuck with my blog initials, WebReflection, plus the U out of Unit, until I introduced this tiny library at one of those amazing BerlinJS events, where someone suggested the Where aRe yoU acronym, and I simply loved it ... uh wait, let's get back on topic ...
Any external library able to mock or stub your code should be easy to integrate into a wru test without problems.
In this case you may need to include this library upfront via script tag, in the Web html template, or inside any server side related template through require() or some other mechanism.
For browsers you may consider JSMock, as an example, but others whose aim is to provide mocks or stubs functionality should all be supported without problems.

About Asynchronous Logic

Let's face reality: asynchronous tests are created exclusively to test conditions after the asynchronous callback has been executed, and this is exactly the wru expectation. You call for async? You wait for async.
If you get how async works you'll realize that you don't have to expect() anything: simply do what your test should do and trigger the done() at the end.
This comes from one of the most appreciated asynchronous patterns out there, Promises, where you simply wait for a done() to be called.
wru does the same, except the equivalent of done() is assert(), which is the trigger.
If you have truly complex asynchronous logic, here is a common pattern you might find useful with wru.

wru.test([{
    name: "async",
    test: function () {
        var expectation = wru.async(function (arg) {
            wru.assert("everything went better than expected", arg === "OK");
        });
        // whatever async logic you need
        var xhr = new XMLHttpRequest;
        xhr.onreadystatechange = function () {
            // called multiple times during the xhr lifecycle
            if (xhr.readyState === 4) {
                // async, inside async
                doSomeStuffOutThere(xhr.responseText, function (e) {
                    expectation(e.result); // hopefully the string "OK"
                });
            }
        };
    }
}]);

In a few words, you don't need to expect a thing: once you define a single asynchronous callback in your test logic, you can trigger it once everything has been tested and, if that never happens, the timeout will automatically flag the test as failed.

wru Output

Things are kept simple in this case as well, with the happy exception of the cursor.
I mean, you don't really need to stare at it, but the cursor is a visual indication that everything is not just stuck: it's simply waiting for things to happen, if ever.
A classic infinite loop or endless generator is able to freeze the lifecycle of our app, and only a visual feedback will be able to tell you the truth, since any other method, especially in browsers where test results are shown only once executed, won't be able to give you a thing ... except a crash, in the best case scenario.
The cursor may interfere with the normal output but, at least when it comes to server side tests, whatever external tool will be able to remove the cursor noise from the log and analyze anything that happened during test execution, from green to red or black, providing you an instant feedback if something went wrong.

Improve As You Go

wru is not a universal answer to JS tests, but hopefully a good starting point, or most likely everything you need if the project, and your tests, are not that complex.
Being that simple, the best thing about this library is that its API could be reproduced everywhere else in a few lines of code, transforming, as an example, your tests into another UT framework without much effort.
The fragmentation of JS tests out there is massive, so I don't really want to go deeper into this; just consider that this library made asynchronous tests as easy as normal synchronous ones, and without interferences out there, thanks to the chosen namespace.
What else about wru ... well, give it a try :)

Friday, February 17, 2012

If You Don't Get It, Go And Get It!

Oh well, a rant against another one ... how lovely is this? Just trying to make your weekend, right?
I am talking about this misleading post with 29530+ views and just 1 Favorited entry (right now), which must be the post author himself, since I can't even check and click that red link ... anyway ...

At the very beginning I thought it was a sarcastic post ... like, the opposite of reality; then I realized it wasn't ... or was it?

V8 is not server-class ?

Define "server class programming language" first ... 'cause I have tried to search for it on Google (with quotes) and the result was a single entry that indeed pointed to some Java stuff ...
This argument is kinda boring in 2012, especially against a general purpose programming language as JS is, you don't say?

I mean, doooooode, should I remind you of the Java Applet joke early in the Web era? So it was fine for a server-class programming language to do client side stuff? Or is it just a matter of core functionalities, where a project that never even reached its 1.0 status keeps growing like hell and has already shown its high performance muscles against all other modern scripting oriented languages, such as PHP (without HipHop), Ruby, or Python, and others?

Maybe I should simply mention that via C++ you can write your own modules ... just in case ...

Callback spaghetti is bad ?

Let me guess your thoughts, wizard of multiple threads developer ... how do you handle asynchronous stuff, and how many headaches has this caused? I bet you are a big fan of races and non-trustable lines of code, aren't you?
Well, with node you'll never have this problem within the language itself, but of course you can write your own module able to use all possible cores and cause yourself headaches about emitting events with consistent and, if necessary, ordered results. You know strongly typed languages, so deal with them if you want to improve module performances.

its nigh-on-impossible to follow the code 6 months later

Oh ... really? There you are, server framework pattern developer: you don't understand your own code if the language is JavaScript ... let's blame the chosen technology for your frustration: achievement unlocked!

Non-blocking != fast != scalable

As well as
blocking != fast != scalable
but I see you have valid points here, such as
  • scalability has very little to do with raw speed, of course, and it has nothing to do with raw speed if the architecture is over-bloated
  • Just because you're fast does not mean you're scalable ... thanks captain obvious, so how come you underline Scala performances later on?
  • Node.js isn't even that fast. You can do much better with Scala and its a much nicer language, to boot ... my brain simply tried to divide by zero here ... your argument is that node is not that fast 'cause Scala is faster, but node performances should not matter?


Performances

The only reliable tests I know about different programming languages are in The Computer Language Benchmarks Game.
Now, the only test that makes a concrete difference there, able to screw up the final score in a meaningful way, is pidigits, a test that penalizes JavaScript with the inclusion of /home/dunham/shootout/bench/Include/javascript/biginteger.js, and gosh knows how badly big integers perform once simulated in JavaScript.
What you should care about in the whole test page is that not a single JavaScript test has more code than the Scala equivalent, while preserving performances in any case, with a basically irrelevant performance gap in real world scenarios.
This is the beauty of V8, the first engine that pushed JS so freaking far beyond known performances, and before it was cool!
I would rather respect it than blame it ... but that's just me, I guess ...

JavaScript doesn't even have namespaces ?

And after that you talk about modularity? Do you know that in node.js everything is basically a module? Do you know that any namespace starts from a root, which is called the global object in the JavaScript world, where the community nowadays is freaking aware of namespace conflicts?
Do you know that every time you define your own object in JavaScript you basically create a namespace, so that JS has had namespaces since the very beginning of its concrete history?
It's clear to me you don't ... so please, stop talking about stuff you don't know!
JavaScript namespaces have been there since ever, and the best part is that JavaScript supports inheritance through its prototypal nature, where you don't even need to write kilometric namespaces to obtain a single bloody "class" out of it.
It's true that in 2000 JavaScript was abused; it's absolutely a lie that nowadays any sort of well known library isn't aware of namespace possibilities.
I would add modularity that resolves namespaces automatically, and in an ordered way, as Java guys like in folder structures, through any sort of loader that has been developed by developers as skilled as you are.
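To make the "objects are namespaces" point concrete, here is the classic helper pattern most libraries of that era shipped to build namespaces out of plain objects; the names here are illustrative:

```javascript
// create, or reuse, a dotted namespace on top of a root object:
// every step of the chain is nothing more than a plain object
function namespace(root, path) {
    return path.split(".").reduce(function (object, key) {
        return object[key] || (object[key] = {});
    }, root);
}

// in a browser, root would typically be window
var root = {};
namespace(root, "my.app.utils").trim = function (str) {
    return str.replace(/^\s+|\s+$/g, "");
};
```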

people who are really crazy about Node.js are people who only knew JavaScript to begin with

For a project entirely based on C++? Sure, node.js guys are all freaking idiot developers that do not deserve anything on the server side ... aren't they?
I am a Zend Certified Engineer and an AS2 (ECMAScript 4th + C#) Certified Developer with some C, C++, Python, and Java background, who stuck with JavaScript the day ActionScript 3 became a Java-like programming language ... and guess what? I am happily writing JavaScript on a daily basis and, programming since year 2000, I am sick of hearing developers from one language complaining about other programming languages, blaming the developers that use them as if they don't know what they are doing.

Respect JavaScript

... and stop thinking of it as a toy language that you, in the first place, are not able to get. I have no hacking idea about Scala and I have been blogging since ever without complaining a single time about Scala developers.
You think Scala is what you need? You think Scala is what you know? Go Scala, for gosh sake, but don't ever even try to blame another community if you didn't spend at least 2 years behind that programming language, with a decent programming background, especially if you can't even get your own code after 6 months.
Let's stop this, 'cause programming today is way too far from perfect and, rather than picking the best things out of every language, we keep blaming others and acting like 5 year old kids.
Enough!

Excellent About node.js

Performances are good enough, and code and skills reusability, something anyone with an IT related BSc learned about, is absolutely awesome, fresh, new, and productive, without even considering the potential for company budgets.
It's true that within the JS community itself the fragmentation of engines never made things easy, the reason a cross platform JS developer should be valued as much as or more than a JVM one, rather than usually less, due to the background knowledge required per environment, imo; but it's true as well that node.js eventually makes things easy for newcomers to the client/server programming world too.
As a Zend engineer, I have complained many times about PHP as a programming language, due to intrinsic non-sense all over, and as a JavaScript developer I keep complaining about the lack of proper knowledge of the language, still rarely studied properly in universities, while being basically the most important language ever in the Web field, client and server.
Being easy to learn, the same success PHP had years ago, developers from any sort of language are first of all welcome; secondly, they have an easy way to do things via patterns that, once learned, may not look so smart from other languages' point of view, but already take care of many common bottlenecks or problems the web has seen so far.
In summary, Node.js is a great technology that eventually made it where Rhino, Cocoon, and other JS server side related projects failed. Node.js is easy, fast enough, natural, and junior-to-senior friendly when it comes to server side development.
Scalability, once again, is not a programming language feature; it's a developer mind-set plus skills related matter, so think about that any time you decide to blame a technology you don't really understand.

Thursday, February 16, 2012

Berlin JS - RegExp Slides

Here they are!
(published live)

If you want to test examples remember to replace weird keynote double-quotes with normal one :)

Enjoy JavaScript Regular Expressions

Tuesday, February 14, 2012

JSON.stringify Recursion + Max Execution Stack Exceeded

I believe this is a common problem; we had a similar one today while debugging.
JSON methods do not support recursion ... which is the only thing I really miss from the PHP serialize days.

Recursion Is Bad

Well, I would say cyclic references are never that good, but sometimes they happen and, especially while testing and debugging, it's more than useful to understand what happened there.
If you have cyclic/cross references in your code, I suggest you use approaches whose aim is to avoid these kinds of direct links.
Harmony Collections, especially Map and WeakMap, are indeed good helpers to reference objects indirectly without creating, hopefully, first level links and/or recursion.
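A quick sketch of the idea with WeakMap, runnable in any environment that provides it, or with a polyfill such as the collections mentioned above:

```javascript
// metadata kept in a WeakMap never shows up on the object itself,
// so JSON.stringify cannot run into the cycle
var meta = new WeakMap();

var user = {name: "test"};
// instead of user.self = user (a cycle), keep the link in the map
meta.set(user, {self: user});

var json = JSON.stringify(user); // safe, the cycle is invisible here
```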

How To Serialize Anyway

JSON.stringify() accepts a second argument called replacer.
I won't explain more than MDN about its potentials, but it can be really handy to avoid recursions.
A simple way to do it is indeed to store already parsed objects in a stack, including the root object itself.
Some other extra operations may be handy too, so the debug output will be as complete as possible.

var replacer = function (stack, undefined, r, i) {
    // a WebReflection hint to avoid recursion
    return function replacer(key, value) {
        // this happens only on the first iteration:
        // key is empty, and value is the object
        if (key === "") {
            // put the value in the stack
            stack = [value];
            // and reset r
            r = 0;
            return value;
        }
        switch (typeof value) {
            case "function":
                // not allowed in the JSON protocol
                // let's return some info in any case
                return "".concat(
                    "function ",
                    value.name || "anonymous",
                    "(",
                    Array(value.length + 1).join(",arg").slice(1),
                    "){}"
                );
            // is this a primitive value ?
            case "boolean":
            case "number":
            case "string":
                // primitives cannot have properties
                // so these are safe to parse
                return value;
            default:
                // only null does not need to be stored
                // for all objects check recursion first
                // hopefully 255 calls are enough ...
                if (!value || !replacer.filter(value) || 255 < ++r) return undefined;
                i = stack.indexOf(value);
                // all objects not already parsed
                if (i < 0) return stack.push(value) && value;
                // all others are duplicated or cyclic
                // mark them with the index
                return "*R" + i;
        }
    };
}();

// reusable to filter out some undesired objects,
// e.g. HTML nodes
replacer.filter = function (value) {
    // i.e. return !(value instanceof Node)
    // to ignore nodes
    return value;
};

A simple example about above function could be this one:

// how to test it
var o = {a: [], b: 123, c: {}, e: function test(a, b) {}};
o.d = o;
o.a.push(o);
o.c.o = o;
o.c.a = o.a;
o.c.c = o.c;
o.a.push(o.c);
alert(JSON.stringify(o, replacer));

Above alert will produce this kind of output:
{"a":["*R0",{"o":"*R0","a":"*R1","c":"*R2"}],"b":123,"c":"*R2","e":"function test(arg,arg){}","d":"*R0"}
which is surely not as bad as an exception, is it?
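Those "*Ri" markers can even be resolved back into real cyclic references, as long as we revisit objects in the same depth-first order the replacer pushed them, and as long as no legitimate string value happens to match the marker pattern. A sketch of such a reviver (the function name is illustrative):

```javascript
// rebuild cyclic references from the "*Ri" markers produced above;
// assumes objects were pushed in depth-first order during stringify
// and that no genuine string value matches the "*R<digits>" pattern
function revive(json) {
    var stack = [];
    var root = JSON.parse(json);
    (function walk(node) {
        if (node && typeof node === "object") {
            stack.push(node);
            Object.keys(node).forEach(function (key) {
                var value = node[key];
                if (typeof value === "string" && /^\*R\d+$/.test(value)) {
                    node[key] = stack[value.slice(2)];
                } else {
                    walk(value);
                }
            });
        }
    }(root));
    return root;
}
```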

The Max Execution Stack Problem

Even using a stack variable in order to avoid duplicated entries, the reason 255 < ++r is necessary is that the generic object may reference, in one or more properties, DOM nodes.
Especially in big applications, the number of nodes, all unique, could reach the function call limit.
A tricky way to discover this limit, which is browser and engine dependent, could be this one:

(function (Function, MAX_EXECUTION_STACK) {
    if (MAX_EXECUTION_STACK in Function) return;
    Function[MAX_EXECUTION_STACK] = function (i) {
        try {
            (function max() {
                ++i && max();
            }());
        } catch (o_O) {
            return i;
        }
    }(0);
}(Function, "MAX_EXECUTION_STACK"));

// browser dependent
alert(Function.MAX_EXECUTION_STACK);

Unfortunately, in the replacer we cannot use this number as is, because we don't know how many other times the function itself will be called, but a good compromise, able to handle objects otherwise almost impossible to debug, would be Function.MAX_EXECUTION_STACK / 100, so the limit scales accordingly.
In all other situations where we still have recursion and max execution stack problems, but we are the ones calling our own function, this limit could be more than handy, i.e.

var
    i = 0,
    fn = function (obj) {
        for (var key in obj) {
            if (++i < Function.MAX_EXECUTION_STACK) {
                // parse is whatever per-property operation we need
                parse(obj[key]);
                fn(obj[key]);
            }
        }
    }
;

... so now you know ...

Monday, February 13, 2012

Web Workers - Current Status

A quick one about workers after few tests.

Mobile

Workers are apparently nowhere to be found in Android stock browsers. The only one able to bring workers seems to be Chrome on ICS.
As an alternative, both Opera Mobile and Firefox Beta work without many problems.
About iOS: version 4 does not support workers, while version 5 does, and quite well.

Desktop and Data URI

Workers are almost fine everywhere except IE9 ( surpriiiiiiiiiise ), but there is one thing not a single browser does, except again Firefox and Opera: accepting "data:application/javascript;" URIs, with or without base64 encoded code.
On the mobile side this is supported again by Opera Mobile and Firefox Beta without problems, but on desktop, and not only ...
  1. Safari works only with external files, while inline data URIs are supported only via the file protocol
  2. WebKit nightly does not support inline data URIs, even via the file protocol
  3. Chrome does not support inline data URIs, neither via the file protocol nor online
  4. Safari Mobile does not support inline data URIs, at least online
  5. Chrome Mobile does not support them at all


Why Bother With Data URI

Quite simple: Workers are a mechanism to detach some logic from the main thread and execute it in the background. The possibility to create inline Workers means we could create a sort of Threads manager, delegating ad-hoc runtime functions to perform certain tasks, handling all requests from and to the main page.
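The data URI itself is trivial to produce; here is a sketch of the helper (the Worker construction is browser-only, and whether the URI is accepted at all is exactly the fragmentation listed above):

```javascript
// turn worker source code into a data URI, no external file needed
function workerURI(source) {
    return "data:application/javascript," + encodeURIComponent(source);
}

var uri = workerURI("onmessage = function (e) { postMessage(e.data * 2); };");
// browser usage: var w = new Worker(uri); w.postMessage(21);
```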

Workers Until Now Are Not Good

Not only does the serialization and deserialization problem not scale with large amounts of data, probably the only reason you would think of using a worker for some job, but the DOM security exception thrown with data URIs, for no reason on earth, is yet another limit of this technique.
The current status, data URIs aside, is that you may need webkitPostMessage rather than simply postMessage in order to at least optimize data transfer between global objects but, on the other hand, the dream we all have about "HTML5" keeps fading every time I realize I need to prefix something, whether I use CSS3 or JavaScript (i.e. requestAnimationFrame).

Disappointed, nothing else to add.

Sunday, February 05, 2012

JS1K - Markdown

Well, it does not look that good in the js1k page, so here is the link to the official demo hosted on my site.
Check the source code there, the page is entirely written in Markdown :)



I submitted this demo because Markdown is widely used, and I believe loved too, by many developers, especially on github, but I may consider creating the whole thing in less than 1kb using gzip compression if people like the idea.

Friday, February 03, 2012

Love ALL The Web

update: no selection problems anymore ... eat the cake now

Maybe 'cause Saint Valentine's is coming, maybe 'cause Chris Williams started it, promoting an end to negativity, or maybe 'cause the web has been recently under attack ... no matter why, the topic of this js1k contest is Love, so here I am introducing my proposal, the very first demo submitted this year.
Click the link once to see it in action through the following iframe:



What Is It

A simple script able to bring random hearts behind the mouse or the finger in any page you want, even Facebook :)
In order to do this, you can bookmark the link by dragging it into your bookmark bar, change the name (i.e. LOVE), and start surfing surrounded by little hearts on any new page you visit by simply clicking it. I know it's a silly demo, but it was a quick and interesting experiment I could recycle with any sort of different shape, since the graphic is directly drawn on every little canvas.

Technical Details

  • compatible with all modern mobile and desktop browsers, including IE9
  • unobtrusive, it should not affect much the normal behavior of the generic page
  • based on requestAnimationFrame where supported
  • lazy pointer evaluation: no matter if your tablet is attached to a dock with a mousepad, touch screens and other pointers are all supported at runtime
  • it fits in 1023 bytes after Closure Compiler Advanced minification plus extra clean up performed by YUI Compressor (plus a few manual tweaks for numbers) ... once gzipped, it fits in about 0.5Kb
  • 1 hour of work, including mobile and cross browser tests ( plus 20 minutes to think about a solution and fix the text selection problem )
  • easy to recycle: a single change in the draw function and anything you want could be displayed, even a smoke effect or rainbows


Discoveries During Creation

  • the performance boost with requestAnimationFrame is unbelievable; if you compare the current Opera browser VS Chrome, Safari, WebKit, or Firefox, you'll see the difference against setTimeout
  • IE9 and IE10 awesome canvas performances become crap once more than one canvas is created at runtime. If you see how slow this thing can be on my netbook in IE compared to others, you'll rethink how fast canvases are in the MS browser: it is that fast, but only if there is one of them
  • Opera still doesn't support requestAnimationFrame, even prefixed with the little o
  • the canvas arc() method is completely freaky; only a few browsers give what you expect. Compare the bigger heart on click in Chrome and Firefox, as an example: Chrome is gonna show a Mickey Mouse like heart
  • YUI Compressor after Google Closure Compiler Advanced is able to produce a slightly smaller output while preserving functionality
  • Chrome's DOM inspector is really smart. While Safari and WebKit become much slower during inspection, Chrome seems to use a "requestDOMInspection" like mechanism to not slow down the page during DOM changes


Have fun with JS1K

Wednesday, January 18, 2012

ES6 Harmony Collections Fast Polyfill

Well, just in case you read RSS and missed my tweet ... everything you need to know is in the github repository.
Have fun with Harmony Collections

Monday, January 16, 2012

On EventEmitter In node.js

Today I had a whole node.js session and I spent a bit of time looking at the current EventEmitter implementation.
I have already tweeted that it sucks as it is and, while it's really trivial to implement the same for any browser, I believe many mistakes have been made in the API. Here's the list:
  1. on() is a shortcut for addListener, but there is no off() as removeListener's shortcut (inconsistent)
  2. add/removeListener is not DOM friendly; we cannot reuse an EventEmitter in the browser without double checking if those methods exist
  3. removeAllListeners() accepts no arguments and cleans up the whole emitter, but there is no way to retrieve all listeners via listeners() passing no arguments (again, inconsistent)
  4. there's no possibility to use a handleEvent property, as already defined in the EventListener interface; once again, objects are not reusable between client and server
  5. no duplicate checks for either addListener or removeListener; this is totally inconsistent against the DOM EventTarget, plus I find it kinda hilarious, since there is also a max number of listeners allowed, defined via the setMaxListeners method ... if we add the same listener twice, it's fired twice, a non-sense for those coming from the web. We also need to removeListener twice or more, 'cause we have no idea if the same listener has been added by mistake twice, so ... it's just screwed up: removing a specific listener may not be enough, and there is no easy way to check if the listener has been removed or not
  6. there is no interface to define events, plus the single argument is not DOM Event like, not even an object with a data property and at least a type that points to the event name ... once again, not possible to reuse anything from the DOM world


Why All Of This Is Bad

node.js is bringing a lot of non JavaScripters and non browser friendly JS developers into its community, and this is the good part. What is absolutely bad is that if node.js won't be minimally aligned with the rest of the code in the browsers out there, our life as "one language everywhere" developers will become harder than ever.
I have personally created wru, which runs in all browsers and many server side JS environments, but what I would like to avoid is writing any sort of test twice because of weirdly implemented APIs.
If it's about shortcuts, as an example, on() and off() are more than welcome, but why on earth should the long version be addListener rather than addEventListener?
Why should events, passed as objects since ever in client side JS, be passed as a string with optional extra data as second argument?
Where is the default behavior control over events fired across multiple listeners?
Why on earth is handleEvent not supported, so a listener can only be a function, which most likely will require a bind to something else?
Why is it possible to erase all listeners without even knowing "who set them where", but it's not possible to retrieve all of them, so that at least we could act accordingly to the event name, following some rule, rather than "just remove them all"?

I hope somebody will agree with me, considering the flaky points I have already described, and change or improve this EventEmitter API ASAP ... it would be sooooooo coooool to be aligned in both worlds, for at least the most used pattern/procedure ever in the JS world, don't you say? Thanks for your attention.

Sunday, January 15, 2012

Y U NO use libraries and add stuff

This is an early introduction to a project I have been thinking about for a while.
The project is already usable on github, but the documentation is lacking all over the place, so please be patient and I'll add everything necessary to understand and use yuno.

Zero Stress Namespace And Dependencies Resolver

Let's face reality: today there is still no standard way to include dependencies in a script.
If we are using a generic JS loader, the aim is simply to download files and eventually wait for one or more dependencies in order to be able to use everything we need.
The require logic introduced via node.js does not scale in the browser, due to the synchronous nature of the method itself, plus a sandbox that is not easy to emulate in a browser environment.
The AMD concept is kinda OKish, but once our callback runs after its dependencies, there is no way to provide a new module from within it unless we export it.
I find the AMD approach surely the most convenient, but still not the best one:
  1. we cannot implement a provide-like procedure, whether or not this could be handy
  2. it's not clear, within the module code itself, what exactly we are exporting

Especially the last point means that AMD does not scale properly with already combined code, because AMD relies on the module/folder structure itself ... so, can't we do anything better than what we have so far?
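For reference, the pattern criticized here can be reduced to a few lines; this stub define is illustrative only, not a real AMD loader:

```javascript
// Minimal illustrative stub of AMD's define(): dependencies are
// resolved from a registry, and the export is whatever the factory
// returns - which is why the export stays implicit in the module code.
var registry = {};
function define(name, deps, factory) {
  registry[name] = factory.apply(null, deps.map(function (d) {
    return registry[d];
  }));
}

define('math', [], function () {
  return {double: function (n) { return n * 2; }};
});
define('app', ['math'], function (math) {
  return {run: function () { return math.double(21); }};
});

console.log(registry.app.run()); // 42
```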

The yuno Concept


Directly out of a well known meme, the yuno logic is quite straightforward:
  • automagically resolved path: you point once to the yuno.js file in your page and you are ready to go
  • compatible with already combined files (smart builder coming soon)
  • yuno.use() semantic method to define dependencies, if necessary
  • yuno.use().and() resolved callback to receive modules AMD style once everything has been loaded
  • yuno.add() standard ES5 way to define new namespaces, objects, properties, or constructors ( so no extra note in the documentation is needed )
  • cross referenced dependencies automagically resolved: if two different scripts need the same library, it will be loaded once for both
  • external URL compatible, because you may want to include a file from some known CDN rather than putting all scripts on your own host ( this speeds up the download of common libraries across different libraries that depend on the same core, e.g. jQuery )
  • modules, namespaces, or global objects cannot be assigned twice, which means if we are adding the same thing twice we are doing it wrong, and if we are not aware of another script that added the same thing before, we get a notification
  • something else I may decide to add after this post
Here is an example:

// define a jQuery plugin
yuno.use(
    "jQuery",
    "extraStuff"
).and(function (jQuery, extraStuff) {
    yuno.add(jQuery.fn, "myPlugin", {value: function () {
        // your amazing code here
    }});
    // we may opt for just this line instead
    // jQuery.fn.myPlugin = function () {};
    // in order to export our plugin;
    // however, the purpose of yuno is to have
    // a common, recognizable way to understand
    // what the module is about,
    // plus the "add" method is safer
});

Let's imagine that extraStuff contains similar code:

// define extraStuff
yuno.use(
    "jQuery"
).and(function (jQuery) {
    yuno.add(jQuery.fn, "extraStuff", {value: function () {
        // your amazing code here
    }});
});

Both the plugin and extraStuff need jQuery to be executed ... will jQuery be loaded twice? Nope, it's simply part of a queue of modules that need to be resolved.
As soon as it's loaded/added once, every module that depends on jQuery will be notified, so that once its list of dependencies is fully loaded, the callback passed to and() is executed.
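The queue/notification logic could look roughly like this; the implementation below is my own sketch of the described behavior, not yuno's actual code:

```javascript
// loaded:  name -> module, once resolved
// waiting: pending use() calls, each with its dependency list
var loaded = {};
var waiting = [];

function add(name, module) {
  if (loaded.hasOwnProperty(name)) {
    throw new Error(name + ' defined twice');
  }
  loaded[name] = module;
  // notify every pending use() whose whole list is now resolved
  for (var i = waiting.length - 1; i >= 0; i--) {
    var w = waiting[i];
    if (w.deps.every(function (d) { return loaded.hasOwnProperty(d); })) {
      waiting.splice(i, 1);
      w.callback.apply(null, w.deps.map(function (d) { return loaded[d]; }));
    }
  }
}

function use(deps, callback) {
  waiting.push({deps: deps, callback: callback});
  // a real loader would start fetching each missing dependency here,
  // once per name, no matter how many use() calls ask for it
}

var result;
use(['jQuery'], function ($) { result = $.version; });
add('jQuery', {version: '1.7.1'}); // resolves the pending use()
console.log(result); // '1.7.1'
```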

Y U NO Add

Modules are only one part of the proposal, since we may define a script where no external dependency is needed.

// note: no external dependency, just add
yuno.add(this, "MyFreakingCoolConstructor", {value:
    function MyFreakingCoolConstructor() {
        // freaking cool stuff here
    }
});
// this points to the global object so that ...
MyFreakingCoolConstructor.prototype.doStuff = function () {
    // freaking cool method
};

The yuno.add method reflects Object.defineProperty, which means that for ES5 compatible browsers getters, setters, and values are all accepted and flagged as not enumerable, not writable, and not configurable by default.
Of course we can re-define this behavior, but most likely this is what we need/want as a default in any case ... isn't it?
For those browsers not there yet, the Object.defineProperty method is partially shimmed, where __defineGetter__/__defineSetter__ or simply the value property will be used instead.
Bear in mind this shim may change according to real needs, but so far all mobile browsers should work as expected, plus all desktop browsers except IE below 9 ... not such common targets for modern web sites.
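Those defaults are easy to verify with plain ES5; this snippet shows standard Object.defineProperty behavior, nothing yuno-specific:

```javascript
var ns = {};
// no explicit flags: enumerable, writable, and configurable
// all default to false, exactly the behavior described above
Object.defineProperty(ns, 'answer', {value: 42});

var desc = Object.getOwnPropertyDescriptor(ns, 'answer');
console.log(ns.answer);         // 42
console.log(desc.enumerable);   // false
console.log(desc.writable);     // false
console.log(desc.configurable); // false
console.log(Object.keys(ns));   // [] - the property does not show up
```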
Last, but not least, the yuno logic does not necessarily need the add call, so feel free to simply define your global object or your namespace the way you want.
However, as I said before, the add method is a simple call able to make things more robust, to speed up and ensure notifications, and to provide a standard, recognizable pattern to define our own objects/functions, being sure nobody did it before thanks to the default descriptor behavior, which is indeed not writable and not configurable.

To DOs

This is just an initial idea of what the yuno object is able to do, but a few things are on my mind. On top of the list is the possibility to shorten CDN calls via prefixes such as "cdn:jQuery", for example, in order to use the most common CDNs to load widely shared libraries.
Last, but not least, the reason I am writing this is that I am personally not that happy with any solution we have out there, so if you are willing to contribute, please just leave a comment, thanks.
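A possible shape for that prefix resolution, purely hypothetical since the feature is not implemented yet (the URL and mapping below are illustrative only):

```javascript
// map well known library names to CDN URLs; entries are examples only
var CDN = {
  jQuery: '//ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js'
};

function resolve(name) {
  // "cdn:" prefixed names resolve to the mapped URL,
  // anything else falls back to a plain local script path
  return name.indexOf('cdn:') === 0 ?
    CDN[name.slice(4)] :
    name + '.js';
}

console.log(resolve('cdn:jQuery'));  // the CDN URL above
console.log(resolve('extraStuff')); // 'extraStuff.js'
```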