
Tuesday, October 25, 2011

JS getCSSPropertyName Function

I was re-watching @LeaVerou's talk at jsconf.eu, looking forward to seeing mine there too, to understand how to improve and especially what the hell I said for 45 minutes :D

Lea made many valid points in her presentation but, as is the case for supports.property, you never want to go too deep into a single point of your talk so ... here I come.

getCSSPropertyName Function

The aim of this function is to understand whether the current browser supports a generic CSS property. If the property is supported, the whole name, prefix included, is returned.

var getCSSPropertyName = (function () {
  var
    prefixes = ["", "-webkit-", "-moz-", "-ms-", "-o-", "-khtml-"],
    dummy = document.createElement("_"),
    style = dummy.style,
    cache = {},
    length = prefixes.length,
    i = 0,
    pre
  ;
  function testThat(name) {
    for (i = 0; i < length; ++i) {
      pre = prefixes[i] + name;
      if (
        pre in style || (
          (style.cssText = pre + ":inherit") &&
          style.getPropertyValue(pre)
        )
      ) return pre;
    }
    return null;
  }
  return function getCSSPropertyName(name) {
    return cache.hasOwnProperty(name) ?
      cache[name] :
      cache[name] = testThat(name)
    ;
  };
}());

The function returns a string, or null if no supported property has been found.

// enable HW acceleration
var cssPropertyName = getCSSPropertyName("transform");
if (cssPropertyName != null) {
  node.style.cssText += cssPropertyName + ":translate3d(0,0,0);";
}
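
The returned name is in its CSS form (e.g. "-webkit-transform"); if you prefer assigning through the style object rather than cssText, a tiny helper (my own sketch, not part of the function above) can convert it to the DOM property form:

// hypothetical helper, assuming the CSS form returned above
function toDOMPropertyName(cssName) {
  // "-webkit-transform" -> "WebkitTransform", "transform" -> "transform"
  return cssName.replace(/-([a-z])/g, function (match, letter) {
    return letter.toUpperCase();
  });
}

// usage sketch:
// node.style[toDOMPropertyName(cssPropertyName)] = "translate3d(0,0,0)";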

Please feel free to test this function and let me know if something goes wrong, thanks ;-)

Thursday, October 20, 2011

My BerlinJS Slides

It was a great event today at @co_up for the @berlinjs meet-up and here are my slides about wru which, according to today's meeting, means where are you, straight out of SMS syntax.

Enjoy ;)

Playing With DOM And ES5

A quick fun post about "how would you solve this real world problem".

The Problem

Given a generic array of strings, create an unordered list of items where each item contains the text at the relative array index, without creating a single leak or reference during the whole procedure.
As a plus, make each item in the list clickable so that an alert with the current text content occurs once clicked.

The assumption is that we are in a standard W3C environment with ES5+ support.

The Reason

I think this is one of the most common tasks in the Ajax world. We receive an array with some info, we want to display this info to the user, and we want to react once the user interacts with the list.
If we manage to avoid references we are safer about leaks. If we manage to optimize the procedure, we are also safe about memory consumption over a simplified DOM logic ...
How would you solve this ? Give it a try, then come back for my solution.



The Solution

Here is mine:

document.body.appendChild(
  /* input */["a","b","c"].map(
    function (s, i) {
      this.appendChild(
        document.createElement("li")
      ).textContent = s;
      return this;
    },
    document.createElement("ul")
  )[0]
).addEventListener(
  "click",
  function (e) {
    if (e.target.nodeName === "LI") {
      alert(e.target.textContent);
    }
  },
  false
);

Did you solve it the same way ? :)
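
For comparison, a more verbose sketch of the same idea based on forEach; note it keeps a short lived closure around the list, so it is a bit less strict about the "no references" constraint than the map version:

// an alternative sketch based on forEach rather than map
(function (list) {
  var ul = document.createElement("ul");
  list.forEach(function (s) {
    ul.appendChild(document.createElement("li")).textContent = s;
  });
  ul.addEventListener("click", function (e) {
    if (e.target.nodeName === "LI") {
      alert(e.target.textContent);
    }
  }, false);
  document.body.appendChild(ul);
}(["a", "b", "c"]));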

Wednesday, October 19, 2011

Do You Really Know Object.defineProperty ?

I am talking about enumerable, configurable, and writable properties of a generic property descriptor.

enumerable

Most likely the only one we all expect: if false, a classic for/in loop will not expose the property, otherwise it will. enumerable is false by default.

writable

Just a bit trickier than we think. Nowadays, if a property is defined as non writable, no error will occur the moment we try to change this property (in non strict code, at least):

var o = {};
Object.defineProperty(o, "test", {
  writable: false,
  value: 123
});
o.test; // 123
o.test = 456; // no error at all
o.test; // 123

So the property is not writable but nothing happens unless we try to redefine that property.

Object.defineProperty(o, "test", {
  writable: false,
  value: 456
});
// throws
// Attempting to change value of a readonly property.

Got it ? Every time we would like to set a property of an unknown object, or one shared in an environment we don't trust, either we use a try/catch plus a double check, or we must be sure that Object.getOwnPropertyDescriptor(o, "test").writable is true.
writable is false by default too.
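
As a minimal sketch of that defensive pattern (safeSet is my own name, nothing standard):

// hypothetical "safe set" helper: assign when we know it can work,
// otherwise try a guarded redefinition
function safeSet(obj, key, value) {
  var descriptor = Object.getOwnPropertyDescriptor(obj, key);
  if (!descriptor || descriptor.writable || descriptor.set) {
    obj[key] = value;
    return obj[key] === value;
  }
  try {
    // may still throw if the property is not configurable
    Object.defineProperty(obj, key, {value: value});
    return true;
  } catch (readonly) {
    return false;
  }
}

// safeSet(o, "test", 456); // false with the descriptor defined above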

configurable

This is the wicked one ... what would you expect from configurable ?
  • I cannot set a different type of value
  • I cannot re-configure the descriptor
Fail in both cases, since things are a bit different in the real world. Take this example:

var o = Object.defineProperty({}, "test", {
  enumerable: false,
  writable: true,
  configurable: false, // note, it's false
  value: 123
});

Do you think this would be possible ?

Object.defineProperty(o, "test", {
  enumerable: false,
  writable: false, // note, this is false only now
  configurable: false,
  value: "456" // note, type and value are different
});

// did I re-configure it ?
o.test === "456"; // true !!!

Good, so a property that is writable can be re-configured both on its writable attribute and on its type.
The only attributes that cannot be changed, once configurable is flagged as false (and bear in mind that false is the default), are configurable itself plus enumerable.
Also, writable is false by default.
This inconsistency about configurable seems to be pretty much cross platform and probably intended ... why ?

Brainstorming

If I can't change the value, the descriptor must be configurable at least on the writable property ... no wait, if I can set the value as not writable then configurable should be set as false, otherwise it will lose its own meaning ... no, wait ...

How It Is

writable is the exception that confirms the rule. If true, writable can always be re-configured, while if false, writable automatically becomes non configurable, and the same is true for both get and set properties ... these act as writable: false no matter how configurable is set.
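
In other words the transition is one way only; a quick sketch of both directions:

// writable can go from true to false even when configurable is false ...
var a = Object.defineProperty({}, "test", {writable: true, value: 1});
Object.defineProperty(a, "test", {writable: false, value: 2}); // fine

// ... but never from false back to true
var b = Object.defineProperty({}, "test", {writable: false, value: 1});
try {
  Object.defineProperty(b, "test", {writable: true, value: 2});
} catch (expected) {
  // throws: cannot redefine a non configurable, non writable property
}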

How Is It If We Do Not Define


// simple object
var o = {};

// simple assignment
o.test = 123;

// equivalent in Object.defineProperty world
Object.defineProperty(o, "test", {
  configurable: true,
  writable: true,
  enumerable: true,
  value: 123
});

Thanks to @jdalton for pointing a few hints out.

As Summary

The configurable attribute works as expected only with configurable itself and with enumerable.
If we think that writable has anything to do with both of them, we are wrong ... at least now we know.

Sunday, October 16, 2011

The Missing Tool In Scripting World

A few days ago I was having beers with @aadsm and @sleistner and we were talking about languages and, of course, JavaScript too.
That night I realized there is a missing process, or better tool, that could open new doors for the JavaScript world.

The Runtime Nightmare

The main difference between scripting languages and statically typed ones is the inability to pre optimize or pre compile the code before it's actually executed.
Engineers from different companies try on a daily basis to perform this optimization at runtime, or better Just In Time, but believe me that's no easy task, especially with such a highly dynamic language as JavaScript is.
An even harder task is the tracing option: at runtime each reference is tracked and, if its type does not change during its lifecycle, the code can be compiled as native.
The moment a type, an object structure, or a property changes, the tracer has to compile twice, or split the optimizations up to the exponential number of changes performed in a single loop, so it has to be smart enough to understand when it's actually worth it to perform such optimization, or when it's time to drop everything and optimize only sub tasks via JIT.

Static Pros And Cons

As I have said, statically typed languages can perform all these optimizations upfront and create, as an example, LLVM bytecode, which is highly portable and extremely fast. As an example, both C and C++ can be compiled into LLVM.
There is also a disadvantage in this process ... if some unexpected input occurs at runtime, the whole logic could crash, be compromised, or exit unexpectedly.
The latter will rarely happen in the scripting world, but that can also be a weak point for application stability and reliability, since things may keep going but who knows what kind of disaster an unexpected input could cause.

What If ...

Try to imagine we have created unit tests for a whole application or, why not, just for a portion of it (a module).
Try to imagine these tests cover 100% of the code, a really hard achievement on the web due to feature detection and different browser behaviors, but an absolutely easy task in node.js, Rhino, CouchDB, or any JS code that runs in a well known environment.
The differential mocking approach to solve the web situation requires time and effort, but what the JS community is rarely doing, as an example, is to share mocks of the same native objects in both the JS and DOM world. This should change, imo, because I have no idea how many different mocks of XMLHttpRequest or document we have out there, and still there is no standard way to define a mock and listen to mocked methods or property changes in a cross platform way.
Let's keep trying to imagine now ... imagine that our tests cover all possible input accepted in each part of the module.
Try to imagine that our tests cover exactly how the application should behave, according to all possible input we want to accept.
It's insane to use the typeof or instanceof operator for each argument of each function ... this will kill performance; what is not impossible is to do it in a way that, once in production, these checks are dropped, as in the sketch below.
Since with untested input we can have unexpected behaviors, I would say that our application should fail or exit the moment something untested occurs ... don't you agree?
How many fewer buggy web apps would we have out there ? How much more stable and trustworthy could we be ?
The process I am describing does not exist even in statically typed languages since, in that case, developers trust the compiler unconditionally, avoiding runtime misbehavior tests ... isn't it ?
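
A minimal sketch of what I mean (DEV and assertType are purely illustrative names): run the checks while tests execute, then drop or disable them in the production build.

// purely illustrative: a dev-only type assertion that a build step
// (or a simple flag) can drop entirely in production
var DEV = true; // false, or stripped out, in the production build

function assertType(value, type, name) {
  if (DEV && typeof value !== type) {
    throw new TypeError(name + " expected a " + type + ", got " + typeof value);
  }
  return value;
}

function area(width, height) {
  assertType(width, "number", "width");
  assertType(height, "number", "height");
  return width * height;
}

area(2, 3);      // 6
// area("2", 3); // throws while DEV is true, silently coerces in production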

The Point Is ...

We wrote our code, we created 100% code coverage and we created 100% expected-input coverage. At this point the only thing we are missing to compile JavaScript into LLVM is a tool that will trace, and trace only, the tests while they are executed, and will be able to analyze all cases, all types, all meant behaviors, all loops, and all function calls, so that everything could be statically compiled, and in separate modules ... how great would this be if it were possible today?

Just try to imagine ...

Saturday, October 15, 2011

Object.prototype.define Proposal

Somebody may think that defineProperties is boring and I kind of agree on that.

The good news is that JavaScript is flexible enough to let you decide how to do it ... and here I am with a simple proposal that does not hurt, but can make life easier and more intuitive in modern JS environments.

Unobtrusive Object.prototype.define




How To

Well, the handy way you expect.
The method returns the object itself, so it is possible to define one or more properties at runtime and chain different kinds of definitions, as an example splitting properties from methods and protected properties from protected methods.

var o = {}.define("test", "OK");
o.test; // OK

Multiple properties can share the same default:

var o = {}.define(["name", "_name"], "unknown");
o.name; // unknown
o._name; // unknown

Methods are immutable by default and properties or methods prefixed with an underscore are by default not enumerable.

function Person() {}
Person.prototype.define(
  ["getName", "setName", "_name"],
  [
    function getName() {
      return this._name;
    },
    function setName(_name) {
      this._name = _name;
    },
    "unknown"
  ]
);

// by convention, the _name property is not enumerable

var me = new Person;
me.getName(); // unknown

me.setName("WebReflection");
me.getName(); // WebReflection

for (var key in me) {
  if (key === "_name") {
    throw "this should never happen";
  }
}


Last, but not least, if the descriptor is an object you decide how to configure the property.

var iDecide = {}.define("whatIsIt", {
  value: "it does not matter",
  enumerable: false
});
for (var key in iDecide) {
  if (key === "whatIsIt") {
    throw "this should never happen";
  }
}


100% Unit Test Code Coverage

Not such a difficult task for such a tiny proposal.
This test simply demonstrates the proposal works in all possible intended ways.

As Summary

We can always find a better way to do boring things; this is why frameworks, in all sizes and for all purposes, are great to both use and create. Have fun

Thursday, October 13, 2011

about JS VS VBScript VS Dart

You can take it as a joke since this is not a proper comparison of these web programming languages.

                               JS        Dart      VBScript
types                          √         √         sort of
getters and setters            √         √         √
immutable objects              √         √         √
prototypal inheritance         √
simulated classes              √
"real" classes                           √         √
closures                       √         √
weakly bound to JS                       √         √
obtrusive for JS (global)      may be    √         √
obtrusive for JS (prototypes)  may be    √
operator overload                        √
cross browser                  √
real benefits for the Web      √         ?         ?

If you are wondering about JS types, we have both weak types and strong types, e.g. Float32Array, as shown below.
Once StructType and ArrayType are in place, JS will support any sort of type.
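
A tiny illustration of what I mean by strong typing already available today:

// typed arrays coerce every assignment to their element type
var f = new Float32Array(1);
f[0] = "123.5"; // the string is coerced to a 32 bit float
typeof f[0];    // "number"
f[0] = {};      // anything non numeric ends up as NaN
f[0];           // NaN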

about that post

I have been blamed and insulted enough, so I removed the possibility to comment, and I also invite you again to not stop at reading the title of a generic post, here or anywhere around the net.

I would like to summarize few parts of that post:

in the real world we should use the proper flag in order to generate files where only the necessary parts of the library are included
...
I agree that at this stage it can be premature to judge the quality of Dart code, once translated for the JavaScript world
...
Google is a great company loads of ultra skilled Engineers
...
n.d. I have proposed a fix for Dart code
...
you may realize how much overhead exists in Dart once this is used in non Dart capable browsers
...
Was operator overload the reason the web sucks as it is?
...
everything 2 up to 10 times slower for devices, specially older ones, that will never see a native Dart engine in core
...
Not Really What We Need Today
...
What are the performance boosts that V8 or WebCL will never achieve ?
...
What is the WebCL status in Chromium ?
...
Where is a native CoffeeScript VM if syntax was the problem ?
...
Doesn't this Dart language look like the VBScript of 2011 ?

You can understand the whole post is not about the number of lines; it's indeed about what this extra layer means today for the current web.

I beg you to please answer my questions, any of them, so that I can understand the reasons behind the Dart decision.

I have also always admired Google and its Engineers, and I am asking, after GWT and Dart, why many of them seem to be so hostile against JavaScript, the programming language that made Google's "fortune" on the web ( gmail, adsense, and all the successful stories about Google using JavaScript massively ).

Thanks for your patience and please accept my apologies, since I followed the blaming mood rather than ignoring it or better explaining what I meant.

All of this is for a better web or a better future of the web, none of this should fall down into insults.

Wednesday, October 12, 2011

What Is Wrong About 17259 Lines Of Code

It is the most popular, somehow pointless, and often funny gist of these days.

It's about Dart, the JavaScript alternative proposed by Google.

Why So Many Lines Of Code

The reason a simple "Hello World" contains such an amount of code is because:
  1. the whole library core is included and not minified/optimized, but in the real world we should use the proper flag in order to generate files where only the necessary parts of the library are included
  2. whoever created such core library did not think about optimizing their code
What I am saying is that common techniques such as code reusability do not seem to be in place at all:

// first 15 lines of Dart core
function native_ArrayFactory__new(typeToken, length) {
  return RTT.setTypeInfo(
    new Array(length),
    Array.$lookupRTT(RTT.getTypeInfo(typeToken).typeArgs));
}

function native_ListFactory__new(typeToken, length) {
  return RTT.setTypeInfo(
    new Array(length),
    Array.$lookupRTT(RTT.getTypeInfo(typeToken).typeArgs));
}

ListFactory is nothing, I repeat, nothing different from an Array, and since the whole core is based on a weird naming convention, nothing could have gone wrong so far doing something like:

// dropped 4 lines of library core
var native_ListFactory__new = native_ArrayFactory__new;

Please note methods such as Array.$lookupRTT, and bear in mind that Dart does not work well together with JavaScript libraries since native constructors and their prototypes seem to be polluted in all possible ways.

Not Only Redundant Or Obtrusive Code

While I agree that at this stage it can be premature to judge the quality of Dart code once translated for the JavaScript world, it is really not the first time I am not impressed by JavaScript code proposed by Google.
Google is a great company full of ultra skilled Engineers. Unfortunately it looks like few of them have excellent JavaScript skills and most likely they were not involved in this Dart project ( I have been too hard here but I have never really seen gems in Google JS libraries )

// line 56 of Dart core
function native_BoolImplementation_EQ(other) {
  if (typeof other == 'boolean') {
    return this == other;
  } else if (other instanceof Boolean) {
    // Must convert other to a primitive for value equality to work
    return this == Boolean(other);
  } else {
    return false;
  }
}

// how I would have written that
function native_BoolImplementation_EQ(other) {
  return this == Boolean(other);
}

Please note that both fail somehow ...

native_BoolImplementation_EQ.call(true, new Boolean(false)) // true
// so that new Boolean(false) is EQ new Boolean(true)

... while if performance was the problem, bear with me and look at what Dart came up with ...

124 Lines Of Bindings

Line 80 starts with:

// Optimized versions of closure bindings.
// Name convention: $bind_(fn, this, scopes, args)

... and it goes on exactly the way you would never expect ...

function $bind0_0(fn, thisObj) {
  return function() {
    return fn.call(thisObj);
  }
}
function $bind0_1(fn, thisObj) {
  return function(arg) {
    return fn.call(thisObj, arg);
  }
}
function $bind0_2(fn, thisObj) {
  return function(arg1, arg2) {
    return fn.call(thisObj, arg1, arg2);
  }
}
function $bind0_3(fn, thisObj) {
  return function(arg1, arg2, arg3) {
    return fn.call(thisObj, arg1, arg2, arg3);
  }
}
function $bind0_4(fn, thisObj) {
  return function(arg1, arg2, arg3, arg4) {
    return fn.call(thisObj, arg1, arg2, arg3, arg4);
  }
}
function $bind0_5(fn, thisObj) {
  return function(arg1, arg2, arg3, arg4, arg5) {
    return fn.call(thisObj, arg1, arg2, arg3, arg4, arg5);
  }
}

function $bind1_0(fn, thisObj, scope) {
  return function() {
    return fn.call(thisObj, scope);
  }
}
function $bind1_1(fn, thisObj, scope) {
  return function(arg) {
    return fn.call(thisObj, scope, arg);
  }
}
function $bind1_2(fn, thisObj, scope) {
  return function(arg1, arg2) {
    return fn.call(thisObj, scope, arg1, arg2);
  }
}
function $bind1_3(fn, thisObj, scope) {
  return function(arg1, arg2, arg3) {
    return fn.call(thisObj, scope, arg1, arg2, arg3);
  }
}
function $bind1_4(fn, thisObj, scope) {
  return function(arg1, arg2, arg3, arg4) {
    return fn.call(thisObj, scope, arg1, arg2, arg3, arg4);
  }
}
function $bind1_5(fn, thisObj, scope) {
  return function(arg1, arg2, arg3, arg4, arg5) {
    return fn.call(thisObj, scope, arg1, arg2, arg3, arg4, arg5);
  }
}

function $bind2_0(fn, thisObj, scope1, scope2) {
  return function() {
    return fn.call(thisObj, scope1, scope2);
  }
}
function $bind2_1(fn, thisObj, scope1, scope2) {
  return function(arg) {
    return fn.call(thisObj, scope1, scope2, arg);
  }
}
function $bind2_2(fn, thisObj, scope1, scope2) {
  return function(arg1, arg2) {
    return fn.call(thisObj, scope1, scope2, arg1, arg2);
  }
}
function $bind2_3(fn, thisObj, scope1, scope2) {
  return function(arg1, arg2, arg3) {
    return fn.call(thisObj, scope1, scope2, arg1, arg2, arg3);
  }
}
function $bind2_4(fn, thisObj, scope1, scope2) {
  return function(arg1, arg2, arg3, arg4) {
    return fn.call(thisObj, scope1, scope2, arg1, arg2, arg3, arg4);
  }
}
function $bind2_5(fn, thisObj, scope1, scope2) {
  return function(arg1, arg2, arg3, arg4, arg5) {
    return fn.call(thisObj, scope1, scope2, arg1, arg2, arg3, arg4, arg5);
  }
}

function $bind3_0(fn, thisObj, scope1, scope2, scope3) {
  return function() {
    return fn.call(thisObj, scope1, scope2, scope3);
  }
}
function $bind3_1(fn, thisObj, scope1, scope2, scope3) {
  return function(arg) {
    return fn.call(thisObj, scope1, scope2, scope3, arg);
  }
}
function $bind3_2(fn, thisObj, scope1, scope2, scope3) {
  return function(arg1, arg2) {
    return fn.call(thisObj, scope1, scope2, arg1, arg2);
  }
}
function $bind3_3(fn, thisObj, scope1, scope2, scope3) {
  return function(arg1, arg2, arg3) {
    return fn.call(thisObj, scope1, scope2, scope3, arg1, arg2, arg3);
  }
}
function $bind3_4(fn, thisObj, scope1, scope2, scope3) {
  return function(arg1, arg2, arg3, arg4) {
    return fn.call(thisObj, scope1, scope2, scope3, arg1, arg2, arg3, arg4);
  }
}
function $bind3_5(fn, thisObj, scope1, scope2, scope3) {
  return function(arg1, arg2, arg3, arg4, arg5) {
    return fn.call(thisObj, scope1, scope2, scope3, arg1, arg2, arg3, arg4, arg5);
  }
}

I really don't want to comment on the above code, but here's the thing:

Dear Google Engineers,
while I am pretty sure you all know the meaning of apply, I wonder if you truly needed to bring such an amount of code with "optimization" in mind for a language translated into something that requires function calls all over the place, even to assign a single index to an array object.
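
Just as a sketch of what apply buys you (plain JS, nothing Dart specific): one generic helper covers every arity those $bindX_Y variants enumerate, at the cost of copying arguments per call.

// a minimal, generic alternative based on apply (illustrative only)
function $bind(fn, thisObj /*, ...scopes */) {
  var scopes = Array.prototype.slice.call(arguments, 2);
  return function () {
    return fn.apply(
      thisObj,
      scopes.concat(Array.prototype.slice.call(arguments))
    );
  };
}

// $bind(fn, obj)          ~ the $bind0_* family
// $bind(fn, obj, s1)      ~ the $bind1_* family
// $bind(fn, obj, s1, s2)  ~ the $bind2_* family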

No fools guys, if you see functions like this:

function native_ObjectArray__indexAssignOperator(index, value) {
  this[index] = value;
}

you may realize how much overhead exists in Dart once it is used in non Dart capable browsers.
These browsers will, most likely, do something like this:

try {
  if (-1 < $inlineArrayIndexCheck(object, i)) {
    native_ObjectArray__indexAssignOperator.call(object, i, value);
    // or object.native_ObjectArray__indexAssignOperator(i, value)
  }
} catch(e) {
  if (native_ObjectArray_get$length.call(object) <= i) {
    native_ObjectArray__setLength.call(object, i + 1);
  }
  try {
    native_ObjectArray__indexAssignOperator.call(object, i, value);
  } catch(e) {
    // oh well ...
  }
}

rather than:

object[i] = value;


Early Stage For Optimizations

This is a partial lie because premature or unnecessary optimizations are all over the place. 120 lines of bindings for a core library that will be slower not only on startup but during the whole lifecycle of the program cannot really solve a thing, can it?

The Cost Of The Operator Overload

This is a cool feature representing another 150 lines of code, so that something like this:

1 + 2; // 3

will most likely execute this:

// well not this one ...
function ADD$operator(val1, val2) {
  return (typeof(val1) == 'number' && typeof(val2) == 'number')
    ? val1 + val2
    : val1.ADD$operator(val2);
}

// but this
ADD$operator(1, 2); // 3

// with recursive calls to the function itself if ...
ADD$operator(new Number(1), new Number(2));

I am sure we can all sleep properly now that operator overloading has landed on the web: a feature that works nicely with matrices and vectors as a shortcut for multiplication is finally able to slow down every single addition.
Did we really need this? Was operator overloading the reason the web sucks as it is?
If so, I can't wait to see PHP moving in the same direction, directly in core!

Which Problem Would Dart Solve

I am at line 397 out of 17259 and I cannot go further than this right now, but I think I have seen enough.
I have heard/read that Dart's aim apparently is "to solve mobile browsers fragmentation".
Of course, mobile browsers, those already penalized by all possible non performance oriented practices, those browsers with the lowest computation power ever, will basically die if there is no native Dart engine ... everything 2 up to 10 times slower for devices, specially older ones, that will never see a native Dart engine in core and that for this reason will have to:
  • download the normal page ignoring the script application/dart
  • download via JavaScript the whole Dart transpiler
  • once loaded, grab all script nodes with type application/dart
  • translate each node into JavaScript through the transpiler
  • inject the Dart library core and inject every script
From the company that did not close the body tag in its primary page in order to have the fastest startup/visualization ever, don't you think the above procedure is a bit too much for a poor Android 2.2 browser?
Bear in mind mobile browsers are already up to 100 times slower on daily web tasks than browsers present on the Desktop.

Not Really What We Need Today

I keep fighting about what's truly needed on the web, and I have said already: surely not a new programming language ( and also ... guys, you already had GWT, didn't you ? ).
I would enormously appreciate it if anyone from Google explained to me why Dart was so needed and what kind of benefits it can bring today.
I can see a very long term idea behind it but still, why do we all have to start from scratch, breaking everything we published and everything we know about the web so far ?
Why did this team of 10 or 30 developers not help the V8 one to bring StructType and ArrayType and boost up inferences in JavaScript ?
Why Dart ? What are the performance boosts that V8 or WebCL will never achieve ? What is the WebCL status in Chromium ?
Where is a native CoffeeScript VM, if syntax was the problem ?
... and many more questions ... thanks for your patience.

update ... I have to ask this too:
Doesn't this Dart language look like the VBScript of 2011 ?
Wasn't VBScript an Epic Fail ?

Sunday, October 09, 2011

Taking The Bat-Formula To The Next Level

When you wake up on Sunday morning with an upside-down stomach and batcode in mind, you may realize it's time to rest a bit.

with (/*Bat*/Math) Array(16).join(
  pow(/*JOK*/E/*R*/, cos, E/*vil*/)
) + "batman";

The output is the same produced by the original bat-formula:

'NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNbatman'

Have a nice Sunday.

A Better is_a Function for JS

In 2007 I posted about get_class and is_a functions in JavaScript in order to simulate the original PHP functions.

Well ... that was crap, since a much simpler and more meaningful version of the is_a function can easily be summarized like this:

var is_a = function () {
  function verify(what) {
    // implicit object representation
    // the way to test primitives too
    return this instanceof what;
  }
  return function is_a(who, what) {
    // only undefined and null
    // return always false
    return who == null ?
      false :
      verify.call(who, what)
    ;
  };
}();

... or even smaller with explicit cast ...

function is_a(who, what) {
  // only undefined and null
  // return always false
  return who == null ?
    false :
    Object(who) instanceof what
  ;
}

An "even smaller" alternative via @kentaromiura

function is_a(who, what) {
  return who != null && Object(who) instanceof what;
}


Here is a usage example:

alert([
  is_a(false, Boolean), // true
  is_a("", String), // true
  is_a(123, Number), // true
  is_a(/r/, RegExp), // true
  is_a([], Array), // true
  is_a(null, Object), // false
  is_a(undefined, Object) // false
]);

As tweeted a few minutes ago, an alternative would be to pollute the Object.prototype:

Object.defineProperty(Object.prototype, "is_a", {
  value: function (constructor) {
    return this instanceof constructor;
  }
});

// (123).is_a(Number); // true

However, this way would not scale with null and undefined, so that for each test we would need to check them, and this is boring.
Finally, I would not worry about cross frame variables since, via postMessage, everything has to be serialized and unserialized.

Thursday, October 06, 2011

implicit require in node.js

Playing with Harmony Proxy I came out with a simple snippet:
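
Purely as an illustrative sketch of that idea (not the original snippet, and using today's Proxy syntax rather than the Harmony-era Proxy.create API, so treat the details as assumptions): trap property access on module and lazily require whatever is missing.

// illustrative sketch only, not the original snippet
module = new Proxy(module, {
  get: function (target, name) {
    // known properties (exports, id, ...) pass through untouched,
    // any other string key is resolved through require on first access
    return (typeof name !== "string" || name in target) ?
      target[name] :
      require(name);
  }
});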


The aim of the above snippet is to forget the usage of require ... here are some examples:

module.sys.puts("Hello implicit require");

var fs = module.fs;
fs.stat( ... );

It's compatible with nested namespaces too and, if there are non JS chars in the middle ... well:

var Proxy = module["node-proxy"];

Wednesday, October 05, 2011

bind, apply, and call trap

A quick one out of the ECMAScript mailing list:

var
  // used to trap function calls via bind
  invoke = Function.call,
  // normal use cases
  bind = invoke.bind(invoke.bind),
  apply = bind(invoke, invoke.apply),
  call = bind(invoke, invoke)
;


What Is It

This is a way to trap native function methods in a handy way. Used in a private scope, it addresses these methods once, so that we can rely on the fact nobody can possibly change them later via some script injection, as long as we are sure our script has been loaded at the very beginning.
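
Spelled out, the net effect of the three shortcuts (my own annotation) is the following, with the bonus that, assuming the snippet runs first, they keep working even if somebody later overwrites Function.prototype.bind, apply, or call:

// what the three shortcuts above boil down to:
// bind(fn, thisObj)            ~ fn.bind(thisObj)
// apply(fn, thisObj, argsArr)  ~ fn.apply(thisObj, argsArr)
// call(fn, thisObj, arg1, ...) ~ fn.call(thisObj, arg1, ...)

var toString = bind(invoke, {}.toString);
toString([1, 2, 3]); // "[object Array]"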

How To Use Them

Here few examples:

// secure hasOwnProperty
var hasOwnProperty = bind(invoke, {}.hasOwnProperty);
// later on
hasOwnProperty({key:1}, "key"); // true
hasOwnProperty({}, "key"); // false

// direct slice
var slice = bind(invoke, [].slice);
slice([1,2,3], 1); // 2,3
slice(arguments); // array

// direct call
call([].slice, [1,2,3], 1); // 2,3
// direct apply
apply([].slice, [1,2,3], [1]); // 2,3

// bound method
var o = {name:"WebReflection"};
o.getName = bind(
  // the generic method
  function () {
    return this.name;
  },
  // the object
  o
);
o.getName(); // WebReflection

That's pretty much it, except that if we don't trust the native Function.prototype we should not trust anything else either, so maybe it's good to use these shortcuts just because they are handy ;)

Monday, October 03, 2011

Dear Brendan, Here Was My Question

I had the honor to personally shake the hand of the man that created my favorite programming language: Brendan Eich!

I also dared to ask him a question about ES6 and I would like to better explain the reason for that question.

I have 99 problems in JS, syntax ain't one

I don't know who said that but I completely agree with him.
Here's the thing: one of the main ES6 aims is to bring new, non breaking, shimmable, native constructors such as StructType, ArrayType, and ParallelArray.
We have all seen a demo during Brendan's presentation and this demo was stunning: an improvement from 3~7 to 40~60 Frames Per Second over a medium complex particles animation based, I believe, on WebGL.

These new native constructors are indeed able to simplify the JS engine's job, being well defined, known, and "compilable" at runtime in order to reach C/C++-like performance.

These new constructors can also deal directly behind the scenes, without repeated and redundant "boxing/unboxing" or conversion, with canvas, I hope both 2D and 3D, and images.

All of this without needing WebCL in the middle and this is both great and needed in JS: give us more raw speed so we can do even more with the current JS we all know!

Not Only Performances

The harmony/ES6 aim is also to enrich the current JavaScript with many new things such as block scopes, let, yield, destructuring, and any sort of new syntax sugar we can imagine.
It is also planning to bring a whole new syntax to JavaScript, so that the one we know won't be recognizable anymore.

I Have Been There Already

I am a Certified ActionScript 2.0 Developer and back at that time Adobe bought Macromedia, and before that Macromedia changed the ActionScript language 3 times in 3 and a half years: insane!!!
The best part of it is that everything that was new and not compatible anymore with ActionScript 1, syntax speaking, was possible already before and with exactly the same performance: the SWF generator was creating AS1.0 compatible bytecode out of AS2.0 syntax.

AS 2.0 was just sugar on top indeed, but it was not enough: in order to piss off even more the already frustrated community, ActionScript changed again into something Javaish ... at least this time performance was slightly better thanks to a better engine capable of using types in a convenient way.

It must be said that at that time JIT compilers and all the ultra powerful/engineered tricks included in every modern JavaScript engine were not considered, possible, or implemented ... "changing the language is the solution" ... yeah, sure ...

Rather than bring the unbelievable performance boost that the V8 Engine, as an example, brought to JavaScript in 2007, a boost that keeps improving since that time and in almost every engine, they simply changed the whole nature of the language, breaking experience, libraries, legacy, and everything that had been done until that time: this was the Macromedia option, the one that failed by itself and has been acquired, indeed, by the bigger Adobe.

These days, the ActionScript 3.0 community is simply renewed and happy ... now, try to imagine if tomorrow Adobe announced that ActionScript 4 will be like F#, a completely different new syntax, that most likely won't bring much more performance, nor concrete/real benefits for the community or their end users.

Is this really the way to go? Break potentially everything for the sake of making happy some developer convinced that -> is more explicit or semantic than function ?

CoffeeScript If You Want

As somebody wrote about W3C, why even waste time rather than focus on what is truly needed ?
Didn't CoffeeScript or GWT teach us that, if you want a language that is not JavaScript, you can create your own syntax and, if the community is happy, it will adopt the "transformer" in its projects ?
Didn't JavaScript demonstrate already that its flexibility is so great that almost everything can be recompiled into it ?
Emscripten is another example: legacy C/C++ code recompiled out of its LLVM into JavaScript ... how freaking great must this "JavaScript toy" be to be capable of all of this ?
We all know now how to create our own syntax manager, and many developers are using CoffeeScript already and they are happy ... do they need ES6 sugar? No, they can use CoffeeScript, can't they? Moreover ...
The day ES6 becomes CoffeeScriptish, the CoffeeScript project itself will probably die since it won't make sense anymore.
The day ES6 becomes CoffeeScriptish, all our experience, everything written about JS so far, all the freaking cool projects created, consolidated, and used for such a long time, demonstrating they are simply "that good", won't be recyclable anymore.
Also, how are we supposed to integrate, for cross browser compatibility, the new JS for cooler browsers and the old one for "not that cool yet" browsers?

Continuous Integration

SCRUM teaches us that sprints should be well planned and tasks should be split down into smaller tasks if one of them is too big.
What I see as too big here is an ECMAScript milestone 6 whose aim is to include:
  • the performance oriented constructors, the only thing truly needed by this community now
  • the block scoped let, generators, destructuring + for/of and pseudo JS friendly sugar that can be implemented without problems in CoffeeScript
  • the class statement, over a prototypal language we all love, plus all possible sugar and shortcuts for the function word, once again stuff already possible today but, if truly needed, replicable via CoffeeScript

Is it really not possible to go ES 5.3 and bring what's needed, with as much focus as possible on what's needed, so that the community can be happy as soon as possible and think about what's not really needed after?

Wouldn't this accelerate the process ?

As Summary

Mr Eich, it's your baby, and I am pretty sure you don't need me to feel proud of it. It's a great programming language, a bit out of common schemas/patterns but able to survive for years revolutionizing the World Wide Web.
It's also something "everything can fall back into", and I would rather create a Firefox extension able to bring a CoffeeScript runtime into every surfed page, as long as we can have intermediate releases of these engines, bringing one step at a time all these cool features but prioritizing them according to what is missing.

I thank you again for your answer, whose summary is: "we are already experimenting and bringing these features in SpiderMonkey ..." and this is great, but we are talking about meetings, decisions, and, in the meanwhile, the time to agree about everything else too, specially the new syntax.

I am pretty sure that, following one step at a time, we could already have a Christmas present here, since I don't see how StructType and ArrayType can be problematic to implement, and eventually optimize later, in every single engine.

These constructors should be finalized in some intermediate specification of the ECMAScript language, so that everybody can commit to it, and everybody would be gradually happier about JavaScript each half year.

In 2013 most likely new powerful CPUs/GPUs will be able to handle the heavy stuff we are trying to handle now ... so it's now that we would like to be faster and it's now that we need these constructors.

I have also shimmed down these constructors already, so that incremental browser upgrades will make these shims useless, but performance will be increased wherever these are applied ... a simple example:

var Float32Array = Array, // better shimmed
Int32Array = Float32Array
....

I use similar code already and on a daily basis: it does not hurt much, it works today everywhere, and it goes full speed where those constructors are available.

A whole new syntax incompatible with current specifications could be good and evil at the same time, plus it will take ages before every engine can be compatible with it ... we all know the story here.

I am pretty sure I am saying nothing new here and I do hope that Harmony will bring proper harmony between what we have now, what we need now, and what we would like to have tomorrow, using projects like CoffeeScript if we really can't cope, today, with this beautiful unicorn.

Thank you for your patience

Sunday, October 02, 2011

Me At JSConf.EU 2011

About my JSConf.EU Talk.

Here are my JSConf EU 2011 Slides, and here again the speaker rate (only if you have seen the talk, pls).

update I forgot to mention the lazy feature detection object proposal!

Thanks everybody, it has been a great weekend :)

Wednesday, September 28, 2011

RIA VS OS

... have you ever thought about it ? I did a few times in my 11+ years of RIA centric career!
Even if it's like comparing potatoes with tomatoes, I'd like to share my thoughts about it, would you mind ?

What we always laughed about OS


  • the blue/gray screen with an incomprehensible error message

  • the Message Box with some rant about some memory address failure

  • the unresponsive OS due to some broken application able to make everything else stuck regardless of the number of CPUs and threads the OS can handle

  • the "quit program" explicit action that does not quit the program

  • any sort of security issue

  • the change/update that requires a system reboot



What we are always "scared about" online


  • the white screen due to some JS/CSS failure in the current browser

  • the forgotten alert or console.log inside some try catch with some rant about a generic error message (or even worse, the unmanaged error)

  • the unresponsive DOM/web page due to some broken piece of JavaScript able to make everything else stuck regardless of the number of CPUs and WebWorkers the Browser can handle

  • the "close window/tab" explicit action that takes ages due to some greedy onunload operation

  • any sort of security issue

  • the change/update that requires a page reload



We are all in the same field

Architecture matters, experience matters, performance matters, investigation matters, code quality matters, unit tests matter, UX is essential, and UI is only attractive.
This is the software development world: no matter which language, no matter which customer, no matter which company ... isn't it? So why do things keep being the same in every software field ?

Delivery, delivery, delivery !

The main reason many applications out there, web or not, will rarely be that good.
The scrum purpose is to make theoretically well organized baby steps, and agile is often a utopia due to the constant, drastic changes you may have to face during an already planned task, while you are implementing the task itself.
Time ? The worst enemy when it comes to quality. As I wrote in September 2009, we are losing software quality due to temporary solutions that last forever, decisions made without real world use cases to consider, and everything else web developers are complaining about, as an example, regarding W3C decisions.
Is W3C that bad ? I think it's great, and I think we should all appreciate the Open Source effort we can read, support, or comment on a daily basis, as is the case for JavaScript's future, if you are tough enough to face people proud by default about their decisions ... they can understand, they can change their mind.

The direction ?

Too much new stuff in too short a time could imply future problems when it comes to maintainability and backward compatibility, the day somebody realizes: "something went terribly wrong here!"
The legacy code that blocks improvements, the countries or corporates stuck behind old, deprecated, insecure pseudo standards that are causing them more damage than savings.
Is everything going to be produced cheap with screwed up quality? Well, this is apparently the tendency with clothes, cars, and something I have always wondered about: if your Master/PhD is from the most expensive and qualified institute in this world, how would you feel working underpaid while some emerging country is able to provide everything you have been instructed for at 1/10 of your salary and with many more units per month ?
Either that institute won't be worth it anymore, or you are going to feel like everything you learned did not really make sense.
This is a potential side effect of outsourcing too, the cheap alternative to a problem that many times is not truly solved, simply delegated and delivered with lower quality but respecting, most of the time, the deadline that, once published, will make at most 70% of customers happy, losing 30% until they face the same problems with the next "brand" they decide to follow.

Dedicated to

All those persons out there that do their best on a daily basis believing in their job, whatever it is.
All those persons underpaid, still able to put in their best effort and provide results regardless of the country/economy situation.
All those workers that would like to have more time to do things better, and all those Open Source/Standards makers that should be more distant from this frenetic delivery concept, and a bit more focused on doing things properly and able to last much longer ... we would not need so many changes, software speaking, if things were already great, and this is what I have always tried to do with all my old projects, often forgotten, still working after 5 or more years of untouched code, and under newer versions of the same programming language.
I had more time back then, and things are still working.
I am missing products like the original Vespa, out of an Italian company, that can still run in your city streets with 30 years on its shoulders: can your cheap scooter, software, or architectural decision do the same?

Monday, September 26, 2011

About Me At JSConf EU

I know I am not on the list of speakers page yet, but I am actually in the official schedule already.

It's about jsconf.eu and my talk on Sunday morning at 10:45 entitled ...

Buzz It For Real !

... the tortuous road to Mobile HTML5 Apps

For the very first time in my life I will not represent just myself during a conference. This time I will talk about a few ideas, problems, and solutions we have faced during the "still in beta" development of our Mobile HTML5 Applications.

I will talk about some problems completely ignored by the majority of HTML5 developers, providing concrete real-world examples, and solutions, over tested code.

I know Sunday comes after the first conference party and I hope you, as well as me, won't be too drunk to follow my talk :D

Of course a SpeakerRate page was a must have so see you there and enjoy the conference!

Saturday, September 17, 2011

An Introduction to JS-Ctypes

Update

If you have time, follow the whole story in the es-discuss mailing list, while if you don't have time here is the quick summary:
js-ctypes' purpose is different from JS.next typed structs/arrays, so it looks like it was my mistake to compare tomatoes and potatoes.
I bet everybody else in this world could have compared these two different beasts due to the identical name, look, and similar usage.
If ctypes are not used outside JS, these are not JIT optimized in any case, so now we know why performance is so slow compared with JS code.

On new Struct({literal:pairs}) VS new Struct(literal = pairs) there is still no answer and, even if it's obviously possible to avoid an object creation per each created instance, recycling a single object and refreshing its properties the same way we could with property descriptors and Object.defineProperty, I have pointed this out in that way on purpose, since I can already see a massive usage of that unoptimized pattern and I would like to know that engines are able to optimize that pattern Just In Time or by tracing it.

More questions, "flames", and answers about this topic in the link I have already posted at the very beginning.


A few days ago I had a quick chat with Ben Green about statically defined JavaScript structs.
He reminded me "somebody wrote something about faster JS objects" and I remembered I saw it as well, but I could not find the bloody source until I crashed again into Brendan Eich's blog, more specifically the My TXJS talk post.

JavaScript Binary Data

The slide I am talking about is at page 14:

const // the statically defined and typed structs
  Point2D = new StructType({x:uint32, y:uint32}),
  Color = new StructType({r:uint8, g:uint8, b:uint8}),
  Pixel = new StructType({point: Point2D, color: Color}),
  // the static collection
  Triangle = new ArrayType(Pixel, 3);

new Triangle([
  {point: {x:0, y:0}, color: {r:0, g:0, b:0}},
  {point: {x:5, y:5}, color: {r:10, g:10, b:10}},
  {point: {x:10, y:0}, color: {r:20, g:20, b:20}}
]);

"Mind Blown!" as first reaction, then I decided to investigate a bit more during the evening in order to bring some better feedback and have a better understanding of this concept ... but how did I do that?

js-ctypes and Mozilla

Even if landed and approved only recently in JS.next, ctypes have been available in Firefox since version 4.
I like the fact Mozilla keeps surprising me as one of the most advanced environments when it comes to the JavaScript world but, before getting too excited, we'd better keep reading this post.

The (ideal) Purpose

Dave Mandelin, in his Know Your Engines slides, enlightened us describing how things get faster behind the interpreted JavaScript scene. As scripting language developers we would like to not care at all about details such as "do not change variable types", but as I asked during the falsy values conference: "what about objects and their properties?"
JS-Ctypes seem to be the "ideal kick ass performance" trick we were all waiting for: an explicit, yet scriptish, way to describe well known structures in order to make the engine able to optimize and compile these structures at runtime and boost up performance.
This concept is not new at all in the programming world.

Cython

From Wikipedia:
Cython is a programming language to simplify writing C and C++ extension modules for the CPython Python runtime. Strictly speaking, Cython syntax is a superset of Python syntax additionally supporting:
- Direct calling of C functions, or C++ functions/methods, from Cython code.
- Strong typing of Cython variables, classes, and class attributes as C types.
Cython compiles to C or C++ code rather than Python, and the result is used as a Python Extension Module or as a stand-alone application embedding the CPython runtime
I do believe it comes naturally to compare js-ctypes to Cython and I am pretty sure this was initially the exact purpose of the Mozilla extension or, at least, the Mozilla folks' idea.
Ironically, this is the same reason js-ctypes are not available by default in Firefox and others except via extensions.

// if not in an extension, deprecated but
// the only way to bring js-ctypes inline in a web page
netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');

// import ctypes
Components.utils.import("resource://gre/modules/ctypes.jsm");

Bear in mind the above code will not work online. In order to test ctypes in Firefox we need to accept privileges risks offline ( file://ctypes.test.html ).
The reason is simple: rather than decouple the power of ctypes from the ability to use compiled libraries or dll, Mozilla put everything into a single module making its usage basically pointless/impossible for Web applications: big mistake!

A Reasonable Shim

It's about 3 years or more I am writing examples and proposals in this blog about "strict typed JavaScript" but this is not the case.
If we want to shim in a good way js-ctypes we should actually forget the type part or performances will be extremely compromised per each bloody created object.
Unit test speaking, once we are sure that Firefox runs all our cases, we'd better trust nothing bad will happen in all shimmed browsers.

try {
  netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');
  Components.utils.import("resource://gre/modules/ctypes.jsm");
} catch(ctypes) {
  // a minimal ctypes shim by WebReflection
  this.ctypes = {
    ArrayType: function ArrayType(constructor, length) {
      var name = (constructor.name || "anonymous") + "Array";
      return Function("c", "".concat(
        "return function ", name, "(o){",
          "var i=(o||[]).length;",
          length ? "if(i!=" + length + ")throw 'wrong length';" : "",
          "if(!(this instanceof ", name, "))",
            "return new ", name, "(o);",
          "this.length=i;",
          "while(i--)",
            "this[i]=new c(o[i]);",
        "};"
      ))(constructor);
    },
    StructType: function StructType(name, fields) {
      for (var key, current, proto = {}, init = [], i = 0; i < fields.length; ++i) {
        current = fields[i];
        for (key in current) {
          if (current.hasOwnProperty(key)) {
            init.push("this['" + key + "']=o['" + key + "']");
            proto[key] = null;
          }
        }
      }
      return Function("p", "".concat(
        "function ", name, "(o){",
          "if(!(this instanceof ", name, "))",
            "return new ", name, "(o);",
          init.join(";"),
        "}",
        name, ".prototype=p;",
        "return ", name
      ))(proto);
    }
  };
}

To make things even faster, I have adopted an "inline compiled JS" technique, so that each defined struct will do only the most basic tasks per instance creation.
Following is an example of js-ctypes usage:

// as is for native constructors, no need to "new"
const Point2D = ctypes.StructType(
  "Point2D", // the struct name
  [ // the struct description
    {x: ctypes.int},
    {y: ctypes.int}
  ]
);

// a struct can be used to define a collection of the same type
const Segment2D = ctypes.ArrayType(
  Point2D, // the value type
  2 // the length
);

// if a length is specified, this must match during construction

// if no length is specified any amount of elements can be created
const Line2D = ctypes.ArrayType(Segment2D);

// no need to invoke all constructors
// as long as the Array/Object structure
// matches the defined one
var line = Line2D([
  [
    {x: 0, y: 0},
    {x: 10, y: 10}
  ], [
    {x: 10, y: 10},
    {x: 20, y: 20}
  ], [
    {x: 20, y: 20},
    {x: 30, y: 30}
  ]
]);

Even if, geometrically speaking, the above example does not make much sense, a line being by definition represented by an infinite number of points, I am pretty sure you got the logic.

Still NOT JS.next

The struct definition is slightly different from the one shown by Brendan Eich, but at least the ArrayType signature seems to be similar.
If what Brendan showed is actually true, we will not have a way to define statically typed getters and setters.
Not that a function per get/set can improve performance, but I consider this a sort of limit compared with other statically typed programming languages.

10X Slower

Surpriiiiiiiiiiiseeeeeee!!! Even Firefox Nightly performs like a turtle on steroids over statically typed collections, and here is the test you should save on your desktop and launch via the file protocol.
If you see the alert, ctypes have not been loaded ... but if you test it in Firefox via the file protocol and you allow the module, you will not see any alert but an actual benchmark of three different types of collections:
  • a generic Array of Objects
  • a typed collection of typed objects
  • an Int32Array implementation over int values with an object creation per each loop iteration
I don't know what your score is ( and I could not manage to test it via jsperf ) but at least on my MacBookPro the numbers are 110ms for ctypes VS 19 or 16 for the other two tests.

What The Fuck Is Going On

Pardon my French but I could not describe my reaction in a better way ... however, I have an idea of what's happening there ...

Slow Binding

If ctypes are checking and transforming all values at runtime in order to provide nicely written Errors, somebody screwed up the speed boost idea here. I would rather see my browser implode, my system crash, my MacBook explode, than think that every single bloody object creation is actually slower than a non statically defined one!
"check all properties, check all types, convert them into C compatible structs, bring them back to the JS world per each index access" ... I mean, this cannot be the way to make things faster.
The operation could surely be more expensive in terms of Struct and List definitions but, for fuck sake, these cannot be trapped behind the scene: these must be instantly available as hidden pre compiled/pre optimized objects and, if some assignment goes wrong, just exit the whole thing!

Static Is Not For Everybody

Let "week end hobbyists" use JS as they know but give JS the native power of C. Don't try to save poor JS kids/developers here, you either bring this power in or you don't.
Any application that will screw an assignment over a statically typed collection or struct does not deserve a place in the web, as well as any sort of broken C code cannot be compiled or it will kill the execution if something goes wrong runtime.

I am not joking here, think about those developers that actually know what they are doing and forget for once the "too easy to use" concept: we all desire to handle statically typed code via JS and we expect a massive performances boost.

Double Memory Consumption

The typed part of JavaScript seems to ignore a little detail: every object will require both the non statically typed structure, {x: Number, y: Number}, plus its statically typed equivalent: Point2D.
I am not sure engines can optimize that much here and, thinking about mobile platforms, I wonder if the TC39 team is actually thinking "Desktop only" ... WebCL seems, once again, a much better alternative than ctypes here, 'cause if all these operations mean a higher memory footprint and slower interaction, we are in a no-go specification that should never land in the JS world.
We really can implement strict type checks by ourselves, as in the sketch below, so either ctypes bring something powerful and fast, or I can already see a lot of effort, implementation speaking, for zero return, real use cases speaking.
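
Just to show what "by ourselves" means (the names here are purely illustrative), a hand-rolled typed property via a setter takes a few lines:

// a minimal sketch of a hand-rolled "typed" int property
function defineTypedInt(obj, key) {
  var value = 0;
  Object.defineProperty(obj, key, {
    get: function () { return value; },
    set: function (v) {
      // reject anything that is not an integer number
      if (typeof v != "number" || v % 1 !== 0) {
        throw new TypeError(key + " must be an int");
      }
      value = v;
    }
  });
  return obj;
}

var point = defineTypedInt(defineTypedInt({}, "x"), "y");
point.x = 3;      // fine
// point.y = 1.5; // throws: y must be an int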

const Point2D = ctypes.StructType(
  "Point2D", // the struct name
  [ // the struct description
    {x: ctypes.int},
    {y: ctypes.int}
  ]
);

// how it is now in ES.next too
var p = new Point2D(
  {x: 123, y: 123} // why on earth!
);

// how it should be in ES.next
var p = new Point2D(
  // no object dependency/creation
  x=123,
  y=123
);

The above example is just one out of a million ways to better initialize a statically typed structure. Since JS.next will bring new sugar in any case, unless the objects used to initialize a structure are completely ignored/discarded at runtime, creating holes in terms of object reusability, the creation of a complementary object per each static instance is a non-sense.
In a few words, there is no need to overcomplicate engines when these will already be compatible with named default function arguments, is there?

As Summary

C could land in JavaScript, but it must be done properly. A too hybrid solution could bring double the problems, and all I have tried to do in this post is collaborate with the initiative, bringing thoughts and tests.
I hope this part will be specified and implemented properly, removing the "native dll binding" we don't need on the web, nor do we for node.js modules.
Sure, it's nice to have but, once we can write proper modules based on statically typed structs and collections, there won't be such a big need for pre-compiled C stuff, and all cross platform problems at that point will be solved at the browser engine level, rather than on the JS specific C module side.
Any sort of thoughts and/or clarification will be more than appreciated, but right now all I can say is: avoid this extension, don't try to screw with native system libraries, don't use this extension thinking it will bring more efficient, fast, powerful code into your app.
Thanks for your patience

Saturday, September 10, 2011

My New Programming Language

yeah, you read it correctly ... we all need another better programming language because everything we've done until now sucks.

What Sucks

  • the fact we don't learn from mistakes, which means all of us should instantly try to create a new "secretly open source programming language" so that the rest of the world can only endure it once it's out, rather than contribute to making it better/needed, as has been happening for at least 5 years with JavaScript in all possible, and truly open, channels

  • the fact Java, .NET, and all the others failed ... 'cause we are still looking for a new programming language in these days where C++11 has been approved while C never died

  • the fact we keep thinking that performance is possible only with compiled languages, forgetting that better algorithms, better practices, better tools to develop and track leaks, memory consumption, CPU/GPU cycles, can make any software fast enough or ...

  • ... the fact new standards are coming to help us with performance, as is the case for OpenCL, and new techniques are already available to speed up common tasks, as is the case for Statically Typed Collections

  • the fact we are blaming JavaScript because it is the most used programming language and, as with "the most used whatever thing" out there, more people will complain about it and even more people will enjoy it ( e.g. the unbelievable growing speed of the node.js community and all the latest server side JS related projects )

  • the fact that if a programming language is part of the scripting group it's considered a toy, regardless of the fact that any sort of application, out of billions, is working right now out there without major security, performance, or design problems

  • the fact that compiled programming language developers are not necessarily superior or more skilled than scripters ... the world of Software would be perfect otherwise, and the Web as we know it, the good one, would not exist

  • the fact that if a programming language is appreciated and used by senior professionals as well as weekend hobbyists, it must mean that language is weak and needs to be substituted

  • the fact that experience is key and, with a new language, it will be completely lost, and all sorts of inevitable problems or solutions will not be instantly available to the community

  • the fact that, if it's possible to translate this new fantastic language into JavaScript for backward compatibility, everything new this language will bring tomorrow was already possible today


As Summary

Looking forward to the revolution and looking forward to forgetting that OpenCL ever existed.
Let's hope at least all Operating Systems companies will agree, let's hope it will be the universal language we have been dreaming about since forever ... let's hope ... and sorry for this surely not needed rant.
Update A must read post from Brendan Eich who is apparently sharing my point of view with more technical reasons.
Alex Russell also, in Google & the Future of JavaScript, gives us something more about what's going on there, nice one.

Saturday, August 27, 2011

OS X Lion Automator And Mobile NOKIA Maps

When I read this article about Making Desktop Webapps in Lion my first thought was "cool!", instantly followed by "what about an experiment with the Mobile NOKIA Maps WebApp?" ... and here I come :)

m.maps.nokia.com

is the beta project I am working on right now together with a bunch of HTML5 geeks :P in order to bring the mature NOKIA Maps experience to Android 2.2+, iOS4+, and other already supported or "coming soon" devices.
Optimized for mobile but still usable with the Desktop Chrome or Safari browser, the web app is quite "cute" seen on an iPhone or other medium and small screens, and this experiment was about bringing the same "cuteness" to my Mac Mini as well: partially successful!

Lion Automator And WebPopup Limits

Unfortunately it is not possible to customize the popup browser user agent that much, and I am not even sure what kind of engine is used there ...
With the iPhone UserAgent the version exposed is 3 and Webkit 430+.
When it comes to the iPad UserAgent the version is 4, while with the Safari UA the version is the current one.
The GeoLocation API does not seem to work, and the cache seems to be cleared every time the app is closed.
Unfortunately these limits make the current beta less cool than usual, specially because every time the app is closed the storage seems to be reset, which means the last position is not shown next time, history and suggestions do not show up and, even more annoying, the routing "home to place" is not available due to the missing location.

Grab The Desktop App For OSX Lion

I have prepared everything you need to launch m.maps.nokia.com on your OSX Lion so you can give it a try.

Mobile NOKIA Maps For Automator: if necessary extract its content and click on the .app file.

I swear I did nothing different from what Andy Ihnatko described in his article, except changing the icon with the one downloaded automatically on the iPad if you pin the website to your home screen.

If you are on OSX Lion give it a try and play around, but bear in mind this beta offers much more on your smartphone ;)
If you are in OSX Lion give it a try and play around but bear in mind this beta offers much more on your smartphone ;)