Microsoft.jQuery.Unobtrusive.Validation version 3.2.0, errorPlacement and knockout templates, oh my

The problem:

Client-side validation (jQuery Validate, with Microsoft's jQuery Unobtrusive library wiring things up via data attributes) was ignoring the markup I designated for errorPlacement (<div data-valmsg-for="myCorrespondingInputToValidate"></div>) and instead was happily placing the generated error label right next to the input (which is the default jQuery Validate behavior).  The resulting visual effect (using some Bootstrap styling) was something like this:

[screenshot]

I was pretty sure this was working not long ago, but I had been shuffling things around, so I figured I had probably jacked it up somewhere.  I compared my markup against a page that was not experiencing the visual oddity and the markup was similar.  What I did notice, however, was that the other page had its markup contained in the page itself, while mine was broken into numerous knockout templates.  Hm… that *is* a difference… so the digging begins…

We have numerous custom client modules for dealing with the intersection of jQuery, knockout and unobtrusive, so it took a bit of time to discover the root cause, but I'll spare you that pain.  The bottom line: the newest version of the unobtrusive library made a subtle yet impactful change to the selectors it uses when parsing the initial document.

[screenshot 2]

In my code, all of my inputs are broken into knockout templates, which get injected into the DOM after the point the unobtrusive module fires.  If you look at the source for jQuery Validate, you'll see that "he who calls validate first wins" with regard to settings, as they are essentially loaded one time and reused.  So even if we called $jQval.unobtrusive.parse(document) again, it would be too late; the settings have already taken hold.  This translated into a loss of the errorPlacement function which unobtrusive would normally wire up for us, had that call to validationInfo.attachValidation occurred.  And that is why the default behavior of inserting a label after the validated form field kicked in.
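The "first caller wins" behavior is easy to demonstrate in isolation. Here's a minimal plain-javascript sketch of the pattern (illustrative only; this is my own toy code, not the actual jQuery Validate source):

```javascript
// Sketch of the "first caller wins" settings pattern:
// the first validate() call creates and caches the validator with
// its settings; later calls return the cached instance and silently
// ignore any new settings.
function makeForm() {
    var validator = null;
    return {
        validate: function (settings) {
            if (!validator) {
                validator = { settings: settings || {} };
            }
            return validator;
        }
    };
}

var form = makeForm();
form.validate({ errorPlacement: 'custom' });              // wins
var later = form.validate({ errorPlacement: 'default' }); // ignored
// later.settings.errorPlacement is still 'custom'
```

That's exactly the trap here: by the time my templates existed in the DOM, some earlier caller had already locked in settings without the unobtrusive errorPlacement hook.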

Ok, yay, so now what?

On one hand, you could argue that this is a potential flaw in the design, or at least something that could be exposed as an option.  I could also see this being somewhat edge-case-ish, in that literally *all* my markup with inputs is separated into knockout templates, and that may be somewhat unusual, I suppose.  So, while you could look into changing the unobtrusive library (via forking or something along those lines), I opted for a less dangerous approach (we've already forked other libraries and this would be one less to keep track of): I met the requirement of having at least one form field with data-val="true" present at parse time:

<form id="myForm">
    <!-- This is here so jquery.validate.unobtrusive
         will set jQuery Validate's settings -->
    <input type="hidden" data-val="true" disabled="disabled" />

Cheesy?  Maybe.  Reasonably clear and effective?  Yeah.

Of course, many other options exist.

Horrifying resume – MINE! The story of a hypocrite…

So, I recently printed out my resume to take with me to a meeting (I didn't have a stone tablet to etch it into).  Turns out that puppy had grown to an astonishing seven pages.  Um, yeah… so, with some candid feedback from a reviewer (and about 0.1 seconds of my own review), I was quickly aware that I had let my resume turn into a huge steaming pile of suck over the years.  I had simply piled on experience blobs written in a hasty, vague manner.  Heck, I've spent years reviewing others' resumes only to let mine become total crap.  So, needless to say, I'm cleaning it up *a tad* and trying to back-pedal from my hypocrisy.  Oh, the horror…

( shameless plug:  there is a link to my latest resume on my about page )

Web performance – That’s what’s on the menu. Interested?

Wow… just realized it's been nearly a *year* since I posted last.  WTF?  OK, that needs to change, because I've been digging very far into the SPA world during the past 18 months and, as of late, much deeper into the intricacies of optimizing client-driven applications.  I might have made the same lofty "optimize" statement eight years ago, but that would have involved an entirely different technology stack and toolset.  We're moving past server optimization, whereby you instrument your .NET call stacks and figure out you have some bloated architecture, or are going nuts with late binding, or have some over-the-top SQL query hitting every non-indexed column it can find.  Nope, I'm presuming we've already solved that, and instead focusing on the sort of performance issues that surface once the request leaves the ASP.NET pipeline, rips through IIS and hits the HTTP train over to the user.

Web performance *after* the server means everything from analyzing your resource and network utilization (do you have one compressed, minified js file or 25 bloated ones?), down to a seemingly innocuous css class assignment that slows your page down or causes jank.  Web optimization means taking full advantage of the rich set of tools available to look very closely at things like javascript heap allocations, searching for bloat, or for objects that are totally overstaying their welcome long after the last garbage collection cycle has run.  Maybe you've taken advantage of some of Chrome's (or Canary's) flags to expose rich additional functionality and internals.  These tools can help us figure out that a simple hover effect, say an increased border width (or anything else that changes geometry within the DOM), added to that cell in your pseudo-table just caused a nasty reflow and paint, slowing down the user experience.  We use these same tools to help us figure out why we're clobbering the browser, getting in the way of frames (forget 60 frames per second, we're down to like… 1…) and causing a jank-fest for your users.

Ouch!  That's a lot of stuff to learn about, but there's good news!  It's actually fun as hell to work on.  It's very challenging and can be time-consuming; however, the results can be extremely rewarding, both for you in your role as an awesome web developer and, most importantly, for your users, who get a rockin' app.  Who the heck wants to interact with a single page app if the damn thing freezes all the time… heck, they'll be asking for something silly like Silverlight at that point.  Yuck.

So, I'm pretty certain this is considered witchcraft by some people, but I hope to blog a bit more in the future to share some helpful experience I'm gaining first-hand, point out great resources and show you it's not witchcraft at all.  To that end, we'll wrap up this post by giving high praise to Google.  They have the tooling that makes this stuff reasonable.  Chrome's developer tools, Speed Tracer, a deep dive into chrome://tracing: they've got it all.  If you are unfamiliar with this world, take a look at the Chrome developer docs to get started.  Also, check out some of the sessions from the awesome Chrome Dev Summit 2013 on YouTube.

I probably have like 2 people that read this blog, but my guess is this topic may generate some interest as we continue to move away from postback purgatory and onto this wonderful new single page frontier.  Let me know if this sounds interesting or useful; that may help me gauge how much effort to put into this blog… clearly I need to step that up 🙂

Farewell Fire[fox | bug]?

I've been using Chrome's developer tools a lot lately when running my QUnit javascript unit tests (choosing to debug in my default [Chrome] browser).  The debugging experience overall is better, particularly around exceptions.  Plus, the color coding is nice to have (yeah, I know Firebug has an add-on for that, but it seems to destabilize Firebug).  The DOM/memory autocomplete is a little wonky in Chrome (perfect in Firebug), but that's pretty minor.  I guess this is nearly farewell to Firefox/Firebug.

Visual Studio 2012 – the Gift that keeps on giving

I must say, Visual Studio 2012 is a very significant improvement for development in numerous ways.  Nearly every day for the past few weeks I've encountered something else handy that this version brings.  If you have not upgraded, you are missing out; do so soon.  Today's example: I can inspect the details of a TFS changeset without an annoying modal window.

It’s the “Gift that keeps on giving the whole year through”

Stupid simple javascript instrumentation

I've blogged about this little "instrumentation" approach before, but here's the final, simple result (note: in this case, "instrumentation" really just means wrapping each method, tracking the start/end times and reporting the overall runtime of each method in milliseconds… nothing fancier… stupid simple).

You can find the complete code, along with 4 example runs to demonstrate, here.
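In case that bin ever disappears, the core of the wrap-and-time idea fits in a few lines (the names below are mine for illustration, not necessarily the ones used in the bin):

```javascript
// Minimal wrap-and-time sketch: returns a wrapper that logs the
// elapsed milliseconds for each call, preserving arguments, `this`
// and the return value of the original function.
function timeIt(fn, name) {
    return function () {
        var start = Date.now();
        var result = fn.apply(this, arguments);
        console.log(name + ' took ' + (Date.now() - start) + 'ms');
        return result;
    };
}

var slowAdd = timeIt(function (a, b) {
    for (var i = 0; i < 1e6; i++) {} // simulate work
    return a + b;
}, 'slowAdd');
slowAdd(2, 3); // logs the elapsed time, returns 5
```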

select and selectMany for Javascript Arrays

I'm a huge javascript fan.  My favorite language by far.  It's easy to bolt on new functionality.  In most cases, using already-established libraries like the ever-awesome underscore library suffices.  For example, their collection-related functions are phat-ass!  Occasionally, though, it's nice to add functionality that boosts productivity.  Here are a couple such additions:

Array.prototype.select = Array.prototype.select || function (projector) {
    var result = [];
    for (var i = 0; i < this.length; i++) {
        result.push(projector(this[i]));
    }
    return result;
};

Array.prototype.selectMany = Array.prototype.selectMany || function (projector) {
    var result = [];
    for (var i = 0; i < this.length; i++) {
        // Note: relies on the addRange helper (added to Array.prototype elsewhere)
        result.addRange(projector(this[i]));
    }
    return result;
};

 

Combining a few other handy routines (like addRange and Where) onto Array, here's a simple bin that shows selectMany in action.
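For reference, here's one way that addRange helper might look (a minimal sketch; the actual version in the bin may differ):

```javascript
// A minimal addRange helper: pushes each element of `items` onto
// this array and returns the array for chaining.
Array.prototype.addRange = Array.prototype.addRange || function (items) {
    for (var i = 0; i < items.length; i++) {
        this.push(items[i]);
    }
    return this;
};

var pets = ['cat'];
pets.addRange(['dog', 'fish']);
// pets is now ['cat', 'dog', 'fish']
```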

Simple javascript instrumentation

OK, so I have to look into a knockout binding to figure out why it's pausing at times (taking too long).  In cases like this, I like to instrument routines: essentially, log the time each routine took to execute so I can find the culprit more quickly.  In javascript (leveraging the ever-awesome underscore library), this is a simple task.  We can rip through all the functions on an object and wrap them so that we can intercept those calls.  Within the interception workflow, we log the start time, run the original call, then calculate the difference between the start and now and hand that back to the consumer.  Here's the entire simple implementation of this instrumentation:

var MyCompany = (function (kernel, $) {
    var items = [];
    var public = {};
    public.start = function () {
        items.push(new Date());
    };
    public.stop = function () {
        var now = new Date();
        var item = items.pop();
        return now - item;
    };
    var wrap = function (context, original, handler, funcName) {
        return function () {
            public.start();
            var result = original.apply(context, arguments);
            handler(public.stop(), funcName);
            return result;
        };
    };
    public.instrument = function (target, handler, suppliedFunctionName) {
        // Default handler matches the (invocationTime, funcName) order used by wrap
        handler = handler || function (invocationTime, funcName) {
            console.log('Time to execute "' + funcName + '": ' + invocationTime);
        };
        if (_.isFunction(target)) {
            return wrap(target, target, handler, suppliedFunctionName || '(Not specified)');
        } else {
            _.each(_.functions(target), function (funcName) {
                target[funcName] = wrap(target, target[funcName], handler, funcName);
            });
        }
    };
    kernel.instrumentation = public;
    return kernel;
})(MyCompany || {}, jQuery);

Here’s a usage example.  First, we’ll create a couple helpers for writing to the window and simulating long-running code:

var writeIt = function (message) {
  document.write(message + '<br/>');
};
var sleep = function (milliSeconds) {
  var startTime = new Date().getTime();
  while (new Date().getTime() < startTime + milliSeconds);
};
 
Then here’s our object we’ll instrument shortly:

var person = {
  firstName: 'jason',
  lastName: 'harper',
  sayName: function () {
    sleep(1000);
    writeIt(this.firstName + ' ' + this.lastName);
  }
};


Figure out what you want to do when the instrumentation informs you a method has finished:

var callback = function (invocationTime, funcName) {
  writeIt('Time to execute "' + funcName + '": ' + invocationTime);
};

Now go ahead and instrument your object and call the method as normal:

MyCompany.instrumentation.instrument(person, callback);
person.sayName();

You can also instrument just a function:

var sayHello = function () {
    sleep(400);
    writeIt('uh… hello?');
};
sayHello = MyCompany.instrumentation.instrument(sayHello, callback, 'sayHello');
sayHello();

 

Here’s a complete bin that shows this fully working.

Javascript Memoization with underscore

If you’ve not discovered the handy use of Memoization, I encourage you to look into it and take advantage. 

My Use case

So, I was generating a large UI on the client using knockout.  I had pre-loaded some "lookup data" (imagine id/name pairs in memory) that I'd repeatedly look up as I built a grid of sorts.  Imagine having to do that lookup hundreds or thousands of times for only a handful of unique ids, repeating the same calls a LOT!  This is where memoization shines.

The awesome underscore library has a simple implementation ready to use.  Here's a simple bin which demonstrates the usage (note I put my own hasher in there, as the default hasher only keys off the first parameter to your routine, which, as you'll see, is the same in my case for all 4 calls).
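To show why the custom hasher matters, here's a plain-javascript memoize sketch mirroring the _.memoize(fn, hasher) shape (my own illustrative implementation, not underscore's source):

```javascript
// Memoize sketch: cache results by a hash of the arguments. The
// default hasher keys off the first argument only - the pitfall
// noted above when later arguments differ between calls.
function memoize(fn, hasher) {
    var cache = {};
    hasher = hasher || function (key) { return '' + key; };
    return function () {
        var key = hasher.apply(this, arguments);
        if (!(key in cache)) {
            cache[key] = fn.apply(this, arguments);
        }
        return cache[key];
    };
}

var calls = 0;
var lookup = memoize(function (id, type) {
    calls++;
    return type + ':' + id;
}, function (id, type) { return id + '|' + type; }); // hash both args

lookup(1, 'name');
lookup(1, 'name'); // served from cache
lookup(1, 'code'); // different hash, so the function runs again
// calls is 2
```

With the default first-argument hasher, lookup(1, 'name') and lookup(1, 'code') would wrongly share a cache entry; hashing both arguments keeps them distinct.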

Bin or jsFiddle

So I've been a big fan of JSFiddle for a while now. However, Rex has continued to sing JSBin's praises and, having seen its latest incarnation, I'm starting to consider that one instead. I especially love the auto-run feature.

I think jsFiddle is still considerably more popular, but I also find the site's performance sucks sometimes.  Hm, perhaps a turning point?