Sunday, February 14, 2016

JavaScript : measuring performance

I recently learned a couple great tricks for measuring performance in JavaScript. There's always the profiler in the browser, but that's a bit verbose if you need to just A/B two ways of doing something. The following techniques are great lightweight approaches that you can use when writing or performance tuning some code.


1. performance.now()

Basically it returns NOW as in RIGHT NOW (a high-resolution timestamp in milliseconds), which you can capture in a var for later use. Here's a basic pattern for use:

var s = performance.now();
//do something
console.log(performance.now() - s);

This will log the time "do something" took to complete, in ms.

performance is a property of the window object, so it is NOT available in Node.js. You can use the following in that environment:

2. console.time("key"); console.timeEnd("key");

use it like this:

console.time("key");
//do something
console.timeEnd("key");

depending on the js engine, you'll get an output like:

"key" 3.002 ms
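In Node specifically, process.hrtime() gives even finer resolution than console.time. Here's a sketch (the loop is just a stand-in workload):

```javascript
// process.hrtime() returns [seconds, nanoseconds] relative to an arbitrary
// past time, which makes it suitable for measuring intervals.
var start = process.hrtime();

// do something (a stand-in workload)
for (var i = 0, sum = 0; i < 1e6; i++) { sum += i; }

var diff = process.hrtime(start);        // elapsed [seconds, nanoseconds]
var ms = diff[0] * 1e3 + diff[1] / 1e6;  // convert to milliseconds
console.log("elapsed: " + ms.toFixed(3) + " ms");
```

The same capture-then-subtract pattern as performance.now(), just with a tuple instead of a single number.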

If you want to measure more than just execution time (memory usage, I/O, processor use), these quick-and-dirty functions won't get you there. Look for a future post on those topics...

Monday, February 8, 2016

Credit Card EMV Security Chip

I was at the checkout yesterday and they started using the new chip reader. First of all, I'm so glad this is "more convenient" than the old swipe: you have to jam the card just a bit further into the machine or it won't take. Not only is it less convenient, but it's still only marginally more secure.

You've heard the saying "Something you have, something you know", right? It's about multifactor authentication, that added layer of security which means someone can't gain access to the secured entity just by having something. In the case of the credit card, that second factor is missing. The clerks can check your ID, but most often they don't. So consider that for in-person transactions there is no authentication other than the fact that you have the card. Something you have.
There is some hope for the future, though, since eventually the chip-n-sign cards may be converted to chip-n-pin cards. In my experience you don't have to sign for a lot of transactions, only those over certain amounts, or even at a place you visit often. From what I can tell, the system doesn't really authenticate against the signature anyway; it's just some extra work for the consumer, perhaps another illusion of security.

Credit cards are easily dropped, lost, or stolen (by someone you know, or pickpocketed). Most of the time, a stolen card can just be plugged in and used without anyone knowing, since there is usually no ID check.
According to one industry site, with its "nothing to see here, go back into your homes" message, it's not only going to cost over $16 billion to switch to chip-n-* cards, but merchants also face an extortion-style choice: purchase the POS devices that read the chips, or else pay for the fraud. Ultimately the costs will be passed down to consumers and/or investors. The site claims chip-n-pin cards reduced fraud in Europe, but by how much it didn't say.

Any measure that improves security brings increased inconvenience along with a redoubled effort to hack the new thing. So two things will result: added inconvenience for consumers, and the thing will eventually be hacked anyway. I would propose using a thumbprint reader, but it's very likely that would get hacked too. So let's just stick to convenience at scale and focus on making thieves pay for their crimes instead of everyone else.

Friday, January 29, 2016

How To Write More Maintainable BDD Tests

Given we are writing BDD style requirements
And we are using cucumber syntax
And we want to be able to respond to changes
When we write our requirements
Then we should write them in a way that requires as few changes as possible when requirements change
And we should minimize the number of tests that are impacted by changes in requirements

We can write such tests in the following way:

Feature: Say Hello
User Story: As a marketer, I want the system to greet users by name when they log in so that the system feels more user friendly.

Given a user with the name "Bob"
When Bob logs in
Then the system should say "Hello, Bob!"

Since this is a simple case, it is sufficient to write the test in this way.

For a more extensive use case that depends on bits and pieces of data from many sources, with complex business rules, it would be better to define the test cases in the following way:

Feature: Greet User
User Story: As a marketer, I want the system to provide customized greetings to users based on their purchasing habits so that the users will be more likely to increase their overall purchases.
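Such a feature might be expressed with a scenario outline, so that when a business rule changes you edit a data row rather than rewriting steps. The tiers and greetings below are illustrative assumptions:

```gherkin
Scenario Outline: greeting based on purchasing habits
Given a user named "<name>" with purchase tier "<tier>"
When the user logs in
Then the system should say "<greeting>"

Examples:
| name | tier     | greeting                            |
| Bob  | frequent | Hello, Bob! Thanks for coming back. |
| Sue  | new      | Welcome, Sue!                       |
```

This keeps the step definitions stable; a new tier or a reworded greeting is just another row, not another scenario.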

Wednesday, January 27, 2016

Lean Change

I've recently caught wind of Lean Change, basically a way to achieve continuous improvement. It seems the key is regular, open communication. I was recently in the hot seat representing our team of five at a QA event put on by the CQAA (Chicago Quality Assurance Association). The lab exercise was to put together an elevator pitch to sell BDD to an executive (for some reason we chose an executive). So there I was, as a developer, "pitching" to the event speaker acting as an executive.

We thought we'd try to sell on cost reduction, reduced time to market, improved quality, and "Google does it, so should we." I did my best but was met with resistance and the proposition that QAs and BAs were not doing their jobs and/or would not be needed in the new world order of BDD. Since I was a developer in a room full of QA engineers, I jokingly confessed that we would no longer need QA engineers if we used BDD.

This was basically a sweeping-change approach, and the pitch was a hard sell. I would not recommend selling directly to execs in most cases. Follow Lean Change: get folks involved in the change. Change a little at a time, but don't make anyone feel like they are the ones you want to change. QAs, don't do this to DEV, and DEV, don't do this to BAs. It's really about working together as a team to resolve the issues, not about imposing change on others.

Tuesday, January 26, 2016

QA Perspective

I gained a new perspective today after attending an event hosted by a QA group. The event was about how to influence change toward BDD (Behavior Driven Development). There was a remarkable turnout, with a packed house and some folks sitting in the back without tables. I (as a developer) was in a room with a bunch of QA folks. The topic was BDD, which, from my understanding so far, was of more interest to developers than to QA.

As it turns out, the focus from the QA standpoint was on changing the way development works. I've always viewed BDD as a fundamental change in the way BAs work. I guess it really goes to show that the ones downstream in any hand-off type of work are going to have the most interest in changing the patterns of those directly upstream.

This FURTHER reinforced a realization I had last week while doing a penny-passing experiment in Agile training. The experiment, which seeks to show the efficiency gains of iterative work over waterfall, goes like this -

setup - form groups of four, one person keeps time, the rest pass coins.

first run - pass a batch of coins one at a time from one person to the next. Each passer can't start passing until he/she has the whole batch. The timer runs from the start until the last person has received all the coins. This mimics waterfall; the coins represent deliverables.

second run - pass coins one at a time as soon as you have coins to pass. Time when the first coin reaches the end and when all coins reach the end. This is iterative.

In the waterfall process, everyone who is not passing coins is watching the coin passer. Clearly this takes longer and results in waste. In the iterative process, everyone is working with excitement and fervor to pass those coins along. Each is paying close attention to the upstream passer, because that's where the coins are coming from. How about that?

Think about the iterative approach for a bit. If we do this over and over again, and we can talk in between each attempt, then we can make incremental improvements to the process of passing coins and seriously improve our throughput over time. This is old hat by now in software. Yet it seems that many organizations are still struggling to work together to make improvements. Another antiquated notion is the hand-off; clearly the hand-off leads to waste. Imagine the same penny experiment, but where each coin from the pile is passed by the whole team at once.

Wednesday, January 20, 2016

C# to JavaScript: Namespacing

Namespacing is one of the most important things you can do when writing js. While namespaces are built-in constructs in C#, they are not an official part of the js language. That doesn't mean you can't or shouldn't use them; on the contrary. In this post, I'm going to show you how to create namespaces and how to add your code to them, not only to avoid polluting the global context, but also to avoid potential collisions with variables and functions from other js libraries.

If you are foggy on what exactly a namespace is - my definition is that it's a logical container used to provide identity and scope to related code. If a class "Languages" in C# is defined in a namespace "TDLL.Countries", the class would be accessible from code in other namespaces via TDLL.Countries.Languages, or simply Languages if the code file has a "using TDLL.Countries" statement outside the class definition. Now that a baseline has been established, let's take a look at some common namespaces in js.

jQuery is definitely one of those ubiquitous libraries. I'd say it's the most commonly used js library in the world; I don't have the numbers, but I bet you've used it in some capacity. The jQuery library has a namespace, 'jQuery', and that namespace has an alias, '$'.

Click each "Say Hello" button in the jsbin demo below to see that both jQuery and $ are the same.

jQuery Namespace demo on jsbin

jQuery has some interesting documentation on the global jQuery namespace and the $ alias.

And this snippet from the jQuery source code on GitHub is exactly how the global namespaces are assigned:

define( [...], function( jQuery ) {
    return ( window.jQuery = window.$ = jQuery );
} );

In this case "define" is a function from RequireJS, which is out of scope for this post. There's a lot that goes into the "jQuery" arg that is passed to the define callback; it is built up in the files omitted from the sample shown, then passed into define at build time to eventually add the function to the window object (the global). jQuery gives you an option to release '$' in case there is a namespace collision with another library, which is also a good practice if you are aliasing.
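The release mechanism itself is simple enough to sketch in plain js. This is an illustration of the idea, not jQuery's actual implementation, and the names are made up:

```javascript
// A library grabs a short global alias but remembers the previous owner,
// so it can hand the alias back on request.
var root = typeof window !== "undefined" ? window : globalThis;

var previousDollar = root.$;      // whatever owned $ before we loaded
var myLib = { name: "myLib" };
root.$ = myLib;                   // claim the alias

myLib.noConflict = function () {
  root.$ = previousDollar;        // release $ back to its prior owner
  return myLib;                   // callers keep a direct reference
};

var lib = myLib.noConflict();
console.log(root.$ === previousDollar); // true: the alias is released
console.log(lib === myLib);             // true: the library is still usable
```

jQuery's own version of this is jQuery.noConflict(), which restores $ in exactly this spirit.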

Another way to create a namespace is to create the namespace first, then add everything to the namespace.

window.TDLL = {};
TDLL.learnSomething = function(){ ... };

Also you could do it using object notation like this:

window.TDLL = {
    learnSomething : function(){ ... }
};

One of the challenges in defining a namespace in either of those ways is that if the namespace is already defined, you'd basically overwrite it in subsequently loaded files. To avoid that you can do the following:

window.TDLL = window.TDLL || {};
TDLL.everyDay = function(){ ... };

This neat trick relies on how OR (||) works in js: it returns the first operand if it is truthy, otherwise the second. So if window.TDLL is undefined (falsy), the expression evaluates to the new empty object; if the namespace already exists, it is kept as-is.
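A quick way to convince yourself the guard is safe across multiple files is to inline the two "files" in one place. The root shim here is only so the sketch also runs outside a browser:

```javascript
var root = typeof window !== "undefined" ? window : globalThis;

// file1.js
root.TDLL = root.TDLL || {};
root.TDLL.learnSomething = function () { return "learned"; };

// file2.js (loaded later): the guard sees TDLL exists and leaves it alone
root.TDLL = root.TDLL || {};
root.TDLL.everyDay = function () { return "every day"; };

console.log(root.TDLL.learnSomething()); // "learned": file1's work survives
console.log(root.TDLL.everyDay());       // "every day"
```

No matter which file loads first, neither wipes out the other's additions.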

Another way to do this is to use jQuery to merge your new object with the existing one.

window.TDLL = $.extend(window.TDLL, {
    everyDay : function(){ ... }
});

jQuery Merge Demo on jsbin

Using your namespace is simple (once you know it has been loaded, anyway):

TDLL.everyDay();
There are ways to ensure that your namespace has been loaded (by packaging all of your javascript together), but those are beyond the topic of this post and will be a topic for a future post.

Tuesday, January 19, 2016

JavaScript Logical NOT Operator

There are some interesting things to note about the logical NOT operator in JavaScript. What I've found is that if you aren't aware of these, you may find yourself coding up some defects without knowing it. Also, being aware of some important usages may improve your code by reducing complexity.

If you are coming from a strongly typed language, or are just getting started in JavaScript from ground zero, you may be interested to know that the NOT operator (!) has different behaviors depending on the type of value the variable holds at runtime.

For example, you may have a statement:

var x = !y;

While x will be assigned a boolean value when this statement is executed, y could be any type (string, number, object, NaN, undefined, null, function), and how each is negated depends on the type.

Plus, the process of negation involves first converting the value of the variable to boolean, then negating the result.

When converting to boolean, these values are always false: null, undefined, and NaN.

Objects and functions are always true.

But strings and numbers depend on the value: 0 is false and an empty string ("" or '') is false, while any other number or string is true.

An array is an object so it is always true even if it is empty.

{} is also true.

How much fun is that!?
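These rules are easy to verify for yourself; this snippet runs in any browser console or in Node:

```javascript
// Falsy values: ! turns them into true
console.log(!null);         // true
console.log(!undefined);    // true
console.log(!NaN);          // true
console.log(!0);            // true
console.log(!"");           // true

// Truthy values: ! turns them into false
console.log(!"hello");      // false
console.log(!42);           // false
console.log(![]);           // false (an empty array is still an object)
console.log(!{});           // false
console.log(!function(){}); // false
```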

Coming from C#, it would be great to write:


instead of


But when you can expect x to be of any type, it would get much more complex.

In js, if you are in a position where you cannot use a guard but need some logic to check for a value, you can do the following:

if (!!x) {
    //do something useful
}

The underlying definition, according to ECMA-262 v5.1, is that !x is the negation of ToBoolean(GetValue(x)),

where ToBoolean and GetValue are both abstract operations defined in the spec (GetValue being the more complex of the two).

Even though the abstract ToBoolean does not specifically cover the function type or the array type (both are object types), I listed those above for clarity.

Since there is no ToBoolean function in the JavaScript language, but there is a Boolean function (not to be confused with the constructor form, new Boolean), you'd be better off writing Boolean(x) instead of !!x for the sake of clarity AND efficiency. Why have the runtime perform ToBoolean twice?

The ECMA standard defines the Boolean(x) function as simply calling the abstract ToBoolean(value).
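A quick check that the two spellings agree for every kind of value mentioned above (Boolean here is the standard global function, nothing custom):

```javascript
var samples = [0, 1, -1, "", "a", null, undefined, NaN, {}, [], function () {}];

var allAgree = samples.every(function (v) {
  return Boolean(v) === !!v; // both routes go through the abstract ToBoolean
});

console.log(allAgree); // true
```

So the choice between Boolean(x) and !!x is purely one of style and readability; the results are identical.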