Tuesday, November 29, 2016

Making the Most of Memory


Always so much to learn. For example, did you know that accessing your computer's main memory is MANY times slower than accessing the CPU caches? What does this mean for how we structure our programs and architectures? If we care at all about application performance, we want to take full advantage of the L1 and L2 caches. What this means is structuring components and code so that the caches can actually be used.


Even if we go only to the DRAM (and not disk), a DRAM cell only holds its charge for tens of milliseconds; the article cites a 64 ms refresh window, so the memory controller has to keep refreshing every cell, and those refresh cycles steal time and bandwidth from the real work your program wants to do.


All of this has implications for the design of your objects, structs, etc. The SRAM in the L1 and L2 caches is limited in size due to power consumption and cost. Given this, it seems logical that keeping your data bags as small as possible would be a great performance advantage. Additionally, keeping loops tight and working on data in batches that fit in cache could also help.
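To make that concrete, here is a minimal C# sketch of the idea, keeping data small and contiguous so several items fit in one cache line and sequential loops stay prefetch-friendly. The struct and the 64-byte line size are illustrative assumptions, not numbers from the article:

struct Point3 { public float X, Y, Z; }   // 12 bytes: several of these fit in a typical 64-byte cache line

static float SumX(Point3[] points)
{
    var sum = 0f;
    for (var i = 0; i < points.Length; i++)
        sum += points[i].X;               // sequential access over a contiguous array keeps the caches warm
    return sum;
}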


Architecturally speaking, since instructions are also cached in L1, you'd want to keep the same kind of operations (looking at you, data access) on the same CPU core so its L1 stays warm. So much to learn...


Read about it in depth at https://lwn.net/Articles/250967/

Tuesday, November 15, 2016

Bugs and Defects, What Issues Really Mean

In episode 3 of season 1 of Mr Robot http://m.imdb.com/title/tt4730002/ there's a much-quoted metaphor about bugs. The lead character (an elite hacker) narrates that most people think finding a bug is about fixing the bug. He goes on to say that it's really about finding the flaw in thinking. Sounds a little like root cause analysis.


The first bug ever was called a bug because it literally was one: a moth stuck in a relay, certainly not working as designed. Read more about that at http://www.computerhistory.org/tdih/September/9/


Jimmy Bogard provided a clear interpretation of what a bug is (and isn't) and how to deal with it. He and his colleagues break issues down into Bugs, Defects and Stories. They "stop the line" to fix Bugs (Bugs are showstoppers), while user feedback gets classified as issues. They don't believe in bug tracking in their shop.


Think of it this way-if the first Bug ever were not removed immediately, how would that system be expected to work?


Seems to me that Bugs are mostly caused by a change in the environment which was not accounted for in the design (or a complete failure to verify implementation against acceptance criteria). Defects, on the other hand, come down to a breakdown in understanding - a Defect is when the software meets the acceptance criteria, but doesn't do what it needs to do.


There's a saying attributed to Jeffrey Palermo - "a process that produced defects is itself defective."


When Defects and Bugs creep into a system, sure, we can just go ahead and fix 'em, move on and push on. But sometimes, and especially if this happens a lot, it's time to fix the underlying problems that are causing the Issues (Defects and Bugs): process re-engineering, training, the environment (are we on a stable platform?), deeper analysis, etc.






That first bug ever? Perhaps it led to a cleaner environment when running programs. Maybe it led to innovations in storage mechanisms. Maybe it led to nothing. Certainly, if it was a recurring problem, something would've been done.

Tuesday, November 1, 2016

A Spec File Should...

When you write a spec for an interface (the public-facing functions/methods of a software module/API) in the same language, and that spec is a runnable test that exercises the module, it becomes a fantastic way to show others how the module should work and how to use the interface. So the spec should be written in a way that conveys these things to the reader.


Imagine a spec written for a users API. The API has a getUsers method, and the spec carries the following description:


"it should all active employees"
and the API is exercised like this:


getUsers(true);


How safe is it to assume that the first input parameter is isActive? Or could it be null? Or some other flag?


To remove ambiguity in the spec, ALWAYS pass named variables...


includeInactive = false;
getUsers(includeInactive);


Yeah, yeah, most editors and IDEs show the parameter name somehow. That's not always going to be accurate, or clear, or even built yet (write the spec first, anyone?).
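For what it's worth, here's a C#-flavored sketch of the same idea using a named argument. IUserService and FakeUserService are hypothetical stand-ins for whatever the module under test actually is:

using Xunit;

public class UserServiceSpec
{
    private readonly IUserService userService = new FakeUserService(); // hypothetical test double

    [Fact]
    public void GetUsers_should_return_all_active_employees()
    {
        var users = userService.GetUsers(includeInactive: false);     // the named argument removes the ambiguity
        Assert.All(users, u => Assert.True(u.IsActive));
    }
}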

Tuesday, October 18, 2016

Polymorphism Can Save Time and Make Code More Fun

After a day of working with some rather procedural code and shoehorning it into something a bit more OO, I've concluded that this code base would've benefitted from some SOLID OOP early on.


This code base takes a set of data in a tree-like structure and generates an HTML tree view based on certain attributes of each data point. The tree views are not basic like a file tree, but build on that idea with more features. They also need to display differently depending on where they're being rendered. User authorization for each data point is an additional consideration.


Reasoning about this, we can see how polymorphism can benefit us here. For instance (pun intended), the basic flow of walking the tree of data would not change, except for the fact that some business logic applies to whether a node should be added or not - could be several reasons a node would not appear in the view (e.g., security, choice, etc). Filters could be used to achieve this depending on requirements. Or a class which implements filtering. Either way we slice it, the tree of data needs to be walked.


While walking the tree, whatever is rendering the tree view can either construct it on the go, or construct/serialize in one go. For the latter choice, it can hold a collection which can be added to, or receive a collection. That collection would closely model the data tree, but with whatever visibility rules applied to it by any business logic classes.


Whatever is walking the data tree should then take in business logic as a dependency. The polymorphism part is that different types of nodes can be added to the tree view depending on the result of that business logic. I'm thinking of an interface that takes in the data and returns a TreeViewNode. TreeViewNodes are FolderNodes or FileNodes (leaf nodes). But have additional properties which control other aspects of the nodes.
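As a rough C# sketch of that interface (the names here are illustrative, and DataNode is assumed to be whatever your data tree is made of):

using System.Collections.Generic;

public interface INodeMapper
{
    // Applies the business logic and decides how (or whether) a data node appears in the view.
    TreeViewNode MapNode(DataNode node);
}

public abstract class TreeViewNode
{
    public string Label { get; set; }
    public bool Visible { get; set; }
}

public class FolderNode : TreeViewNode
{
    public List<TreeViewNode> Children { get; } = new List<TreeViewNode>();
}

public class FileNode : TreeViewNode { }   // leaf node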


Hey, sounds like we can do this better than procedural with functional programming too!


(buildview (map transformToViewDataList (filter context dataList)))




But I digress...with OO, it would have different implementations of the interface for different business cases, but the same recursion. Heck, you could even have different implementations of the tree recursion if needed -


var nodeMapperFactory = GetFactory(context);
var rootNode = GetRootNode(data);
treeWalker.WalkTree(rootNode, nodeMapperFactory);


// and inside WalkTree(currentDataNode, nodeMapperFactory):
var nodeMapper = nodeMapperFactory.Get(currentDataNode);
var viewNode = nodeMapper.MapNode(currentDataNode);
treeViewBuilder.Add(viewNode);
foreach (var childNode in currentDataNode.Children)
    WalkTree(childNode, nodeMapperFactory);


done.

Monday, October 17, 2016

Quick Tip: Study OSS

OSS repositories abound. Just pick your favorite bits of open-source and study the code that makes it all go! There's a lot to be learned by studying the source code. Even more by pulling it local and running it. Make a few changes and experiment with it. Most will come with automated tests. Go ahead and make up a few of your own.


They've got all the info out there on how to get the source code locally, and a bunch of other info here if you need it:
https://help.github.com/articles/good-resources-for-learning-git-and-github/

Tuesday, October 11, 2016

Getting Political

This is not a political blog, it's a tech blog. However, I watched the presidential "debate" last Sunday and I'm feeling it this week. What am I feeling? I'm feeling a sense of ugh - like many who have described it as an ugly, dirty feeling. But why?


I'm trying to sort out why this feeling was put on all of us - why do we feel this way after this debate? I have reports from one coworker whose husband is a political junkie - he was OK with turning it off after five minutes. I assured her that it didn't get any better.


I want to start with the Trump campaign, since it's an interesting campaign marked primarily by the strategy of making the OTHER candidates unlikable. It's a smear campaign, plain and simple. There's an unfortunate side effect to this strategy: the rest of us (meaning Americans) are getting smeared right along with the others.


With comments that allude to how bad things are, we all get dragged through the mud. Sure, there are some problems...everybody's got their problems and it's nothing new. What I haven't seen is a Trump who really says "I know how to fix it". Where's the "champion for change" attitude? He has more of a blame game..."it's all wrong and it's their fault". A whole lot of it is shameful whining, frankly.




Can't stand to see my country that I love and all of her people dragged through the mud and put down for Trump's political gain. I wanted to put a finger on that sort of negativity we've seen in the #debate and see what kind of person behaves that way. I found one article https://www.davidwolfe.com/12-traits-negative-people-avoid/
which lists out some traits such as:


They never apologize.
Blame others for their mistakes.
Hope to see others fail.
and 9 other unpleasant things.


I found these aligning right up with what we've seen from him over the years.


Then there was this blurb on pessimism https://www.psychologytoday.com/basics/pessimism which said pessimistic people make better leaders. I do think you can't have a blindly optimistic leader, else they'd get taken in by everything. But I can't help thinking too much pessimism makes for bad morale. And when morale is bad, people aren't going to get moving to get things done. I've played enough Civilization games to know how that works - eventually cruel lords incite revolt. It takes a healthy mix of tough-but-fair to be a good leader.


Now Clinton...she's a pro politician. She's running a well-devised campaign. Giving measured answers. Skirting tough questions and always going back to her agenda - I'm more electable. Laughing off whatever is thrown at her to try to break her. Always veering back toward her agenda and away from danger. Answer the fucking question will ya...


I do believe she is playing for the certain votes, being too nice while she's at it. She did go after Trump a bit by saying he's not fit to be president. Maybe let those allegations come out in other ways a bit more. Just a bit weak on defending past actions (hers and others). Could use a bit more Joe Biden attitude...that "I've been there and done that, you peon" sort of attitude..."and another thing, you don't know shit so quit acting like you do"...you know, but without actually saying it. Now that's class!


The private email thing...makes you wonder what was there. I'm not an advocate for invasion of privacy. She was dealing with a whole lot of terrorist countries - could be national secrets, could be shady deals, could be dirty emails to and from Bill, could be campaign strategy. Don't mix work and private emails people! I hear this all the time where I work, IT tells everyone not to do this. Do people do it anyways? Yes they do. Should they? No they shouldn't.


All of this and there wasn't really much talk about the issues, on which the parties are SO polarized. I don't think it's sensible to push for so much polarization for political gain (pandering for votes). Nothing good will ever get done that way. But since campaigns are run on polarization, that's what we're going to have - we get a whole lot of black/white on issues that really have a whole lot of grey area. Guns, foreign affairs, personal liberties, national security, taxes, GDP, debt; all are highly complex issues, and the political campaigns try to box them up as simple black and white in order to manipulate the public into voting for them.


Just take the gun issue in and of itself - fear of change, even responsible change, to gun laws is so polarized as an infringement of rights that we can't have sensible laws in place to keep gun sales within the law.


We can't balance a budget. Can't appoint a judge to the highest and most respected court in the world. All because of this black-and-white attitude the parties are using to control votes. What if you are one way on some issues and the other way on others? God forbid we should think for ourselves without ganging up along party lines...we might actually find ways to solve the issues. Who gains from that and who loses?


So both candidates kind of suck. They've both made mistakes. One makes Americans and men look bad, the other has some skeletons in her closet. Either way, you better believe every other candidate is shaking in their shoes hoping their own secrets aren't getting dredged up from the depths. People make mistakes, it just happens. Good people do bad things, bad people do good things. It's not all black and white. What's important is how we move forward and how we learn from past mistakes. Part of me can't wait to see what kind of crazy shit gets dredged up next, but the other part wants to see a much more positive run for the White House - one with far fewer casualties.

Monday, September 19, 2016

Is Light Speed Travel Possible?

Off topic for a software blog, but just a quick thought - if matter can be converted into energy and reconstituted back into the exact same form of matter again (all particles arranged as they were before), then yes...that would be one way of moving mass at light speed. Ah...provided that the form of energy could be moved at light speed. Might take a while to convert matter to energy and back again, though.


It's kind of like a chicken-and-egg problem, isn't it? What converts the energy back into matter? Wouldn't the transducer need to be in position at the destination first (e.g., light-years away)? Or perhaps there is some way to remotely manipulate particles from afar so that they can be rearranged into a predefined arrangement, and only the energy would be sent across. That would be more or less copying the matter rather than actually moving it via its energy form - cloning it rather than transporting it.


Bah, can't help myself, let's do a software thing...a 64-bit number (a long) tops out around 9.2 x 10^18, or about 1.8 x 10^19 unsigned; that's a huge count (though nowhere near the number of atoms in the Universe), and we can store those numbers all day long. How many numbers, bits, etc. would it take to store all the atoms in a frog? Assuming we had the necessary transducers to take in a signal and reconstruct the matter, what would it take to beam up a frog, Scotty? Assume also that we can digitally encode matter and send it to the other side. Hint: apply the problem-solving technique from the last post to this problem.
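Here's a back-of-envelope sketch in C#, assuming (purely for illustration) a 25 g frog made entirely of water; encoding each atom's type and position, rather than just counting them, would of course take astronomically more:

using System;

var grams = 25.0;                              // assumed frog mass
var molarMass = 18.0;                          // g/mol for water
var avogadro = 6.022e23;                       // molecules per mole
var molecules = grams / molarMass * avogadro;  // ~8.4e23 molecules
var atoms = molecules * 3;                     // H2O has 3 atoms: ~2.5e24 atoms
var bitsForCount = Math.Log(atoms, 2);         // ~81 bits just to hold the count
Console.WriteLine($"~{atoms:E1} atoms, ~{bitsForCount:F0} bits to hold the count");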


Someone should make a GitHub project out of this...oh wait, I'm on it...here it is.

How to solve any problem in 5 steps.




1. Understand the problem...for real. This means establishing criteria for success as well.
2. Use understanding of problem to search for solutions which are acceptable to those impacted.
3. Weigh solutions to find optimal candidates. This step may involve testing feasibility.
4. Implement optimal solution.
5. Measure for success.
If success, end; else, repeat.


All steps depend on previous steps. Therefore, the most critical step is step 1. Step 4 may be somewhat arbitrary depending on the outcome of step 3. That is to say, there may not be an optimal solution per se, but a number of similar pseudo-optimal solutions - in that case pick one or start more than one and let the best emerge (if possible). After step 3, there may be no feasible solution. In that case end.

Why Learn Lisps?

In learning Clojure, I came across this fantastic series of tutorials https://aphyr.com/posts/305-clojure-from-the-ground-up-macros


This specific topic on macros explains a bit about expression trees and how macros in Clojure and some other Lisps can take advantage of them to rewrite code. This evoked a memory of an MIT OCW class video I watched years ago which was taught using the Scheme flavor of Lisp. Since then, MIT switched to Python. But this made me think about how different it must be to start learning to program by directly writing expression trees. Isn't this what compilers create when they compile languages like Java, C, C#, etc.? So...learning Lisp languages like Clojure will give you a sense of what is actually happening when those languages are compiled, and will give you a base from which to write a compiler (if you haven't done those things already). With so many CS programs centered on pragmatism these days (e.g., programming), it may be worthwhile for many CS grads and self-learners to pick up some Clojure with this perspective in mind.

Thursday, September 8, 2016

What We Can Learn About Generalization From Clojure

Clojure is a functional programming language in the Lisp (LISt Processing) family, built on top of the Java runtime. There's a version built on JavaScript as well (ClojureScript). A great resource for learning Clojure can be found here.


The link to the Clojure resource goes to the third part of the lessons because I want to highlight how that particular walkthrough goes from something specific to something more general when it comes to the recursive function. It basically builds a map function from the ground up.


Hold that thought in mind and think about how software is built and how it can generally be improved over time. Software is "soft"; it is malleable. It is meant to be changed so that we humans can take something completely general ("hardware") and plug in some instructions, in a language that we understand, for the generic computer to perform specific operations.


Generalizing functions/methods is the holy grail of software. Big idea, I know. But here's why I say that: when we generalize functions they become reusable, and when they are reusable, we don't have to keep rewriting them. If we never strove for re-use, everything could be hardware - hardcoded ICs that were task specific - and we wouldn't have very good technology at all. That was the state of things at one time. But since we have software, we have re-use.
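A tiny C# illustration of that climb from specific to general (essentially re-deriving LINQ's Select); the names are mine, for illustration only:

using System;
using System.Collections.Generic;

static class Transforms
{
    // Specific: double every number.
    public static List<int> DoubleAll(List<int> numbers)
    {
        var result = new List<int>();
        foreach (var n in numbers) result.Add(n * 2);
        return result;
    }

    // General: apply ANY transformation to ANY list - the reusable version.
    public static List<TOut> Map<TIn, TOut>(List<TIn> items, Func<TIn, TOut> transform)
    {
        var result = new List<TOut>();
        foreach (var item in items) result.Add(transform(item));
        return result;
    }
}

// Transforms.Map(numbers, n => n * 2) now covers the specific case and countless others.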


But we can't generalize the world. We wouldn't choose to use different languages if we could generalize indefinitely. Or the generalization would be so unwieldy that it would become a language of its own. But isn't that what a software language is? An ultimate generalization? The point is, there's a limit to generalization. It's situational most of the time and depends on the use cases.

Tuesday, September 6, 2016

Coordinator Pattern for Distributed Enterprise Applications

When building software for the enterprise - particularly where data access to one or more sources will be shared across multiple application domains, and where distribution and reuse through distribution are virtues - it is helpful to use a Coordinator as the authority for the unit of work. A Coordinator is a service which accepts a single incoming command and issues one or many commands to resources (data access services). The resources are separated into read and write resources for each source. The read resources may be used by the presentation layer or the Coordination Layer, but the write resources (which specialize in executing commands) may only be used by the Coordination Layer.
A Coordinator is similar to a Unit of Work pattern. The key difference is that it must be hosted as a service and use data access as services.
The Coordinator owns the transaction. It must begin and either rollback or commit the transaction. The transaction may be distributed across many data sources (throughout the writer resources). The writer resources must be designed in a way that handles concurrency. For example, if using an ORM you may need to load then update or insert any specific data entities involved in the transaction. These writer resources should handle the entire transaction for the data source which it wraps (typically one database). The Coordinator should send commands to any resources and raise any events necessary upon executing the commands.
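A hedged sketch of what a Coordinator might look like in C#. All of the type names here (PlaceOrderCommand, IOrderWriterResource, IInventoryWriterResource) are invented for illustration; the point is the shape - one incoming command, several commands to writer resources, one owned transaction:

using System.Transactions;

public class PlaceOrderCoordinator
{
    private readonly IOrderWriterResource orders;
    private readonly IInventoryWriterResource inventory;

    public PlaceOrderCoordinator(IOrderWriterResource orders, IInventoryWriterResource inventory)
    {
        this.orders = orders;
        this.inventory = inventory;
    }

    public void Handle(PlaceOrderCommand command)
    {
        // The Coordinator begins the (possibly distributed) transaction and commits or rolls back.
        using (var scope = new TransactionScope())
        {
            orders.CreateOrder(command);      // command to the orders writer resource
            inventory.ReserveStock(command);  // command to the inventory writer resource
            scope.Complete();                 // commit; disposing without Complete() rolls back
        }
        // ...then raise an OrderPlaced event for other Coordinators to respond to.
    }
}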


The Coordinator should listen for events (as in event based architecture) and respond to those events by issuing those specific commands to the resources which it consumes. Those resources may be shared amongst other Coordinators. Resource commands should be synchronous so that transactions can be used appropriately. The Coordinator is an architectural pattern which can be used to implement a Unit of Work pattern in a distributed, service-based architecture. Coordinators should be consumed asynchronously using event-driven mechanisms. Coordinators should respond to events raised by other Coordinators or by users via the presentation layer.


Through the use of Coordinators in conjunction with the other patterns expressed in this article, a distributed system can be built such that it is loosely coupled, highly cohesive and easily maintainable.


The system will be loosely coupled in that the communication between Coordinators occurs via events. Zero, one, or many Coordinators can respond to an event. Any Coordinator can be changed and (if hosted independently) redeployed without impacting any other Coordinator.


The level of cohesiveness depends on the scope built into each Coordinator. Each Coordinator could handle one Unit of Work; in that case each one would have a single service method exposed. That method might share a contract and differ only by location, in which case the method definition would need to describe an event and the Coordinator would need to load the event details it needs via resources - which means an Event Store would be needed. Otherwise, the payload would be a serialized packet of all of the data relevant to the event. This is a deeper topic and will require more in-depth exploration in order to weigh in completely on the interface mechanisms involved.


The maintainability is affected by the number of moving pieces, but also by having a standard mechanism of control over the various state changes involved in a complex system.

Tuesday, August 23, 2016

Thinking About Dynamic Forms Using Angular pt 2

In the last post I started thinking about some of the problems we encounter with using forms for inputting data into a system. There may be other ways to input data and a well-designed system will be capable of accepting input from other sources. However, the focus of this series is on using forms - specifically via a web browser.


Since a well-designed system will accept input in a platform agnostic way, this leaves us open to design the user interface in a number of ways. For purposes of this thought experiment I'm going to assume we have the capabilities in place to accept input into the system in some standard way - perhaps through a service or some RESTful api for example. It doesn't matter so long as we can stay focused on collecting data from the user and sending that data into the system.


One problem we may have is incremental changes. Users have experienced automatic saving and it's no longer a novelty - we can and should do this in order to meet user expectations.


Another problem we may face is logging changes to data. Additionally, we may face changes to the form itself. Let us also take into consideration a form in which a user can add fields as needed.


From what I've gathered so far, we can apply two commonly used technologies to this set of problems and come up with a solution that meets these needs. Here's what I have in mind:


Using Angular (or even your favorite SPA technology) and a data object with this interface: { template, data, metadata } we can store each form instance in a document database and load it in one go.


The template property holds several bits of info: the template text, version, source, form name, id, ..., anything else?


The data property holds just that - it's completely dynamic and goes with the template.


Metadata holds things like userId, action, date, ip, client device info, whatever else is relevant.


Any action the client takes - save, submit, delete...are saved as a new version of the document. Some document databases have this built-in. I'm thinking of one in particular which can even save files (js, html template, etc) along with the data.


In Angular 1 we can use directives to enable the forms. Dynamic directive creation should be achievable since it's all JS. All we really have to do is create components or directives server side and serve up the JS and template. The template is assigned to either the template or templateUrl property of the directive/component. I've become accustomed to components, so I'll prefer those, with '=' (two-way) bindings that allow them to update linked objects in the parent scope.


The data in the template would be interpolated using a standard controllerAs: 'vm' so that all data can be accessed in the templates via vm.property.


That leaves behaviors...not sure about these yet. I'm going to think a bit about this one...until next time it's a good place to start.

Thinking About How to Build Dynamic Forms with Angular

There are many times when we need to use forms to collect user data. Make no mistake, this is not where the app begins or ends, but only a small part of an app. There are several platforms, frameworks, etc. that we can use to create forms. The "classical" approach is to present a form which handles CRUD operations on a database, updating the tables directly. In some cases this may be an adequate approach, but in time it falls short of an optimal solution in many applications. There are several reasons for the shortcomings, and some inelegant solutions have been used to make up for them.


Often the form requirements can change over time. In the classical style, a developer makes some changes to the form and the database schema - sometimes altering existing data in the process by dropping or changing columns. The new Form is deployed and going forward we have our new data. But what about the historical data? Perhaps an archive is used, perhaps an additional database and app version. Three versions later, a monolithic app can get messy fast when supporting multiple versions. This is inelegant solution #1.


Sometimes it is beneficial to have dynamic forms, and several CMS solutions exist for such things. Not being overly familiar with the current CMS landscape, I'm certain my proposed Angular solution will be fitting of the CMS domain; however, I will continue regardless, since I believe we can one-up a CMS solution by enabling the user to add fields at their own discretion rather than admin only. Potential inelegant solution #2 - locking part of the solution into a specific platform.


Pre-emptive rebuttal to using the Angular platform and locking into that: I propose a more robust solution with no specific template engine; Angular is a fitting engine at the moment since the dynamic nature of JavaScript lends itself to generating arbitrary forms. Plus, the benefits of a document database backing will be clear once more details are presented. But enough about solutioning for now...let's continue defining the problem next post...I'm outta time ATM.

Monday, August 22, 2016

Decentralization of Business Tasks to Build in Better SOC

In the typical business organization, there are departments which specialize in specific business areas - HR, IT, Marketing, etc. To me this seems to be a benefit due to the specialized nature of each. There may be individuals in each department who have knowledge of and passion for another area, and there may be some crossover, but by and large the units are siloed in their work - especially at larger institutions. Since the structure of systems has a tendency to model the structure of the organizations that create them (see Conway's Law), and many systems that tend to the needs of these units will provide similar capabilities, it is up to the system designers at the enterprise to go against the natural tendency to design systems along the lines of business structure and instead design them along the lines of capabilities. There are several benefits to doing so, as described in this writing.
Reason 1: DRY - Don't Repeat Yourself. This is a mantra of software development. The practice of not repeating yourself is commonly held up as a time saver as well as a defect deterrent, especially when there is a change in requirements for a feature. Consider a number of systems that have users and a user admin feature. During the course of development, a considerable amount of time was spent gathering requirements, doing mockups, designing, implementing, and testing this feature in EACH system. And when a defect is found, or security requirements change, each system must be updated in turn. However, if DRY were applied at the enterprise level, there would be one user admin capability. That user admin would have behaviors designed to provide common functionality with respect to administering users - add/remove, assign role, change role, reset passwords, etc. The immediate benefit is that the features wouldn't have to be developed for each application. Another benefit is centralized administration of all applications for system admins. Additionally, this enables "one level up" features to be developed where users have centralized access to all their applications. One thing to consider with this design is the impact on individual applications due to changes to core features. Additionally, the system must be robust but flexible enough to allow extension by other systems. For example, a canned (packaged) solution may offer an API to its user administration; an idealized core user admin would be able to adapt to it so that it could also be administered centrally.
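As a sketch, the shared capability might expose something like the following interface (the name and members here are illustrative only, not a prescription):

public interface IUserAdministration
{
    void AddUser(string userName, string email);
    void RemoveUser(string userName);
    void AssignRole(string userName, string role);
    void ResetPassword(string userName);
}

Each consuming application, including packaged solutions that expose their own user APIs behind an adapter, would then administer users through this one capability.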


Reason 2: User Efficiency. Users are able to access actions, tasks and tools from a common location. Potential for more efficient interaction with technology.


Reason 3: Change. Centrally managed change. Changes happen in one place. Easier to find what is impacted by a change.

Tuesday, May 24, 2016

OnChatBot

Having recently delved into a world of conversational machines in the form of chat bots, I've learned a few things about that world. There are so many platforms with so many options to choose from, but there are some common threads.


One common thread is that there are different AI technologies that can plug into a chat bot. Two of those are language interpretation and psychological profiling. Language interpretation uses a series of mappings to determine the topic(s) of a user's statement. Psychological profiling uses data gathered from social media to guess what a person's personality traits are. Combine these with some other sources including a series of waterfall questions and you can tailor a bot to give personalized responses to the user.


Imagine having a bot app on your device or computer that monitors your activities and responds to your needs! Bot apps can be programmed to launch programs and run tasks. I'm not constructing a world where the bot takes commands and executes them, but a world where the bot truly learns your behaviors and serves your needs by executing tasks when you need them!


Do you open a certain program every time a certain person calls? Do you need to dial in to a meeting when it begins? Do you spend time on Facebook every morning? How about composing emails? I'm not suggesting chat bots should do all this for you, rather that the technologies behind chat bots can and should automate some of the most routine and mundane aspects of your life, so that rather than you being a slave to the machine (email, launching programs, dialing phones, scheduling appointments), the machines can be your servant and assistant.

Monday, May 2, 2016

JavaScript Fun: Method of Wrapping JQuery getJSON to Handle Errors to Avoid Duplicated Code

Here's a great way to wrap JQuery and other libs in your own wrapper so you don't have to repeat yourself, repeat yourself. It's pretty basic and simple for now.



JS Bin on jsbin.com

JavaScript Fun: how to merge two objects, no recursion yet...

I like using objects in JavaScript. This is a basic and easy way to merge two of them. A good example is when something takes in an options object.

JS Bin on jsbin.com

Saturday, April 23, 2016

5 Year Old and Recursive Algorithms

Just witnessed my son (who is 5 years old) programming a recursive algorithm (tail recursion) in a game called Light Bot on his iPad. Light Bot is a game where you line up a bunch of commands into a program that controls an on-screen robot.
The objective is to light up the blue squares on a board. There are commands to move forward, turn left, turn right, hop, light the square, and run a procedure. There are various layouts of squares, and numbers of commands and procedures that can be used to complete the challenges. Apparently he's made it to the challenges that require recursion in order to complete the levels. So proud!

Friday, April 15, 2016

Making Progress


Some recent thoughts on reward systems: 

The best approach is to have people choose their own benchmarks for rewards and their own rewards - sort of like personal goal setting. I watched a TED talk on the topic of rewards; the research it cited showed that rewards for performance in knowledge work do not get better results - of course we all like rewards anyway. If the reward system is more intrinsic, as the research suggests, then I might be inclined to choose something to be rewarded for, like self-improvement in some way. But the rewards for self-improvement are natural and intrinsic, not extrinsic like a trip to the vending machine or a piece of paper with a fancy border. Then it boils down to this - how do you get others to do what you want them to do? Or better yet, how do you get others to behave in a way that best benefits the group (the organization)?


First thoughts are that having the knowledge of the goals of the organization will allow individuals to come to the same conclusions. In this way we can clearly see and understand the costs and benefits of our actions. For example, if I know that we need to keep the Project A work light because I have all the information -  Product A is reaching end-of-life, we have other things to work on, and Project B is consuming more time than expected - then I can make decisions and act accordingly. Without it I cannot.

Let's take a concrete example where progress is straightforward to measure, like digging a ditch. In order to measure progress and have some regular motivation and rewards for achieving goals, we might mark daily goals along the path of the future ditch. That would be a clear, visible goal, and that mark can be set for each day according to the time available to dig the ditch. It would be relatively easy to take the length of the ditch and the total time available and divide it evenly. You'd also have to know how much ditch can be dug by the digging team each day. And then there's the real world with its chaotic factors that would affect the overall progress.

I'm not a ditch digging expert by any means, but I have dug a trench or two. Drawing on my limited experience and the power of imagination, let's think of a few things that could affect the progress of a digging team. Soil texture - soil rich in clay is harder to dig through than silty soil, and rocky soil would be difficult as well. Weather - should be obvious. Obstruction density - tree roots, utilities, garbage, old roads, etc. Health issues, injuries, equipment quality, people, alignment of the planets, etc...

With all of the potential causes of impedance to progress, unless something is obvious (thus preventable or unavoidable), tracking those daily goals will be important to maintaining the pace needed to dig the whole ditch. If the digging team is not meeting the goal for some particular day, how do you solve that problem?

Step one: Find out what the problem is. Why is the digging team not meeting its potential? Without that, you've got nothing. You can't treat it like a black box and throw out things you think might work until you find something that does. Well, you could, but that could do more harm than good.

Step two: See if the team knows the solution. Often the people doing the work will have an answer that works for them. Perhaps the issue is tree roots slowing them down. They might not have the tools they need to clear them out efficiently.

Step three: Implement the solution. Get the tools, people, system, whatever in place in order to get things on pace.

Sunday, April 3, 2016

Contain Dependencies

This has come up several times in various applications I've worked with. You have some dependency - let's say the MVC framework, for example. That dependency is a certain version, let's say 4 for the sake of discussion. You have multiple csproj files in your Visual Studio solution. One of those projects is the ASP.NET MVC 4 web project, another is a bunch of models, maybe another contains a collection of helpers. One of those projects which the web proj depends on also depends on MVC, but also has some other dependencies like StructureMap or NHibernate or Automapper.


Now imagine one or more of those projects is shared amongst multiple solutions since it contains re-usable code. If any of those projects has a 3rd-party dependency updated to the latest (and greatest?), what happens to the shared project? It too must be updated. Once that happens, all other solutions which use it are impacted. But what if the consuming code doesn't even use the feature that depends on the 3rd-party lib? Now you're stuck holding the bag anyways...


So here's the lesson - if you're writing a lib that is intended for re-use, separate any pieces of code that have external dependencies into their own assemblies. For example, if you have a library of helpers, have a library of core helpers, then a library of helpers for each other external dependency.


Helpers.
Helpers.MVC.
Helpers.EntityFramework.
Helpers.Automapper.


Maybe even version them if you see fit.


Helpers.MVC4.
Helpers.MVC5.


By separating those dependencies this way, you can avoid potentially crippling issues down the road where suddenly your applications won't compile and you don't know why. When you finally find out, you have to wade through oodles of muck to sort it all out. Plus it's just good SoC :)


Happy Coding!

Wednesday, March 30, 2016

What Is a Repository Pattern For?

There is a myth about the reason for using a Repository pattern that goes a little something like this: "You use a Repository pattern so that you can swap out the database technology if you want to switch to something else." In this post, I will bust this myth.

First off, I'm not saying that the swapping rationale is not valid. I've encountered one instance where we actually wanted to switch the data-store technology but the cost and risks of doing so would be well beyond what we were willing to invest since the data access code bled into all the other code! In this case, had the original developer used some kind of abstraction or at least a clean data layer, we would have been able to reduce certain risks at a reasonable cost.

Chances are that you will not change the data-store technology. However, what you will likely want to do is some automated testing. If you are going to do that (and since it's a fast, cheap, and reliable way to increase quality and reduce bugs, you should), this applies to you. Hell, if you are doing any sort of testing at all besides end-to-end testing, this applies to you too. Testing is the main reason for using the Repository pattern or a similar abstraction over the data.

You could run tests that are dependent on the database, or you could choose not to - especially early in development. If you choose not to depend on the db, then you will need to supply some stand-in values for the core business logic to function. In order to supply those stand-in values, you may want to read them from a different source like a flat file, or use one of the many test doubles to suit your needs.

For any reasonably complex system - let's say one with multiple data sources - you may not have full control over the data source. You may not be able to change the values. Or maybe others are changing the values for some other efforts and adversely impact your efforts. When you are doing your development work and suddenly your program no longer works as expected or you cannot verify your work due to some other work which impacts the program's dependency - your progress will grind to a halt while you sort it out.

So what do you do? You could copy the db while in a known good state; or you can write up your own db and use that for a source. You could write a bunch of insert statements to set up the database with whatever values you need. You could even write new values to and read from the database for each test case. You could even add some special logic just for the tests that write to the database, even if your program does not require you to do so. However, using an abstraction can lead to a cleaner approach when it comes to testing your business functions.

With an abstraction of the data layer, you can wrap all of the nasty hobbitses of sql statements, ORMs, or whatever you have cleanly behind something that looks like the code you are writing in the layer you are working on. You can supply values to the business logic by mocking, stubbing, faking, or otherwise substituting the implementation of the abstraction to suit your needs. You can use a scientific approach to testing your code as you are implementing it by changing certain variables during different test scenarios.

For an example, let's consider a case where the system needs to send an email. Let's say the recipient list comes from some service via an API and the email itself comes from an internal database. And let's say we want to use an SMTP server to send the email for now. All three of those things fall outside of the boundaries of the program. In the Hexagonal Architecture sense, the business logic depends on those but not on the specifics of their implementations. So go ahead and abstract them from your business logic.

Your business logic should: fetch the list of recipients, merge with the email template, send the emails. It should not care how you send the email or where the recipients or template come from. I would focus on testing the template mashing code and that the email is sent under the correct conditions with the correct values. I would try running different permutations of values through the tests - there are some techniques that I've found work very well for testing logic this way while reducing the number of tests, which can otherwise muddy up what they should convey to other developers. Look for those in a future post.
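Here's a minimal C# sketch of that shape; the interface and class names are mine for illustration, not from any particular library:

using System.Collections.Generic;

public interface IRecipientSource { IEnumerable<string> GetRecipients(string campaign); }
public interface ITemplateRepository { string GetTemplate(string campaign); }
public interface IEmailSender { void Send(string recipient, string body); }

public class CampaignEmailer
{
    private readonly IRecipientSource recipients;
    private readonly ITemplateRepository templates;
    private readonly IEmailSender sender;

    public CampaignEmailer(IRecipientSource recipients, ITemplateRepository templates, IEmailSender sender)
    {
        this.recipients = recipients;
        this.templates = templates;
        this.sender = sender;
    }

    public void Send(string campaign)
    {
        var template = templates.GetTemplate(campaign);
        foreach (var recipient in recipients.GetRecipients(campaign))
            sender.Send(recipient, template.Replace("{name}", recipient));  // the "template mashing" logic under test
    }
}

In the tests, all three interfaces get stubbed or mocked, so the permutations of values can be driven without a real API, database, or SMTP server.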

Some of the resources that the emailer system consumes can change (though it's mostly unlikely), and the Hex pattern can ease the transition. More importantly though, patterns like Repository aid writing and running the test code which is there to guide and make clear the intent of the business functions. The tests are there to offer an example of how the modules are to be used and to show how they should behave given specific scenarios. They are also there to keep the business logic clean of data access code, so you and other developers who work on the system don't have to wade through oodles of data access muck to sort out the business functioning of the system. In these ways the TCO of the system can be reduced, since changes can be applied more smoothly.

Friday, March 25, 2016

If You Write The DataBase First You Are Doing It Wrong!

In an epiphany, I figured out why developers want to make the database first. In root cause analysis fashion, let's play the 5-whys game and find out.


1. Why start with the DB? You need data in order to start developing the rest of the app.


2. Why do you need data first? Because you aren't practicing TDD.


3. Why aren't you practicing TDD? Because it takes you longer.


4. Why do you think it takes longer? Because it means you have to write more code.


5. Why does TDD necessarily mean that you have to write more code? Because you've written Unit Tests before and it's difficult to write them when your business logic depends on the data, so the Unit Tests become lengthy, take a long time to run, and are costly to maintain.


So it comes down to a myth that is invented from having done testing wrong in the first place. Perhaps there's an assumption that TDD is about testing, when it is really about design. Dr. Dobbs sums up the concept in this article, basically pointing out that the tests make you think about the code. There are endless sources on the web, in books and in magazines that can take you through the details. I will be staying focused on how TDD helps avoid the costs of developing the data layer first.


If your development efforts start with a test, the first thing you may soon notice is that you will need to provide some kind of data to the business logic. However, rather than writing a database right then and there you will use one or more of several patterns for substituting an actual DB in your code - repository pattern, resource, fixture, stub, mock, etc. This will allow you to focus on what the app DOES instead of the low-level details of a datastore. You will control the data that drives the logic in a scientific way for each scenario by providing the combinations of values that are expected for each scenario. The art is in knowing how to write the tests, which takes practice.
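For example, a first test might look something like this. IOrderRepository, Order, and OrderService are hypothetical names; OrderService doesn't even exist yet, and the failing test is what drives its design:

using System.Collections.Generic;
using System.Linq;
using Xunit;

public interface IOrderRepository { IEnumerable<Order> GetOpenOrders(); }
public class Order { public bool Closed; }

public class StubOrderRepository : IOrderRepository
{
    public List<Order> Orders { get; } = new List<Order>();
    public IEnumerable<Order> GetOpenOrders() => Orders.Where(o => !o.Closed);
}

public class DailyReportSpec
{
    [Fact]
    public void Report_counts_only_open_orders()
    {
        var repository = new StubOrderRepository();
        repository.Orders.Add(new Order { Closed = false });
        repository.Orders.Add(new Order { Closed = true });

        var report = new OrderService(repository).BuildDailyReport(); // not written yet - the test comes first

        Assert.Equal(1, report.OpenOrderCount);
    }
}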


Imagine if you had a DB first and you operated under the assumption that certain bits of data and certain methods of accessing that data would be needed. When it turns out they are not needed, or that your assumptions were incorrect, you've actually just done a lot of unnecessary work - i.e., you wrote more code, which took longer and wasn't needed.


Eventually, and maybe in the first few iterations of test-then-code, you will begin to write some models that can be represented by a data model. As your application takes shape, you should be safely refactoring so that the entire application becomes more cohesive over time. You have the tests to back you in the refactoring process - one of the reasons to start TDD at the system's interfaces.


Additionally, as you add features during the project, your manual testing or even Unit Tests will take longer to execute, and you will end up creating and deleting data and maintaining data setup scripts - which is more code to maintain. In the end you will end up doing more work than if you'd written the tests while designing - iterating test, then code to make it pass, then test...bit by bit.


When you eventually get to the point where you will need to integrate, you will now understand what you really need to persist to a database, but not until it is needed. If you start in this way, the way of TDD, then you will know that you do NOT need the database to write the application and you will see it as the outer layer of the onion that it is.


One final nugget - the most important reason for a repository pattern is NOT so that you can swap the underlying data store technology, though it is a compelling myth. More details about this and how to start TDD in future posts.




Tuesday, March 1, 2016

Const and Static in C#

I learned the true difference between const and static in C# when it comes to class members. This did not come the hard way, but through a PluralSight course. I'm thankful for that!


Here's the difference:


const - the value is set at compile time and effectively inlined at the call site. In other words, if you have a const FEET in a class Yard like so:


class Yard
{
public const int FEET = 3;
}


and you use it like this:


...
var bar = baz * Yard.FEET;
...


You would be OK if Yard were in the same assembly, but if it's in another assembly and Yard.FEET changes without the calling assembly being recompiled, the caller will still have the old value. Plus, a const cannot be changed once compiled, so you cannot compute it from something else like a config value.


With a static readonly member, you gain the benefit of computing the value at runtime, AND if the value changes (either by config or by code) and it lives in a separate assembly from the consumer (think GAC here), your calling code will retrieve the new value.
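For example (assuming a hypothetical FeetPerYard app setting):

using System.Configuration;

class Yard
{
    // Computed once at runtime, so it can come from config; callers read the field
    // at runtime instead of baking the value in at compile time.
    public static readonly int FEET =
        int.Parse(ConfigurationManager.AppSettings["FeetPerYard"] ?? "3");
}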


Why ever use a constant? It's a bit more efficient at runtime since it gets inlined and saves a lookup. But let's be honest here: if you are using C#, is that minor gain in efficiency really worth it? If that bit of performance makes a difference in your world, wouldn't you be using C or C++ instead? I'm sure a case can be made, but it's not likely to matter unless you have millions of calls for the same thing - and in that case there may be other optimizations that come first. Just saying...

Saturday, February 20, 2016

ABC Mouse Review

Having children during an age of technology means that children will be using technology to play and learn. One very popular educational children's game is ABC Mouse (ABCmouse.com). The game is very well put together and well advertised. It's a product of Age of Learning, Inc, a global initiative to prepare children for a successful school experience.


The game features age-appropriate activities ranging from puzzles to stories, songs and coloring. For example, there are original alphabet songs, counting games, matching games. Another key feature of the game is the ticket system. Each game produces a few tickets (similar to tickets at certain mouse themed children's pizza restaurants) that can be exchanged for prizes. The prizes are either related to the in-game fish tank, the hamster maze, the house, or avatar. Once a certain series of activities is completed (4-5 theme related activities) a prize and larger amount of tickets are awarded.


The reward system is pretty good and keeps children interested and motivated to play more. Tickets are earned but can also be purchased. There are two key things I would like to see from this game - and in the education system in general - goal setting and investing.


It would be great if the system had a way to set up a goal and work toward earning that goal. For example, if a specific fish costs 200 tickets, the game should have a way to encourage a child to set up some savings goal to earn enough tickets for that specific prize.


Another fantastic feature would be a way to invest tickets in something that produces tickets over time. Perhaps certain prizes like a specific fish or combination of fish could produce tickets if fed regularly. In any case, a mechanism to learn investing would be great. Currently, if a child wants more tickets, they have to work to earn tickets or have their parents buy tickets for them.


All in all, the game is great and I would recommend this to others. I will be making the suggestions to the makers of this game about goal setting and investing.

Sunday, February 14, 2016

JavaScript : measuring performance

I recently learned a couple great tricks for measuring performance in JavaScript. There's always the profiler in the browser, but that's a bit verbose if you need to just A/B two ways of doing something. The following techniques are great lightweight approaches that you can use when writing or performance tuning some code.


1. performance.now()


basically it returns NOW as in RIGHT NOW which you can capture in a var for later use. Here's a basic pattern for use:


var s = performance.now();
//do something
console.log(performance.now() - s);


this will log the time "do something" took to complete in ms.


performance is a property of the window object, therefore it is NOT available on nodejs. You can use the following in that environment:


2. console.time("key"); console.timeEnd("key");


use it like this:


console.time("key");
//do something
console.timeEnd("key");


depending on the js engine, you'll get an output like:


"key" 3.002 ms


If you want to measure more than just the execution time, like memory usage, i/o usage, and processor use these quick-and-dirty functions won't get you that. Look for a future post on those topics...

Monday, February 8, 2016

Credit Card EMV Security Chip

I was at the checkout yesterday and they started using the new chip reader. First of all, I'm glad this is "more convenient" than the old way...you know, like having to jam the card just a bit further into the machine or it won't take. Not only is it less convenient, it's still only marginally more secure.

You've heard the saying "Something you have, something you know", right? It's about multifactor authentication, that added layer of security which means someone can't just have something and gain access to the secured entity. In the case of the credit card, this second factor is missing. The clerks can check your ID, but most often they don't. So consider that for in-person transactions there is no authentication other than that you have the card. Something you have.
There is some hope for the future, though, since eventually the chip-n-sign cards will maybe be converted to chip-n-pin cards. In my experience you don't have to sign for a lot of transactions, only over certain amounts or even at a place you visit often. From what I can tell, the system doesn't really authenticate against the signature anyway; it's just some extra work for the consumer - perhaps another illusion of security.


Credit cards are easily dropped, lost, or stolen (by someone you know, or pickpocketed). Most of the time afterwards, they can just be plugged in and used without anyone knowing, since there is usually no ID check.
According to what is presented on creditcards.com, with their "nothing to see here, go back into your homes" message - not only is it going to cost over $16 billion to switch to chip-n-* cards, but there will also be an extortion-style choice for merchants: purchase the POS devices that read the chips, or else pay for the fraud. Ultimately the costs will be passed down to consumers and/or investors. The site claims chip-n-pin cards reduced fraud in Europe; by how much, it didn't say.


Any measure for improving security will result in increased inconvenience and a redoubled effort to hack the new thing, so two things will result - added inconvenience for consumers, and the thing eventually getting hacked anyway. I would propose using a thumbprint reader, but it's very likely that would get hacked too. So let's just stick to convenience at scale and focus on making thieves pay for their crimes instead of everyone else.

Friday, January 29, 2016

How To Write More Maintainable BDD Tests

Given we are writing BDD style requirements
And we are using cucumber syntax
And we want to be able to respond to changes
When we write our requirements
Then we should write them in a way that requires as few changes to them when requirements change
And we should minimize the number of tests that are impacted by changes in requirements.


We can write such tests in the following way:

Feature: Say Hello
User Story: As a marketer, I want the system to greet users by name when they log in so that the system feels more user friendly.

Given a user with the name "bob"
When bob logs in
Then the system should say "Hello, Bob!"

Since this is a simple case, it is sufficient to write the test in this way.

For a more extensive use case that is dependent on bits and pieces of data from many sources with complex business rules, it would be better to define the test cases in the following way:

Feature: Greet User
User Story: As a marketer, I want the system to provide customized greetings to users based on their purchasing habits so that the users will be more likely to increase their overall purchases.

Wednesday, January 27, 2016

Lean Change

I've recently caught wind of Lean Change - basically a way to achieve continuous improvement. It seems the key is regular, open communication. I was in the hot seat representing our team of five recently in a QA event put on by the CQAA (Chicago Quality Assurance Association). The lab exercise was to put together an elevator pitch to sell BDD to an executive (for some reason we chose an executive). So here's me, as a developer, "pitching" to the event speaker acting as an executive.


We thought we'd try to sell on cost reduction, reduced time to market, improved quality, and "Google does it, so should we." I did my best but was met with resistance and the proposition that QAs and BAs were not doing their jobs and/or would not be needed in the new world order of BDD. Since I was a developer in a room full of QA engineers, I jokingly confessed that we would no longer need QA engineers if we used BDD.


This was basically a sweeping-change approach, and the pitch was a hard sell. I would not recommend selling directly to execs in most cases. Follow Lean Change - get folks involved in the change. Change a little at a time, and don't make anyone feel like they are the ones you want to change: QAs shouldn't do this to devs, and devs shouldn't do this to BAs. It's really about working together as a team to resolve the issues, not about imposing change on others.

Tuesday, January 26, 2016

QA Perspective

I gained a new perspective today after attending an event hosted by a QA group. The event was about how to influence a change towards BDD (Behavior Driven Development). There was a remarkable turnout - a packed house, with some folks sitting in the back without tables. I (as a developer) was in a room with a bunch of QA folks. The topic was BDD, which I had understood so far to be of more interest to developers than to QA.


As it turns out, the focus from the QA standpoint was on changing the way development works. I had always viewed BDD as a fundamental change in the way BAs worked. I guess it really goes to show that in any hand-off style of work, the people downstream have the most interest in changing the patterns of those directly upstream.


This FURTHER reinforced a realization I had last week while doing a penny-passing experiment in Agile training. The experiment, which seeks to show the efficiency gains of iterative work over waterfall, goes like this -


setup - form groups of four, one person keeps time, the rest pass coins.


first run - pass the coins one at a time from one person to the next, but each passer can't start passing until he/she has received the whole batch. The timer runs from the start until the last person has received all of the coins. This mimics waterfall, and the coins represent deliverables.


second run - pass coins one at a time as soon as you have coins to pass. Time when the first coin reaches the end and when all coins reach the end. This is iterative.


In the waterfall process, everyone who is not passing coins is watching the coin passer. Clearly this takes longer and results in waste. In the iterative process, everyone is working with excitement and fervor to pass those coins along. Each person is paying close attention to the upstream passer because that's where the coins are coming from. How about that?


Think about the iterative approach for a bit. If we do this over and over again, and we can talk in between each attempt, then we can make incremental improvements to the process of passing coins and seriously reduce our cycle time (and increase our throughput) over time. This is old hat by now in software, yet it seems that so many organizations are still struggling to work together to make improvements. Another antiquated notion is the hand-off - clearly the hand-off leads to waste. Imagine the same penny experiment, but where each coin from the pile is passed along by the whole team at once.







Wednesday, January 20, 2016

C# to JavaScript: Namespacing

Namespacing is one of the most important things you can do when using js. While namespaces are built-in constructs in C#, they are not an official part of the js language. That doesn't mean you can't or shouldn't use them - quite the contrary. In this post, I'm going to show you how to create namespaces and how to add your code to them, not only to avoid polluting the global context, but also to avoid potential collisions with variables and functions from other js libraries.

If you are foggy on what exactly a namespace is, my definition is that it's a logical container used to provide identity and scope to related code. If a class "Languages" in C# is defined in a namespace "TDLL.Countries", the class would be accessible from code in other namespaces via TDLL.Countries.Languages, or simply Languages if the code file has a "using TDLL.Countries" statement outside the class definition. Now that a baseline has been established, let's take a look at some common namespaces in js.
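
As a quick C# sketch of that baseline (the Reports namespace, ReportBuilder class, and GetOfficialLanguages method below are hypothetical, added only to show qualified vs. unqualified access):

namespace TDLL.Countries
{
    public class Languages
    {
        // Related code lives inside the TDLL.Countries container.
        public string[] GetOfficialLanguages() { return new[] { "English" }; }
    }
}

namespace TDLL.Reports
{
    public class ReportBuilder
    {
        // Fully qualified access works from any namespace...
        private TDLL.Countries.Languages _languages = new TDLL.Countries.Languages();
        // ...or add "using TDLL.Countries;" at the top of the file and simply declare: Languages _languages;
    }
}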

jQuery is definitely one of those ubiquitous libraries. I'd say it's the most commonly used js library in the world - I don't have the numbers, but I bet you've used it in some capacity. The jQuery library has a namespace - 'jQuery' - and that namespace has an alias - '$'.

Click each "Say Hello" button in the jsbin demo below to see that both jQuery and $ are the same.


jQuery Namespace on jsbin.com

jQuery has some interesting documentation on the jQuery global namespace and the $ alias here.

And this snippet from the jQuery source code on GitHub is exactly how the global namespaces are assigned:


define( [...], function( jQuery ) {
    return ( window.jQuery = window.$ = jQuery );
});

In this case, "define" is a function from RequireJS, which is out of scope for this post. There's a lot that goes into the "jQuery" arg passed to the define callback; it is built up in the files omitted from the sample shown, then passed into define at build time so that the function is ultimately added to the window object (the global context). jQuery also gives you an option to release '$' in case there is a namespace collision with another library - which is also a good practice if you are aliasing.
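
That release mechanism is jQuery.noConflict(). Here's a minimal sketch of how it might be used - my$ is just an arbitrary local name, not part of jQuery:

// Hand the $ alias back to whatever library (or code) defined it previously.
var my$ = jQuery.noConflict();

// From here on, use my$ (or the full jQuery name) instead of $.
my$( document ).ready( function() {
    my$( "body" ).addClass( "loaded" );
});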

Another way to create a namespace is to create the namespace first, then add everything to the namespace.


window.TDLL = {};
TDLL.learnSomething = function(){ ... };

You could also do it using object literal notation, like this:


window.TDLL = {
    learnSomething : function(){ ... },
};


One of the challenges in defining a namespace in either of those ways is that if the namespace is already defined, you'd basically overwrite it in subsequently loaded files. To avoid that you can do the following:


window.TDLL = window.TDLL || {};
TDLL.everyDay = function(){ ... };

This neat trick works because the OR (||) expression returns its first operand when that operand is truthy (the namespace already exists) and its second operand otherwise (a brand new empty object), so an existing namespace is never overwritten.

Another way to do this is to use jQuery to merge your new object with the existing one.


window.TDLL = $.extend(window.TDLL, {
    everyDay : function(){ ... },
});

https://api.jquery.com/jquery.extend/

jQuery Merge Demo on jsbin.com

Using your namespace is simple (once you know it has been loaded, anyway):


TDLL.learnSomething();
TDLL.everyDay();

There are ways to ensure that your namespace has been loaded (by packaging all of your javascript together), but those are beyond the scope of this post and will be a topic for a future post.

Tuesday, January 19, 2016

JavaScript Logical NOT Operator

There are some interesting things to note about the logical NOT operator in JavaScript. What I've found is that if you aren't aware of these, you may find yourself coding up some defects without knowing it. Also, being aware of some important usages may improve your code by reducing complexity.


If you are coming from a strongly typed language, or are just getting started with JavaScript from ground zero, you may be interested to know that the NOT operator (!) behaves differently depending on the type of value the variable represents at runtime.


For example, you may have a statement:
var x = !y;
While x will be assigned a boolean value when this statement is executed, y could hold any type of value (string, number, object, function, undefined, null, or NaN), and how it is negated depends on that value.


The process of negation involves first converting the value of the variable to a boolean, then negating the result.


When converting to boolean, the following types are always false - null, undefined, NaN.


These are always true - object, function.


But string and number depend on the value - 0 is false, and an empty string ("" or '') is false, while any other number (aside from NaN, noted above) or non-empty string is true.


An array is an object so it is always true even if it is empty.


{} is also true.
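
A quick way to see all of these conversion rules in action is to negate a few sample values - a minimal sketch you can paste into any browser console:

console.log(!null);         // true  - null converts to false
console.log(!undefined);    // true
console.log(!NaN);          // true
console.log(!0);            // true
console.log(!"");           // true  - empty string converts to false
console.log(!"hello");      // false - non-empty strings convert to true
console.log(!42);           // false
console.log(![]);           // false - an array is an object, always truthy
console.log(!{});           // false
console.log(!function(){}); // false - functions are always truthy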


How much fun is that!?


Coming from C#, it would be great to write:


if(!x){
return;
}


instead of


if(string.IsNullOrEmpty(x)){
return;
}


But when x can be expected to be of any type, it can get much more complex.


In js, if you are in a position where you cannot use a guard but still need some logic to check whether a value is present, you can do the following:


if(!!x){
//do something useful
}


The underlying definition of ! according to ECMA-262 v5.1 is the negation of ToBoolean(GetValue(expr))


where ToBoolean is defined here http://www.ecma-international.org/ecma-262/5.1/#sec-9.2


and GetValue, which is more complex, is http://www.ecma-international.org/ecma-262/5.1/#sec-8.7.1


Even though the abstract ToBoolean operation does not explicitly call out the function or array types (both are object types), I listed those above for extra clarity.


Since there is no ToBoolean function in the JavaScript language itself, but there is a Boolean function (not to be confused with the constructor [new Boolean]), you'd be better off writing Boolean(x) instead of !!x for the sake of clarity and efficiency. Why have the runtime perform the conversion and negation twice?


The ECMA standard defines the Boolean(x) function as simply a call to the abstract ToBoolean(value):
http://www.ecma-international.org/ecma-262/5.1/#sec-15.6.1
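
For what it's worth, the two forms always agree; here's a minimal sketch that checks them against the sample values from above:

var values = [null, undefined, NaN, 0, "", "hello", 42, [], {}, function(){}];

values.forEach(function (v) {
    // Boolean(v) and !!v always produce the same result.
    console.log(Boolean(v) === !!v); // true for every value in the list
});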

Tuesday, January 5, 2016

Abstracting Service Calls Using the Adapter Pattern

Consider the following scenario:

Given an environment which contains the following:

UserResource - a service which allows clients to lookup users and user information.
AdminResource - a service which allows clients to lookup admin details.
TaskApplication - an application which consumes the UserResource.
DocumentApplication - an application which consumes the UserResource.
AdminApplication - an application which consumes the UserResource and AdminResource.

And each of these applications has its own variances in its domain-specific usage of the data provided by the UserResource.

When the applications are developed
Then you may want to build domain specific classes to represent the data you are using in each application.


For Example:

Suppose the AdminApplication needs to make a distinction between SystemAdminUser and User, such that it would be appropriate to create a subtype of User for the SystemAdminUser with additional detail which the UserResource does not provide; that additional data would come from the AdminResource.

It may be better to create a new type rather than a subtype - I'll get into why in a bit. For now, let's create a subtype from the UserResource like so...


namespace AdminApplication.Objects
{
    public class SystemAdminUser : User
    {
        public int[] SystemIds { get; set; }
    }
}
.
.
.
namespace AdminApplication.Adapters
{
    public interface ISystemUserResource
    {
        SystemAdminUser GetSystemAdminUser(int id);
    }

    public class SystemUserResource : ISystemUserResource
    {
        IUserResource _userResource;
        IAdminResource _adminResource;
        public SystemUserResource(IUserResource userResource, IAdminResource adminResource)
        {
            _userResource = userResource;
            _adminResource = adminResource;
        }

        public SystemAdminUser GetSystemAdminUser(int id)
        {
             var user = _userResource.GetUser(id);
             var admin = _adminResource.GetSystemAdmin(id);
             return Mapper.MergeMap(user, admin);
        }
    }
}

And when this adapter is consumed in the application, the application code will no longer need a dependency on the IUserResource and IAdminResource interfaces (assume this example uses DI/IoC to inject the concrete classes), only on the single ISystemUserResource interface. There is an added benefit in simplicity and reuse that will make the rest of the application code flow more naturally.

However, we do have a deeper issue with this choice. To illustrate, note the following consumer of ISystemUserResource:



using AdminApplication.Adapters;
using AdminApplication.Objects;
using UserResource.Proxies;

namespace AdminApplication.Web
{
    public class HomeController
    {
        ...
    }
    ...
}

See how we still have a dependency on the UserResource library because our class is derived from one of its types? This dependency will bleed all over the application code, the test code, etc. We may also be passing around quite a bit of information that we do not need, by way of properties on the User instances. Additionally, though not likely in this specific scenario, there may be differences in terminology between domains for the same thing - it would be confusing to have an EmployeeId when, in this context, it is always referred to as the UserId.

In order to minimize the bleeding of dependencies through consumer code, it would serve well to abstract the services behind a domain specific adapter using an Adapter pattern, Façade pattern, or a combination of both.




namespace AdminApplication.Objects
{
    public class SystemAdminUser
    {
        public int UserId { get; set; }
        public string UserName{ get; set; }
        public int[] SystemIds { get; set; }
    }
}
.
.
.
namespace AdminApplication.Adapters
{
    public interface ISystemUserResource
    {
        SystemAdminUser GetSystemAdminUser(int id);
    }

    public class SystemUserResource : ISystemUserResource
    {
        IUserResource _userResource;
        IAdminResource _adminResource;
        public SystemUserResource(IUserResource userResource, IAdminResource adminResource)
        {
            _userResource = userResource;
            _adminResource = adminResource;
        }

        public SystemAdminUser GetSystemAdminUser(int id)
        {
             var user = _userResource.GetUser(id);
             var admin = _adminResource.GetSystemAdmin(id);
             return Mapper.MergeMap(user, admin);
        }
    }
}
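
To show the payoff on the consuming side, here is a minimal sketch of a controller that depends only on the adapter - the Details action is hypothetical, and note there is no longer a using for UserResource.Proxies:

using AdminApplication.Adapters;
using AdminApplication.Objects;

namespace AdminApplication.Web
{
    public class HomeController
    {
        private readonly ISystemUserResource _systemUserResource;

        // The DI/IoC container injects the concrete SystemUserResource here.
        public HomeController(ISystemUserResource systemUserResource)
        {
            _systemUserResource = systemUserResource;
        }

        public SystemAdminUser Details(int id)
        {
            // The controller never touches IUserResource or IAdminResource directly.
            return _systemUserResource.GetSystemAdminUser(id);
        }
    }
}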


There will be a slight increase in code complexity at first. But the long-term benefits - being able to upgrade services without much change, better testability, and the reduction of reference bleeding - make this an investment worth making.

New look

I changed up the template on this blog today to make it easier for readers to find more of the content. I thought the scrolling template presented too much to look at all at once, and that it would be more useful to home in on specific posts.

Monday, January 4, 2016

Constraints Increase Creativity


In this episode of StarTalk, Neil deGrasse Tyson interviews David Byrne (formerly of the Talking Heads). Early in the interview they touch on constraints and how they affect creativity. The basic idea is that complete freedom does not stimulate creativity the way constraints do. Constraints give us problems to solve, and that is where creativity comes into play.


What this means in the software arena is that when we take on a new requirement, we should be sure that the constraints are well defined. Having well-defined constraints will stimulate our creativity. If they are not, then defining them is the starting point of our work.


On the other hand, if there are too many constraints, it's possible to completely stifle creativity - you can be left with only one option, or zero options. If this is the case, then it may be necessary to see which constraints can be relaxed. It's best to come with options when asking to relax constraints, and it would be useless to remove a constraint that does not open up new options. So maybe find out which constraints are the linchpins first.