Thursday, December 28, 2017

Customer Redirection

When your company no longer offers a product and you run out of stock, you have to inform your customers. That communication can lead to two very different reactions: on the one hand, they could buy an alternative from you; on the other, they could look elsewhere for the same product.


You could be proactive in keeping your customer happy by offering an alternative that you sell. Make it as easy as possible for them - and while you're at it, offer it at a discount or send a sample size. They're already disappointed that a brand or product they have come to love and trust is no longer available. That broken trust could easily transfer to you...this is an opportunity to make it right.


Here's an anecdote to illustrate the point. Today my wife received a disappointing email that a key product in her medley of beauty products was no longer available - and this right after she had ordered a refill. Needless to say she is going to have to find another product - but which one, and from where?


First, she'll scan the web for the same product. Then she'll see what others are recommending as an alternative. No one likes to go on a whim with this kind of thing - it's all about word of mouth or an endorsement by someone she trusts.


She does trust the company she placed the order with, as do many women and men. This email is an opportunity to redirect her toward a viable alternative - if it costs more, offer a discount for the first purchase just to make things even out.


But remember, you've also got to make up for the negative impression, so offer another discount on top of that, or offer to throw in a selection of items based on her shopping history (another opportunity to introduce a second product). Use your customers' product ratings or in-house expertise to drive product selection. Or, for a more advanced solution, use machine learning.


With machine learning, we can build smarter solutions to match your customers with the right redirection. When you incorporate our solutions in your communications chain, you turn missed opportunities into sales opportunities.

Thursday, November 16, 2017

Culture of Interactions

The Agile Manifesto values "individuals and interactions over processes and tools". Meaning that a conversation is where the work starts. Meaning that if your processes or tools are getting in the way of human interaction, then you aren't practicing this value and you aren't getting the benefits of Agile. For example, work requests delivered long-form via email are not an interactive process.


Agile processes generally use User Stories as a starting point for any development work. Just what is a User Story? It's really a placeholder for a conversation - no details except a concise introduction of the user's goal: "As an end-user, I want to print my essay so that I can mark it up with red ink and make revisions based on the markup." This is the user's perspective. The written story isn't even the point; it's really just there to keep track of the conversation.


As a technologist, I might want to pooh-pooh the idea of printing and using ink...or even wasting paper, as an environmentalist...but this user story is her story, not mine. Through conversation, there may be a change of heart, but that's part of the negotiation.


Through the initial conversation, we should come up with some Acceptance Criteria. It's documented, but not written in stone - it's just a first pass. It should be fairly basic and relatively quick to implement. It may not be complete yet, but it could be. The Acceptance Criteria are also known as the Acceptance Test...User Test...Story Test. They should describe how the user would validate the implementation (then we automate it, but that's for another time).


"When I click a print button, the essay should print. I should be able to go to my printer and get the printed essay from the printer. All pages should be there."


What can go wrong there? If we have any experience with printers, we know there are too many ways that can fail! Printer out of ink, out of paper, jammed, just not having it that day, not connected, turned off. Computer/OS not happy with the printer config, a dialog comes up before printing, not connected to the network. Network failure. Mercury in retrograde.


Perhaps in some future world, printing will just work, but for now we need to revise the Acceptance Test...that's why we need to work on it together...


"I can't promise the essay will print, but I can make it so that there's a print button which will tell your computer to print all pages of the essay to the default printer, or I can have it launch the print dialog."


There's some negotiation. There's supposed to be, it's another Value. But first, we should try the simplest thing that works...do whatever the OS does when you want to print. And that just might get the job done.


Without the conversation, we'd have wasted half the project schedule on documenting something that was nearly impossible and then attempting to implement something that was doomed to fail anyways...at least when Mercury isn't playing nice!

Monday, October 9, 2017

Cause of Death: Optimization

So you've got this team - developers, business reps, product managers, QA engineers - and they all need to work toward building out products that add value to the business (directly, by reducing risk, or by increasing efficiency). The business has some goals. You have some goals. Bonuses and career advancement and, frankly, job security are tied to those goals.

At the same time, it's difficult to find the right people. There are plenty of applicants, but who can be trusted and who has the right level of expertise to jump right in and get things done? Because the backlog is growing and the work is piling up and everyone wants to know if you can deliver - and of course when.

The New York Times ran a great article on how to find the right candidate. The thing is, once you've found the right candidate (well-qualified, energetic, enthusiastic, etc.), what happens next depends. Let's walk through the process and see why.

Ramping up


In order to make good on your deliverables, you need more headcount so you can get more done. You're at capacity and just can't ask your people to put in any more. Once that's approved, you now have the task of staffing the right person. You need to throw someone in straight away and start knocking things out, because you're way behind already, and in order to justify the added cost you want to show results.

So now you ask around internally - this is the standard pattern - to see if anyone knows someone who is a ninja at xyz. I'll tell you right now: everyone who is a ninja is currently working. You'll need to do some sales - or, typically, hire a salesperson (a recruiter).

So now you're in it with the recruiters for about 20%, give or take. For a $100k salary, that's $20k...say, I bet that $20k would make a great training budget! And the person you really think you need actually costs more than $100k. In a supply-and-demand economy where supply is limited and demand is high, you're on the wrong side of that equation. So $130k, plus all the other costs and recruiter fees.

After the Onboarding

Once you've shelled out the upfront costs and this maverick is on your team, they'll need to be brought up to speed. For a while this will slow down overall production. One or more of your people will need to be involved regularly in working with the newbie to get them up to speed on the tools, practices, and procedures.

It'll take some time before the dust settles. Let's estimate 4-6 months, depending on the learning curve, for a well-designed, documented, unit-tested, and maintained system...1-2 years for most systems...longer for train wrecks. You are likely to be in the 1-2 year range - and please don't lie to yourself, because that's counterproductive and we're focusing on being more productive. If you know I'm wrong in your case, then you probably don't need me to tell you any of this anyway; you can close the tab and carry on as you were.

Once the Dust Settles


If you're still with me and haven't gone back to balancing your budget or sipping your morning coffee while touring your friends' feeds, I'm glad you decided to stay because it's about to get real.

Taken at face value and without considering long-term stability, hiring the right person in the moment is a greedy optimization. Greedy optimizations seek to optimize the whole by optimizing each part in isolation, with the hope and assumption...well...that this will improve overall performance. Active stock traders often attempt the same thing. Rarely does it work out; one reason is that active traders are likely to sell when things go south. In stock trading, emotions sabotage long-term results. In the same way, needs-of-the-moment hiring sabotages the long-term health of the team when it isn't weighed against overall team dynamics, culture, and performance implications.
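To make "greedy" concrete, here's a toy example of my own (nothing to do with hiring, just the principle) - the classic coin-change problem, sketched in C#. Each step grabs the locally best coin, and the global result suffers:

using System;

class GreedyDemo
{
    static void Main()
    {
        // Make change for 6 from coins of value 4, 3, and 1.
        int[] coins = { 4, 3, 1 };   // sorted largest-first
        int amount = 6, count = 0;

        // Greedy: always grab the biggest coin that still fits.
        foreach (var coin in coins)
            while (amount >= coin) { amount -= coin; count++; }

        // Prints 3 (4+1+1) - but the true optimum is 2 coins (3+3).
        Console.WriteLine(count);
    }
}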

A rockstar-unicorn-ninja-whatever is not going to solve your productivity issues if those issues arise from broader optimization problems - morale, communication, and team cohesion especially. What WILL happen, if there are deeper-rooted issues, is that the ninja will come on for a bit, do some things, and ninja-smoke away once they figure out how deeply rooted the issues are - because those issues degrade the three factors that matter most to said ninja: autonomy, mastery, and purpose. Not only do we strive for those three, but we are inherently social beings - nature calls for social well-being. When our social well-being is not upheld, we fall behind our potential.

And you're still on the wrong side of the market, so there's that problem too. I will guarantee you that salespeople are at it regularly, pitching the great folks on your team. Any given one of them has an inbox full of emails and messages from recruiters. The other side of most of those openings is a mirror image of yours: they're either adding or replacing staff because they've got a pile of work that was due yesterday. Some aren't, but most are.

Solutions

You've got to offer something they don't. Compensation, bonuses, and benefits are a given in any decent market. Anyone worthwhile can get those anywhere. Your business may operate in non-IT markets where perhaps you're on the upside of the supply chain (unlike in IT). But in your technology departments you're on the demand side - you need good people and the supply is limited.

In order to combat such a position and come out ahead you have to optimize for the long-haul. There may be some short-term goals that occupy much space in the present, but even if those are met you may end up in a position that doesn't set you up well for the next steps, the next goals...it's all about being strategic! Look for more global optimizations rather than local ones - unless the local optimizations truly do lead to a global optimum.

What you've got to do is focus on two things: optimizing your workflow and retaining talent. Both of those involve investing in the talent you've got on hand. If you are willing to take a $20k hit for hiring new personnel, or a $120k+ hit annually, you certainly have the ability to work on improving the personnel you already have. Additionally, you can certainly afford to improve your system or process. Take a serious look at any bottlenecks - technical or otherwise - and address knowledge gaps, personnel conflicts, and process issues BEFORE adding more calamity to the mix by hiring.

Get out there and spread some buzz in the developer community about your organization. When you're ready to hire - when your house is in order and you are prepared to properly on-board - you'll have developers coming to you rather than having to hard-sell. Basically you'd be marketing which makes it easier to sell. Having a great team, process, and organization to brag about is a great way to attract the top talent that you will now be able to retain.

Non-Solutions


Consider hiring all rockstar devs. They may all be fantastic in their own right. And at the same time, if they can't work well together, or can't communicate, or the process hinders productivity anyway, or they end up building the wrong things - it doesn't matter how good each one is.

And how about the case of pushing to get a release done? By pushing I mean everyone working late and just churning away...dropping principles and mounting up debt to get the thing done. That's a way to burn folks out, and in this heyday of IT, that's a recipe for exodus. As far as turnover goes - going back to the active traders - fees are costly and chew away at the gains. You've got to have retention for the long haul.

But that's not the only danger with churn and burn. What's left behind may end up slowing things down significantly in the future. Move too fast, slap things together, drop important practices and time for reflection, team building, etc., and it'll only lead to further entrenchment - a self-reinforcing negative cycle. This has been well known for a long time now.

Adding people will always slow things down for a while. Adding them to a late project is a classic mistake (Brooks's Law) that no one should be making anymore.

Outcomes of Successful Solutions

Once you've applied a successful solution, you should be able to compare the outcome to the baseline. That's right - make sure to capture your current state so that you can KNOW that you've solved the problem.

You should be seeing better employee retention, improved skills, lower stress, higher morale, more engagement, proactive behavior, a lower bug count (if you'd like to track that metric), and more valuable production if your process is really in order. That last one may be beyond your own sphere of control, though perhaps not beyond your sphere of influence; since this post is about the productivity of your own group, it may well be beyond scope for now. That's arguable, though, since your productivity should really be measured by the actual value delivered. I'm sure I'll do a post about that in the future.

Friday, October 6, 2017

Cultural Evolution Part Three

First off, I must apologize for skipping over the topic I promised in the last post in this series. I'm working very hard on that promised post about the algorithm that looks good on paper. Something has come up during the course of a book I'm writing that I feel an urgency to blog about. It fits nicely with the theme of this series, so here it is.


Think about how we are biased from a very young age to play against other teams. To compete and even to think of the others as lesser beings or the enemy. We play soccer, softball, football and chess with the intent to beat the other side and celebrate victory. In our passive involvement in professional spectator sports, we get fired up about our team and even hurl insults at fans of the opposing team in and out of the stadiums and parks. Some of these are our colleagues we work with, others just fellow humans - probably good people too. But that's how we're biased from such a young age. It crosses political boundaries as well - cities, states, nations. As well as cultural boundaries. Always there is this notion of "the others".


In business, and at your company, you've got many teams working in different areas. This built-in "team bias" has to be overcome if your company is to act as a whole. While there will always be some level of internal competition, for better or worse, you can tilt those scales toward better by setting and socializing the overarching goals of the company. In this way all of your players will be running in the right direction.


How does this help? Setting, socializing, and tracking business goals at the organizational level puts everyone onto the same team. Who or what is left to compete against in that scenario? Rather than competing between departments, units, or teams, the competition is aligned toward achieving those goals. As always, the organization's goals need to be in alignment with its mission, vision, and values!




Thursday, September 28, 2017

Cultural Evolution Part Two

Yesterday I posted about how I evolved the process by improving the work tracking and planning tools. Today I will tell you about what we did that brought people together into a more cohesive team with high morale.


One day, I got an awesome board game as a gift (thanks to my amazingly thoughtful wife). It was a fantastic little strategy game that anyone could play. Gameplay involved dynamic shifts in both political alliances and strategy. The game was Quoridor. There is beauty in its simplicity. It's easy to pick up, which makes it the perfect catalyst for bringing people together for some clean, competitive fun. We played over lunch maybe a couple times per week. Our manager joined us and we all had a good time together!


What inspired this? Something stuck with me from when I was in the live sound business. One day, while working a gig at the Cadillac Theater I saw the house crew playing chess backstage to pass the time between setup and performance. There's a lot of hurry-up-and-wait in that business and usually you'd try to get a little shut eye since the nights were late and the days were long. I recall many days of just sort of sitting there trying to get some rest or exploring; but playing chess was how I really wanted to pass the time. That was the only crew I ever saw passing time that way and it stayed with me.


When I started my development career, I was itching to play some strategy games with colleagues. We'd go out to lunch together and they even had a Foosball table in the lunchroom. But none of us really spent a lot of time in the lunchroom because it was on another floor. Game playing started small, maybe a couple of us. Then we had some regular players and swapped others into the fun.


It was great for morale and a fantastic way to take a break from the work and go back in with a fresh start. Another benefit was the way it brought people together. The benefits of bringing teams together are enormous...companies will send their teams on weekend retreats for such things. They'll have weekend events with BBQs and after-work parties. Those are all well and good, and maybe often a little forced - mandatory extracurricular activities that can disrupt busy personal schedules. Game playing over lunch, on the other hand, is easy, optional, and not disruptive.


Something like this has to be driven by the team! And it needs to be supported by management. There's a fine line there. Sometimes a manager can just be a person and this is the context for it. At the same time, managers can have a BIG influence in the success of such things by jumping in and especially by not shutting something like this down. Heck, take some time for these things during working hours! Maybe on that Sprint Wrap up day after you've shown off your accomplishments for the Sprint. What else would you do for the rest of the day? Try to jam in another feature? No way! Sharpen the saw! Do games, coding challenges, put together a jigsaw puzzle...whatever the team decides!


I know there are some counterintuitive thoughts around this (I've encountered them)...we're not being paid to play games, we're being paid to build features. That's true. Fortunately, teams that play together stay together, and there's no better way to get more done than with great team cohesion and high morale. It's an investment in a more productive team. So keep on organizing the work at a nice steady pace and helping each other out (by working together) and you'll have the time anyway - it's a positive feedback loop!


Next post I'll talk a bit about local optimization vs overall optimization as it applies to teamwork and workflow. Here's a hint - there's a class of algorithm involved.

Wednesday, September 27, 2017

Cultural Evolution

I've been thinking a lot lately about organizational culture and how it might evolve. Actually, I've always thought about this, and since my entrée into the software world I've applied myself to evolving both culture and process. I've worked through the evolution of tools and practices that enable collaboration. Here is one way I applied tool improvement to help along the first team I worked with.


This team used a SharePoint list to track work - each work request had an entry and was printed out and handed to the developers with screenshots and a bit of write up about what was to be done. The process worked ok because when you went to the list you would be able to filter it and find your work. There was room for improvement so I jumped right into it like this -


First, I created a view for developers (where assigned to = Me). I shopped this around to other developers and my manager. She saw the potential there and asked for a view for managers. We briefly discussed and I went off and created it.


At the same time the managers changed their practices a bit and started using the views to plan on a weekly basis with quick checkins each morning. Afterwards, it was much easier for everyone to know what they should be working on. Micromanagement wasn't an issue. Managers managed the work flow and distribution, not each individual's work. Having the right tools in place helped with that because they could see right away where things were - no need to hover or constantly interrupt to ask or waste time in meetings that should be used to discuss impediments.


pro tip: Managers need to know how the work is going - proactive updates will keep them informed. Imagine if your mom was in a long surgery and you were left for hours and hours wondering how things were coming along. Or maybe you have something in for repair for weeks with no word about how it's going. It can be troubling not knowing - and managers have to answer for those things to their own managers or to clients.


Hopefully sharing this experience helps to illuminate the value in having the right tools to track and communicate about how the work is going. This form of passive communication enables anyone to check in on the work without bugging or micromanaging (both counterproductive activities).



Wednesday, September 20, 2017

Amazon Cognito?

I'm looking into interfacing with Amazon Cognito for user account management. I'm leaning toward abstracting it rather than using it directly in the web code via their JS SDK - maybe use a server-side SDK instead. I could just use what ASP.NET has built in, which seems like it might be a bit easier...the documentation for Cognito isn't straightforward - as is the case with all their docs.

Wednesday, August 23, 2017

AWS Elasticache, Memcached, and Enyim

I've been knee deep in caching lately since I've been looking into an issue revolving around AWS Elasticache and Enyim. I've learned a lot so I'll share some things. Let's start at the beginning -

Enyim is a .NET client for memcached, which is a cache server - basically an in-memory key-value store with a timer on each value so it can be expired after a certain amount of time to preserve memory.

memcached itself doesn't support clustering, but Enyim can be configured to work with clusters. So you can have 2 or more memcached instances on one or more servers and use Enyim to talk to them all as if they were one cache...sort of.

Since abstractions leak, Enyim, as an abstraction over multiple cache nodes, leaks in its own way. When a node goes down, you get nothing back in the client. Well, that's not quite true - you get Success == false and StatusCode == null, which are the default values for the result object. And when you do get that, it means a cache node went down in your cluster - but not to worry!

Interesting thing about those cache clusters and how Enyim manages to use them. Enyim uses a hashing algorithm (actually several you can select from) which shards based on the key and the number of nodes in the cluster - it picks a node based on the key. Additionally, provided the servers are added in the same order in every cache client, the choice will be the same no matter where you are calling from. You could have any number of producers and consumers of the cache, and each will pick the right node.
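Here's a simplified C# sketch of the idea - an illustration of the principle only, not Enyim's actual node locator (the real, selectable ones, like ketama, are more sophisticated):

// Pick a node purely from the key's hash and the node count.
// Same key + same node list = same node, from any client.
static int PickNode(string key, int nodeCount) =>
    (int)(Hash(key) % (uint)nodeCount);

// FNV-1a: a stable hash. (string.GetHashCode() can differ between
// processes, which would break cross-client consistency.)
static uint Hash(string key)
{
    uint hash = 2166136261;
    foreach (char c in key) { hash ^= c; hash *= 16777619; }
    return hash;
}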


Let's say you've got 4 nodes and you have a key 'my data key'.

Let's say you use Enyim to Store("my data key", someData, ...) and hypothetically that key hashes to a match for node 3. 

Now when your consumer of that data - which could be on a different server in a different part of the world for all it cares - calls Get("my data key"), that key will also hash to a match for node 3 and you'll get 'someData' in the response!

Internally, Enyim calls those nodes servers. It marks each server as IsActive when it makes a successful socket connection to the cache instance. When you use the Enyim CacheClient to make a call to the cache and the socket cannot connect, it marks the server/node as inactive (IsActive = false) and adds it to the dead server queue.

Don't worry though - it tries to reconnect to the servers in the dead queue. When the connection is re-established, the node automatically becomes available in the cache cluster again. There is a caveat when a node fails, though. When your client calls and the node is dead, the node isn't marked dead BEFORE the call, but after. It's up to you to test for those conditions (null StatusCode) and retry the call so that it WILL use the next node in the hashring.

In the same scenario above, let's say node 3 is unreachable. Well you could see how network connectivity could be an issue if they aren't co-located. In that case it could be down for only some clients but not others. Let's ignore that issue for this scenario and say the server is being rebooted for patching or something.

Here's what will happen...your consumer will make the first call and the key will resolve to node 3. Enyim will attempt to establish a connection to that node. When the socket connection times out, it will mark the server as "dead". It will return Success == false and not set a StatusCode in the response object. The VERY next call will resolve to the next server for that cache key while the node is down.

With Elasticache in the mix, there's another layer of abstraction at play. Elasticache offers its own grouping and node discovery so that you don't have to specify each node address manually. This means you can add nodes without re-configuring the client. When using this option, you should definitely use Amazon's cluster configuration for Enyim: it handles node discovery for you, meaning you only configure one endpoint per cluster. This is a good thing, but what they haven't put in the mix is handling node failures on their end. It would be nice if they did, but they don't. So just think about that...
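For what it's worth, on .NET the setup looks roughly like this, assuming Amazon's ElastiCache Cluster Client package (the endpoint name here is made up):

using Amazon.ElastiCacheCluster;
using Enyim.Caching;

// One configuration endpoint for the whole cluster;
// node discovery happens behind the scenes.
var config = new ElastiCacheClusterConfig(
    "my-cluster.abc123.cfg.use1.cache.amazonaws.com", 11211);

var client = new MemcachedClient(config);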

Still, when you are using cache clusters in Elasticache, the best way to handle a node reboot is to simply retry the call - the amazing Enyim will do its alchemy and switch to a live node until the dead one is back up and running again. It works the same with adding and removing nodes too. It's all automagic!

Summary -

When using Enyim, just put some retry logic in your code - preferably in your own wrapper class, so you aren't repeating yourself everywhere and forgetting to do it somewhere.
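Here's a minimal sketch of what I mean, using Enyim's ExecuteGet API - the wrapper shape and names are just my illustration:

using Enyim.Caching;

public class RetryingCache
{
    private readonly MemcachedClient _client;

    public RetryingCache(MemcachedClient client) { _client = client; }

    public object Get(string key)
    {
        // If the node for this key just died, Enyim marks it dead AFTER
        // this call, and the result carries the default values:
        // Success == false and StatusCode == null.
        var result = _client.ExecuteGet(key);

        if (!result.Success && result.StatusCode == null)
        {
            // Retry once - the key now resolves to the next live node.
            result = _client.ExecuteGet(key);
        }

        return result.Success ? result.Value : null;
    }
}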

Wednesday, August 16, 2017

Where to Begin the Agile Project

So, you're a programmer and you get a new project to work on. Let's assume this project involves some data source(s), probably some persistence, and user interaction. How do we approach the construction of this system?


In the first place, we'll want to understand what the system needs to do and what it will generally "look like". When I say look like, I mean how it will generally be architected - how does the data flow? What layers? What components? Where do they interface? This involves some basic design and there are several patterns to draw from that would get the job done.


However you design the general structure of the system, you'll need to understand the data flow. You'll be presenting some data to a user, and that data will come from somewhere. It seems obvious that the first step is to begin where the data flow starts...begin at the beginning. However, I say NO!


Through years of trials and errors of beginning at the beginning, I've found that you generally have to change things near the data source at least a few times throughout the life of an "Agile" project.


I've spent much time modeling databases and building data layers for applications, only to find that once the front-end was built on top, the data model had some fundamental flaws. The main trouble is that those flaws only came out when the business stakeholders were able to interact with the system - when they were working with something concrete. Only then did the details truly solidify. But by then, there were layers built up below the presentation layer - interfaces, implementations, tests - that all needed to change to support this new understanding. This leads to much rework!


If we can have a better understanding of the system before there is a full foundation baked into the system, then we can reduce the amount of wasteful rework on a project.


To solve this problem, we have some options. We can build wireframes and prototypes in a Design Sprint. This sort of design-up-front is useful only insofar as our understanding of the system is complete at the beginning of the project. If the requester has a good general understanding, mockups will be relatively more useful - they may trigger insight and understanding earlier in the "Agile" project. It's a way of saying "if I were to build this, would it solve your problem?" I'm for it and it does a lot of good, but it's not where we should end the game!


Another technique I like to use, and I'm urging you to do the same, is to "start at the end"! By starting at the end - meaning the desired outcome of the feature we're building this cycle - we can set some concrete earlier in the sprint (preferably on Day 1 - Sprint Planning). If you're building an analysis system, start with the graphs and charts. Mock up the data that feeds them. Then build backwards from there. It's best to find out how the system will provide value. Is it an invoicing system? Start with the invoice itself! Work back to entering the billing info.


When we do this - and it doesn't have to look perfectly pretty yet - we give the business stakeholder something to interact with BEFORE we build up all the supporting layers underneath it and put on the fresh coat of paint. It's like building a custom bike starting with the seat and handlebars first...then you put on the wheels and a temporary engine to test the rake of the front forks, footpegs, shifter, grips, brake levers, etc...the user interface. Then they know it's built to what they need - after all it is a custom!


But software is a bit different from building a hog. Users can't see or feel the engine directly. They interact through whatever UI you put on there - there can be mockups and design drawings and diagrams, but it's the concretion that will have the most meaning. Besides, mockups are throwaway - waste. You can build a mockup with the actual code; it's easy these days to get a basic UI up and running quickly. Better to build the real thing and then iterate from there, one little manageable decision at a time. Better to have to refactor early in one layer than to have to chase a change up the whole stack!

Friday, August 11, 2017

Friday Challenge #6: Which Way Should We Go?

Let's say you've got some decisions to make. You have a bunch of stuff to get done, but you aren't so good at managing your priorities...so you create a system to keep your priorities in order.


Ok, that's cool. But how do you prioritize those tasks? Maybe you want to take care of the most overdue one first - if there are any that are late. Or how about the most important? Or the one closest to its due date that isn't late?


Make a Priorities class with a method called GetHighest that gives back the highest priority item depending on the aforementioned conditions.


Let's try this two ways:


1: Write an algorithm that takes into account all those ways to prioritize using 'if' statements.


2: Use a class for each way you want to prioritize and pass it into the GetHighest method. The class should have a Rank method that takes the array (or whatever) and gives back the ordered array (or whatever).
You should have 1 class for each way and pick the way in a different method.


Now go back to way 1 and add a new condition and ranking. And do the same for way 2.


Ok, so maybe you have to add an if statement either way, or use a map of conditions to classes for way 2. But how about this...combine conditions. Or maybe chain filters...make filter classes to remove tasks that are late and tasks with low value...combine those two filters in different ways with the soonest-due ranking. Now you can create different filter and ranking strategies based on different conditions...


Now, if you have 2 hours to spare and the top-priority task takes 4 hours, you can apply a strategy that takes into account how much time you've got, how many people are available, what skills are required, or whatever else you need...do that with procedural code and you'll wind up with a twisted mess to sort through any time you need to add a new way to rank or filter! Go ahead...try it with way 1!


In this challenge you've exercised the strategy pattern and possibly the factory pattern if you've used a factory to retrieve the appropriate strategy.
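For reference, here's a minimal C# sketch of way 2 - the task shape and names are illustrative, not part of the challenge:

using System;
using System.Collections.Generic;
using System.Linq;

// A hypothetical task item.
public class TaskItem
{
    public string Name { get; set; }
    public DateTime DueDate { get; set; }
    public int Importance { get; set; }
}

// Each ranking strategy orders the tasks its own way.
public interface IRankingStrategy
{
    IList<TaskItem> Rank(IList<TaskItem> tasks);
}

public class MostOverdueFirst : IRankingStrategy
{
    public IList<TaskItem> Rank(IList<TaskItem> tasks) =>
        tasks.OrderBy(t => t.DueDate).ToList(); // earliest due = most overdue
}

public class MostImportantFirst : IRankingStrategy
{
    public IList<TaskItem> Rank(IList<TaskItem> tasks) =>
        tasks.OrderByDescending(t => t.Importance).ToList();
}

public class Priorities
{
    private readonly IList<TaskItem> _tasks;

    public Priorities(IList<TaskItem> tasks) { _tasks = tasks; }

    // No if statements here - the caller picks the strategy.
    public TaskItem GetHighest(IRankingStrategy strategy) =>
        strategy.Rank(_tasks).First();
}

Adding a new way to rank is now a new class, not another branch inside GetHighest.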


A few things to try with the factory - use contextual information from the real world: time of day, number of tasks to prioritize, time available, etc. Re-use the filters in the factory to get the percentage of tasks that are late and choose a ranking strategy based on that percentage...you can see how various strategies can be combined to create new strategies and even reused in ways that a procedure cannot. This also works in functional programming; in fact, it's how functional languages work best - map, filter, etc. The strategy is the function you pass into map or filter.

Friday, August 4, 2017

Friday Challenge #5: Fixed Gear or 3-speed?

When I first encountered an OO language (Java) there was a tutorial that used bicycles to explain polymorphism. All bikes have some behaviors in common - you pedal them to move forward, you turn one of the two wheels using the handlebars to steer. While there are exceptions - pedal-less bikes for kids, sit down bikes with levers to steer, pedal assist (actually a moped) - the common case serves as great fodder for object oriented programming. With that, let's start the next challenge!

You don't have to use Java to take on this challenge, there are many many object oriented languages and many that have support for classes. Use whatever you would like to use - learn a new lang maybe!


Given the following types of bikes:

3-speed cruiser, id=1, type=MultiSpeedBicycle
10-speed mountain bike, id=2, type=MultiSpeedBicycle
BMX - fixed gear, id=3, type=SingleSpeedBicycle


Write a method that takes in a bike id and returns the bike. The method signature should have the base/parent type as the return type.


Bicycle getBicycle(int id);

Bicycle is an abstract type.

You can either make a static method on Bicycle if your language allows it, or make a BicycleFactory to house the method.


The basic bike (the abstract Bicycle type) has 3 methods - pedal(int pressure), applyBrakes(int pressure), steer(float angle).


A bike with gears EXTENDS the basic bike with two additional methods: shiftUp(), shiftDown().

*If you'd like, you can add some additional methods like getCurrentGear(), getMaxGear()...or you could return a bool from shiftUp and shiftDown to say whether the gear changed...or return the resultant gear of the action. The more the internals of the object are protected, the better; the consumer should not know what gear the bike is in, only that they can shift gears up or down.

Whatever you use to call getBicycle will eventually want to check what type that bicycle is...as far as it knows it only has the basic bicycle methods. Until it does know, shifting isn't an option.

Let's say we have a UI for this bicycle. Actually we would want a different UI for each type since users will interact differently with each type. The shiftable bicycles will have some way to shift gears in the UI - maybe a certain keypress or tapping the screen a certain way...how about pressing shift levers?


Fetch the proper UI for the type inside a method that has this signature...

BikeView getView(Bicycle bike); 


Inside that method, interrogate the type of bicycle and return the view for that type. Put that method into a BicycleViewFactory - do you think the constructor for it takes in all the views, or maybe something that knows how to load all the views? Just stub out something for now - create the various view classes and have the method return the right one for the type of bicycle. Maybe you really only need two types of bicycles, maybe you want three - that's entirely up to you and how far you want to go with the details.


So that's the challenge...write up those classes and put it all together with some way to exercise (pun intended) the methods and verify that the right View is returned in the end. The View doesn't actually need to have anything in it, and neither do the methods on Bicycle...stubs will work just fine. The lesson here is in how getBicycle and getView are implemented.
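If it helps to see the shape of it, here's a bare-bones C# sketch - stubs only, with the class names from the challenge and the rest being one possible take:

using System;

public abstract class Bicycle
{
    public abstract void Pedal(int pressure);
    public abstract void ApplyBrakes(int pressure);
    public abstract void Steer(float angle);
}

public class SingleSpeedBicycle : Bicycle
{
    public override void Pedal(int pressure) { }
    public override void ApplyBrakes(int pressure) { }
    public override void Steer(float angle) { }
}

public class MultiSpeedBicycle : Bicycle
{
    public override void Pedal(int pressure) { }
    public override void ApplyBrakes(int pressure) { }
    public override void Steer(float angle) { }
    public void ShiftUp() { }
    public void ShiftDown() { }
}

public class BicycleFactory
{
    // The id-to-type mapping given in the challenge.
    public Bicycle GetBicycle(int id)
    {
        switch (id)
        {
            case 1:
            case 2: return new MultiSpeedBicycle();  // cruiser, mountain bike
            case 3: return new SingleSpeedBicycle(); // BMX
            default: throw new ArgumentException("unknown bike id");
        }
    }
}

public abstract class BikeView { }
public class SingleSpeedView : BikeView { }
public class MultiSpeedView : BikeView { } // renders the shift levers

public class BicycleViewFactory
{
    // Interrogate the runtime type; return the matching view.
    public BikeView GetView(Bicycle bike) =>
        bike is MultiSpeedBicycle ? new MultiSpeedView()
                                  : (BikeView)new SingleSpeedView();
}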


Oh! I almost forgot! There's one more thing you can do - you'll want something to load the bike and return the proper views...a BicycleController! First it needs to load up an input view to receive the bike id (maybe list them all - listBicycles() and have the user select one if you'd like). Once an id is sent to getBicycle, that method should call your _bicycleFactory.getBicycle() method to load up the bike. Pass that instance to the ViewFactory to get the view...if you got back a BicycleView return it to the user - else maybe return an ErrorView...the BicycleController should take in both the BicycleFactory and the BicycleViewFactory and hold internal references to those...they should both be set up and ready to use before they are passed into the controller.

Feel free to implement the UI in any way you see fit - heck try going for multiple UIs! Everything should be re-usable at this point...the only things that should change based on the UI you want to implement are the Views!

Let's take that a bit further... Consider different UIs. Maybe the user has certain preferences. Or maybe it's a different view per screen type. Perhaps there's a VR view with actual pedals and steering wheel as input...a stationary bike! Got it!? Those would have different Views for different user interfaces.

Alright, have fun! Happy Coding!

This challenge is all about polymorphism, factory pattern, MVC pattern, encapsulation, Dependency Injection.







Thursday, August 3, 2017

How to Make a DBA Salty

If you want to upset your DBA - and possibly get privileges revoked, or at best not be able to ask for any favors - then here's how you can go about that.


Open a SQL editor (SSMS, Toad, Dbeaver, whatever you use to connect to and execute SQL). Connect to the highest level database you have access to. Make sure any kind of autocommit settings are off if this applies.


Now, you want to make sure you're not going to do anything really damaging, like drop tables or something crazy like that - you probably won't have access to do that anyway if the security policies are set up correctly. But you should be able to update some data.


Now open a transaction:


BEGIN TRAN;
or
START TRANSACTION;


or whatever your specific DB uses to start a transaction.


Now update some tables from within the transaction. It's best to do a whole range of updates and a few inserts just to ensure that table level locks are taken. Do this on a few tables just to be sure.


From within that same transaction, select from all those tables you've altered to verify that the data has changed. And that's it...don't commit the transaction, just leave it open. Before you know it, warnings will go off, alarms will sound, alerts will get sent. Anything using the DB will grind to a halt.
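Here's the whole recipe in one snippet - SQL Server flavor, with a made-up table and values:

BEGIN TRAN;

-- A ranged update and an insert, to be sure locks are taken.
UPDATE orders SET status = 'on_hold' WHERE status = 'open';
INSERT INTO order_notes (order_id, note) VALUES (42, 'just testing');

-- Verify the changes from inside the same transaction.
SELECT * FROM orders WHERE status = 'on_hold';

-- ...and now just walk away. No COMMIT, no ROLLBACK.
-- Everyone else blocks behind your locks.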


Eventually, a DBA will find that you've caused this and will lose trust and faith in your abilities. Lock the database like this a few times and you'll be put in the corner. Would you lose your job over it? Perhaps....


If you've left a transaction open like this and closed the window with the query, the connection - and the transaction - might even be left open. Then it's up to the DBA to kill the connection and roll back the transaction.


BTW, you should probably ROLLBACK the transaction eventually; perhaps the DBA will ask you to close the transaction. That means either rolling back the changes you've made, so that they go away without actually updating the DB, or COMMITting them, which would apply them to the database. In this case, you were just messing with the data, so you probably don't want to commit the changes - they'd be saved permanently!

Friday, July 28, 2017

Friday Challenge #4: Hack Troy

Ok...it's time for another Friday Challenge! This one is for a bit of fun...be a hacker for a day. Hack Troy Hunt's site. Don't worry, it's legal - he invites you to do so. You see, Troy is a world-class security guru and set up the site to accompany his PluralSight training video, Hack Yourself First. You can learn a lot from him in this free course, so why not check it out!

And while you're at it, pop over to OWASP and see what the top 10 web security vulnerabilities are. Spoiler alert!!! The top 3 haven't changed in the last 4 years. Jeez, guys...aren't we getting it yet!?!?

Friday, July 21, 2017

Friday Challenge #3: HTML 3-ways

Ahhh...web frameworks. What do they do for us? A lot! But do we really need them? Let's find out with this Friday Challenge.

This challenge has 3 parts (tip: try not to get too fancy and get bogged down on any one part).

Part 1: Build a web page with HTML and JS only (use some CSS if you like pretty things). The trick for this one is to use vanilla JS only - no jQuery or anything like that. Do all DOM manipulation with plain JS.

Part 2: Build the same thing (or refactor existing) using a templating tool such as Handlebars or for additional challenge build your own rudimentary templating tool.

Part 3: Build the same using a full-on framework such as Angular, React, Ember, Elm, or whatever flavor of the day you're feeling.

Build whatever you are comfortable with - a hello world if you wish. But if you really want to realize the value of the challenge - try something that interacts with an API that returns data (and handle the possible failure of the API call or no data response).

We do depend on frameworks for a lot. And yeah, it's not always necessary to do so, but if you decide to have a go at not using one, you may end up rolling your own. It depends on what you're going for...either way, it's good to know what life is like on both sides.

Friday, July 14, 2017

Friday Challenge #2: Make Your Own Mocks

Mocking frameworks are all the rage in unit testing - they can help you build out Mocks, Fakes and other types of Test Doubles. We don't strictly need them to create those doubles, but they sure do help! If you haven't guessed yet, this challenge is all about test doubles! (Classicists need not apply! JK, this is for you too 😁).

Two parts:

  1. Use a Mocking Framework and a Unit Testing Framework to create a fake data reader, pass that into a business logic unit and use it to fetch some fake data.
  2. Create your own fake and pass it into the same business logic unit and use that to fetch some data...can you implement the fake so that it can be set up with different fake values for different tests? Maybe start with static data returned from the test double (in that case, it'll be a stub).


I suggest using a Mentor which has Mentees...and getting the Mentees that have scheduled sessions. For an extra challenge, use two test doubles instead - the data reader and a notifier - and use the notifier to "send" reminders to all mentees without scheduled sessions. You won't actually send the notifications, but you'll need to verify that "remind" happened.
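Here's roughly the shape I have in mind in C#, with Moq and xUnit standing in for the frameworks - the Mentor/Mentee types are hypothetical stand-ins:

using System.Collections.Generic;
using System.Linq;
using Moq;
using Xunit;

public class Mentee
{
    public string Name { get; set; }
    public bool HasScheduledSession { get; set; }
}

public interface IMenteeReader
{
    IEnumerable<Mentee> GetMentees(int mentorId);
}

// The "business logic unit" that consumes the data reader.
public class Mentor
{
    private readonly IMenteeReader _reader;

    public Mentor(IMenteeReader reader) { _reader = reader; }

    public IEnumerable<Mentee> GetScheduledMentees(int mentorId) =>
        _reader.GetMentees(mentorId).Where(m => m.HasScheduledSession);
}

public class MentorTests
{
    [Fact] // Part 1: a framework-built fake.
    public void ReturnsOnlyScheduledMentees()
    {
        var fake = new Mock<IMenteeReader>();
        fake.Setup(r => r.GetMentees(1)).Returns(new[]
        {
            new Mentee { Name = "Ann", HasScheduledSession = true },
            new Mentee { Name = "Bob", HasScheduledSession = false }
        });

        var result = new Mentor(fake.Object).GetScheduledMentees(1);

        Assert.Single(result);
    }
}

// Part 2: a hand-rolled stub - static data, no framework needed.
public class StubMenteeReader : IMenteeReader
{
    private readonly IEnumerable<Mentee> _data;

    public StubMenteeReader(IEnumerable<Mentee> data) { _data = data; }

    public IEnumerable<Mentee> GetMentees(int mentorId) => _data;
}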

Thursday, July 13, 2017

Managing Complexity in Today's World

Are business problems more complex today? That's the assumption I'm going to operate on - and I'm sure you'll find no shortage of sources on the web to back that up (empirical evidence aside). So let's accept that rhetoric for now, in good faith and with an appeal to common sense. Padding aside, that complexity flows through to the systems we create - software included. In order to manage that complexity, we've become Agile.

I hear ya: "Not another article about Agile...". I know, I know.

But let me at least tell the story about what inspired this and then move on to the point. An IT professional (non-technical) and I started chatting, and the conversation led to tools for automating business processes. I've developed on platforms and built custom solutions for business process automation. Evaluating solutions can be a complex process in and of itself. The analysis is fraught with uncertainty, broad (inaccurate) estimation, bias, and difficult-to-predict outcomes. Marketing (being what it is) can definitely skew the analysis. Heck, that's part of the duty of marketing! With proper marketing, sales is nothing but finalizing the deal that was already done through the marketing. So what does that have to do with Agile?

Agile is about being able to respond to change. In software, if we are truly following Agile principles and the practices set forth by the minds that solidified the movement, then we are designing and developing software that can respond to changes in the face of complexity. Lately, that seems to be the mantra of the business world as well - business needs to be Agile so that it can respond to changes in the complex market. As technology and automation have been evolving exponentially, so evolves the need for business to evolve to respond to those - among many other - changes.

Think about what this means in terms of managing change in the face of complexity and cost. In the software world, cloud computing has become all the rage for just those reasons. In the cloud, you pay only for what you use; changing and configuring resources is relatively easy; there are no data centers or servers for the business to manage. Good software design has been about responding to change for many, many years, but now the infrastructure has caught up. It is a service rather than an internal resource.

Back to the topic of deciding on a platform...

Some platforms follow the cloud model - pay for what you use. Others - pay a big fee up front along with licensing and maintenance fees annually. Some platforms can respond to change - they are testable, debuggable, manageable. Others...not so much. Some platforms can handle some business situations easily. Usefulness is a function of cost versus realized value - as always, use the right tools for the job.

But who should decide what the right tools are?

When you take your car to a mechanic - you may be paying for the service and it may be your car, but the mechanic knows which tools to use. Same with the surgeon, plumber, accountant, attorney, chef, construction company, or any number of other service professionals. Why should it be any different for software? Professionals know their craft. Business people know their business - and many know how to do software, woodworking, plumbing, some things legal, accounting, or have dissected a frog - does that mean they should perform surgery? If you've replaced the float in the toilet, should you be replacing your own stack*? So who should decide which tools to apply in software development? Logically, it should be the professional software developer who knows their craft.

However, the situation changes when the cost exceeds the budget of the development department. Other levels of the business have to get involved in the decisions, there are politics at play, the tool may be oversold - then what? Now you've bought a tool without a use! So then you look for ways to use the tool to realize its value...now it's the non-tool-user calling the shots (since they've put their money and reputation on the line) on which tools to use to do the job. Then a partitioning of tool users happens - you have experts in such platforms. Someone higher up the food chain now decides which tool to use and assigns the problem to the expert(s) in that tool. The expert dutifully applies said tool to the problem (which of course is a problem the tool was designed to handle, because when you only know how to use a hammer, every problem is a nail).

What's the better model? Maybe that's better left to another post, but I'll give you a hint - there's no 01001001 in the solution.



*A stack is an air vent that allows the toilet to flush properly, among other things; without it, the suction would be a serious problem. Perhaps it gets blocked by leaves or a bird's nest and you have some issues. They used to make them out of cast iron, which is some really heavy stuff that can maim you if you don't know what you're doing. They've also used galvanized steel, which rusts over time. That rust can also clog the stack. Replacement should be done by a professional only (no user-serviceable parts inside).

Wednesday, July 12, 2017

Video Series is a Go!

Good day! I've been thinking of doing videos as a companion to this blog for some time now (at least a year), and finally got that going! You can find the first video here or below. It has some tips and tricks on using Visual Studio more efficiently. Enjoy and subscribe!



Friday, July 7, 2017

Friday Challenge #1: Data Access

Today, I'm going to do a whole different kind of post - a Friday Challenge! Here's how it works: I'll post a challenge, you take up the challenge (usually by writing some code) and post the results or a link to the resulting repository in the comments. Discussions ensue. We share information. I'll put one up every Friday if there's interest. It's fun and engaging, what can go wrong? With that, here we go:

First Friday Challenge.

Your quest is to write data access code using 3 different "patterns". I'll try to be as language agnostic as possible; the one requirement is to use a SQL-based database.


Background:

In software there are many ways to perform the same functional operation, some are better than others when it comes to change, re-use, and readability.

Scenario:

You have a `users` table with the following columns: id IDENTITY/AUTOINCREMENT INT, name VARCHAR(255). You have the user's id and need to look up the name of the user.

Example:

Given the following users:

|id|name|
|1|Marquart Flitshema|
|2|Loriens Blakens|
|3|Ashima Kithra|

When I look up a user with the id 1,
Then I should get the name "Marquart Flitshema".

When I look up a user with the id 3,
Then I should get the name "Ashima Kithra".

Challenge:

Pattern 1: Everything in 1 class/method/function (this is actually an anti-pattern).

Pattern 2: Pass the SQL and some mapping class/method/function into a different construct (class/method/function) to handle execution. Could be a base class (abstract or not), could be a prototype, any other construct will do so long as it is distinct from the construct that handles the SQL string and the mapping.

Pattern 3: Pass an entire construct into a different construct which handles the execution (by passing the database connection into the first construct).


Each pattern should be consumed similarly to the following, depending on language:

Pattern 1:

db->GetUserName(1);
db.GetUserName(1);
db.getUserName(1);
DataBase.getUserName 1.

 Pattern 2:

  users->GetName(1);
  users.GetName(1);
  users.getName(1);
  Users.GetName 1

  OR

  users->Get(1).name;
  users.Get(1).Name;
  users.get(1).getName();
  users.get(1).name;
  Users.Get 1
  |> user.name.
(getName (dbQuery getUser 1))

Consider the advantages and disadvantages of both approaches.
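To make Pattern 2 a bit more tangible, here's one rough C# shape - all names are illustrative, and notice how the executor knows nothing about users:

using System;
using System.Data;

// Knows how to execute any query + mapper; knows nothing about users.
public class QueryExecutor
{
    private readonly IDbConnection _connection;

    public QueryExecutor(IDbConnection connection) { _connection = connection; }

    public T ExecuteScalarQuery<T>(string sql, Func<IDataReader, T> map,
        params (string name, object value)[] parameters)
    {
        using var command = _connection.CreateCommand();
        command.CommandText = sql;
        foreach (var (name, value) in parameters)
        {
            var p = command.CreateParameter();
            p.ParameterName = name;
            p.Value = value;
            command.Parameters.Add(p);
        }
        using var reader = command.ExecuteReader();
        return reader.Read() ? map(reader) : default;
    }
}

// Knows the SQL and the mapping; delegates execution.
public class Users
{
    private readonly QueryExecutor _executor;

    public Users(QueryExecutor executor) { _executor = executor; }

    public string GetName(int id) =>
        _executor.ExecuteScalarQuery(
            "SELECT name FROM users WHERE id = @id",
            reader => reader.GetString(0),
            ("@id", id));
}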

Pattern 3:

db->Query(userNameQuery);
db.Query(userNameQuery);
db.query(userNameQuery);
Db.Query UsersQueries.NameQuery 1

OR

db->Query(userQuery)->Name;
db.Query(userQuery).Name;
db.query(userQuery).getName();
...

Again, consider the advantages and disadvantages.

Those are just examples in various language formats or pseudo-language formats. I tried to be inclusive, though I'm not as well versed in some languages (for a bit of language fun, look up DerpCode). Feel free to add due diligence such as null checking, exception handling, and whatever else you would like, but try not to muddle up the point too much with all that for now...don't let it get you off track is what I mean.

The code doesn't need to run against a real database, but if consumed by other code it should work (hint: you could use unit tests to drive the code and mock out the database, but there should be some sort of `connect`, `open`, `close`, and whatever else your SQL DB of choice normally has in the mock).

So that's it, the first Friday Challenge...go forth on your quest! If you need any tips, explanations, etc., feel free to put those in a comment and I (or anyone else who feels enlightened to do so) will do our best to get back to you as soon as we can. You should create a repo in GitHub, Bitbucket, JsBin, *Bin, or some other online platform, then share your work in the comments. Please don't link to file downloads, Dropbox, etc...those aren't really for sharing code the way you should be sharing it for this.

Wednesday, July 5, 2017

What is Dependency Injection and Why Does it Matter?

While DI may be ubiquitous among most developers, it recently occurred to me that there are entire batches of newly minted grads entering the industry who, by and large, would not be familiar with the concept. Don't worry, you aren't the only ones! Although this topic has been bludgeoned to death by bloggers, I somehow feel the need to bubble it up here as well - sorry, it's a fun topic for me and I strongly feel that it's necessary to understand the principle. So if you are reading this and thinking - here we go again! Just give it a glance. But if you're like "what's this cool thing about?" or maybe still foggy on the principle read on and see if this doesn't clear things up.


I like to think about Dependency Injection/Inversion (DI) like this: a consumer should not be responsible for producing a resource that it consumes; the execution context should do that.

For a concrete example, consider the need to check whether or not a user is currently enrolled for mentoring. I'm going to be abstract in terms of implementation in order to make things a bit more language and platform agnostic (aside from the example being a web UI)...


We have a web site for managing mentor opportunities. Users who are mentors can navigate to the mentor page. If not a mentor, they cannot go to that page. Also, if they are not a mentor, we want to see if they have enough XP to serve as a mentor - if they do, we encourage them to sign up for mentoring.

There are a few scenarios at play here so let's iterate those with specific examples:


Background:

| user  | XP   | is mentor |
| Tyler | 564  | yes       |
| Julie | 8274 | no        |
| Jeff  | 128  | no        |


Scenario: User is a mentor.


Given user "Tyler" is logged in
When the navigation is shown
Then the user should see the mentors link.

Given user "Tyler" is logged in

When "Tyler" goes to the Mentors page
Then the Mentors page should be shown.


Scenario: User is not a mentor and is not qualified to mentor.



Given user "Jeff" is logged in
When the navigation is shown
Then the user should not see the mentors link.

Given user "Jeff" is logged in

When "Jeff" goes to the Mentors page
Then the user should be redirected to the home page.


Scenario: User is not a mentor and is qualified to mentor.


Given user "Julie" is logged in
When the navigation is shown
Then the user should see the mentors link, but it should be disabled.

Given user "Julie" is logged in

When "Julie" goes to the Mentors page
Then the user should be redirected to sign up for mentoring.

Given user "Julie" is logged in
When messages are displayed
And the user has not accepted the offer to become a mentor
Then an invite to become a mentor should be included in the messages.


Scenario: User becomes a mentor.


Given user "Julie" is logged in
When the navigation is shown
Then the user should see the mentors link.

Given user "Julie" is logged in

When "Julie" goes to the Mentors page
Then the Mentors page should be shown.



That's a lot of description, but it's worth spelling out the various scenarios in this example. Now, back to focusing on the DI part...

So we have a module (note: I'm using "module" to express a class or function or whatever the unit is called in whatever language you use) for each component of the website - the navigation bar, the mentor page, the mentor signup page. Those modules each need information about the user - is the user a mentor, and can they become one? Let's make a User module - one that can be passed into each web module and which can answer those questions. Let's see how that would look:

User:
        isMentor() : boolean
        isQualifiedForMentor() : boolean
        becomeMentor() : boolean

Navigation:
        getLinks(user) : links

MentorPage:
        authorizeUser(user)

MentorSignupPage:
        authorizeUser(user)
        signup(user)

The User module would be passed to the web modules by the request context (usually in the web framework) - the same thing that initializes the given web module.

So, something in the web request process will authenticate the user, load the user data, and pass the user details to the web page module as a User module. That User will be used to answer questions for each module that depends on it.

Notice what we are not going to do here - we are NOT going to call the database from within authorizeUser(), getLinks(), or signup()...those are details those modules simply should not care about! What they care about is whether or not the user is a mentor or is qualified to be one. Even the Signup page doesn't care about HOW to sign up the user, it just needs to direct something else to do the actual signup.
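Here's a minimal JavaScript sketch of those web modules with the user injected - the shapes and return values are my own invention for illustration, not any particular framework's API:

    function Navigation(user) {
        // the user is injected; Navigation never touches a db
        function getLinks() {
            var links = [{ text: "Home", href: "/" }];
            if (user.isMentor()) {
                links.push({ text: "Mentors", href: "/mentors" });
            } else if (user.isQualifiedForMentor()) {
                // qualified but not yet a mentor: show the link disabled
                links.push({ text: "Mentors", href: "/mentors", disabled: true });
            }
            return links;
        }
        return { getLinks: getLinks };
    }

    function MentorPage(user) {
        function authorizeUser() {
            // non-mentors get redirected to the home page per the scenarios above
            return user.isMentor() ? "show" : "redirect:/";
        }
        return { authorizeUser: authorizeUser };
    }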

A level deeper...

The User itself could have some dependencies. Let's say the User needs to send a message to some signup processor - perhaps to a message queue so that the signup processor can pick it off the queue and process at its own pace (so the user won't have to wait for processing to complete).

Let's say that at first, the User handles the signup right there in the web process. Now let's say the site scales massively and running the signup process within the web processes impacts all users of the site due to the additional load on the web servers - we would want to offload that process. When we use DI, we can change how the signup actually happens without changing the consumer (the User module).

User:
    internals/private members/curried functions:
        _mentorSignup/signupForMentoring(user_id)
    public function:
        becomeMentor() : boolean
             calls _mentorSignup -> signup(user_id) or signupForMentoring(user_id)

Whatever happens when a user signs up is passed into the user module by the module that constructs the entire web request handling process (or whatever consumes the User for that matter).

The MentorSignup dependency (the one User is depending on) can either handle the signup itself OR pass off the signup to some other process that runs asynchronously so that the current process can continue execution without delay. It can be changed without touching the User module or any module that consumes User (theoretically).
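A sketch of what that might look like in JavaScript - signupForMentoring is whatever the composition code decides to hand in, and the XP threshold is made up:

    function User(userData, signupForMentoring) {
        // the signup dependency is injected; it could run in-process today
        // and post to a message queue tomorrow - User doesn't change
        function becomeMentor() {
            return signupForMentoring(userData.id);
        }
        return {
            isMentor: function () { return userData.isMentor; },
            isQualifiedForMentor: function () { return userData.xp >= 1000; }, // made-up threshold
            becomeMentor: becomeMentor
        };
    }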

A goal in good system design is to produce a product that can be changed at a minimal cost - because software is soft (malleable, changeable).

In the case of changing the MentorSignup module, it's an ideal situation to be able to change the implementation without changing the consumers. However, there is one problem - if the MentorSignup module is consumed in such a way that changing it means the web modules have to be recompiled or redeployed, then we haven't achieved the ultimate goal. We are still coupled architecturally.

Instead, we could have some communication between the web process and the signup process using a common interchange protocol such as HTTP and a mechanism to tell the signup process that a specific user would like to sign up to become a mentor. That's getting in a bit deeper than DI so I'll back off a little...

So here's what we have now: web modules, a User, a MentorSignup module, and something to build it all up and pass those dependencies in (MentorSignup into User, User into web modules).
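Building on the sketches above, the "something to build it all up" (often called a composition root) might look like this - userStore and signupQueue are hypothetical stand-ins for whatever your infrastructure actually provides:

    // hypothetical infrastructure stand-ins
    var userStore = {
        load: function (id) { return { id: id, isMentor: false, xp: 8274 }; }
    };
    var signupQueue = {
        enqueue: function (userId) { return true; } // imagine posting to a queue here
    };

    // wire the dependencies together for one web request
    function handleRequest(request) {
        var userData = userStore.load(request.sessionUserId);
        var user = User(userData, signupQueue.enqueue);
        return {
            navigation: Navigation(user),
            mentorPage: MentorPage(user)
        };
    }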

We can do some interesting things with that - we can create some fake User code that gives us pre-defined answers so that we can build and test the web modules without depending on all the details of how a User is signed up, how user data is stored and retrieved, etc. We've decoupled the web modules from the dependency on the user data.



How else could we do it? Let's say we started out with the user data in a db and coded everything into the web modules directly:

Navigation:
        lookupUserLinks ... open a db connection, run query, map results, close connection, use results

MentorPage:
        authorizeUser(user) ... open db connection, run query, map results, close connection, use results

MentorSignupPage:
        authorizeUser(user) ... open db connection, run query, map results, close connection, use results
        signup(user) ... open db connection, execute command...open smtp...build email...send email...close db connection...close smtp...open different db connection....execute another command...close different db connection...etc

For one thing, we have a lot more room for errors...and then for duplicating those errors. For another, our web modules would be difficult to understand - what does all this db code have to do with redirecting a user??? Ok you say...so let's pull that out and put it into a User module, call up the User module to run the code...

Navigation:
        lookupUserLinks ... create User module...use User module to query (get user)...use results

MentorPage:
        authorizeUser(user) ... create User module...use User module to query (get user)...use results

MentorSignupPage:
        authorizeUser(user) ... create User module...use User module to query (get user)...use results
        signup(user) ... create User module...use User module to query...use results...create Email module...use Email module to create and send email

At least the "goo" is centralized and has some re-usability - we no longer have a copy and paste scenario. But now we're still coupled to the specific User module that the functions know about directly! So we can't change those without changing the web modules - we can't do cool things like swap the User module on the fly or in test scenarios!

Now we get into programming to abstractions...we've already abstracted the details somewhat, but we've got this wicked coupling to only one specific implementation. So let's level that up and program to an abstraction...

For the purpose of building the web module, we don't care about the details of how to get the user, we don't even care that the user was fetched at all! What we really care about is that we have something to tell us certain things about the user and something that will sign the user up for mentoring. That's the abstraction we care about. We'll get into the details of implementation later in the development process - regardless of what those are, this thing should still work as described above in those Scenarios.

An important note on abstractions - they can leak! While we may only care logically that the user is a mentor, we also care that finding out does not take ages...if it did, that would be a leak in the abstraction! We really want to find out almost immediately when that block of consuming code runs - therefore we may want to load up all the user data BEFORE the web page is shown to the user. And of course we know this needs to be done quickly as well so we'll have to consider that in the implementation of the User module.
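Here's one hedged sketch of plugging that leak, reusing the pieces above (loadUserData is a hypothetical async loader): fetch the user data once, before rendering, so the modules' questions are answered from memory.

    // hypothetical async loader - imagine a real db call behind this
    function loadUserData(userId, done) {
        setTimeout(function () {
            done({ id: userId, isMentor: true, xp: 564 });
        }, 10);
    }

    // load once, up front; afterwards isMentor() is an instant in-memory lookup
    function withUser(userId, render) {
        loadUserData(userId, function (userData) {
            render(User(userData, signupQueue.enqueue));
        });
    }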

Now that we've gotten this far and (hopefully) you can see how DI and many other fundamentals (programming to abstractions) are important to producing good software, let's take a look at one of the primary benefits of having done so...testing.

Now that we've designed the system in a way that depends on abstractions whose implementations are injected, we can swap out the implementation. For each of the defined scenarios described waaay above in this post, we can create a User that gives the pre-defined responses and exercise the code without depending on complicated - one-time only, db interactive - test setups.

We can have setups for those Scenarios similar to:

MentorUser : User
    isMentor(): true


NonMentorNotQualifiedUser : User
    isMentor(): false
    isQualifiedForMentor() : false


NonMentorQualifiedUser : User
    isMentor(): false
    isQualifiedForMentor() : true
    becomeMentor(): true

Well, there you have a whole set of subtypes based on User - the first one needing nothing more than isMentor()! Of course each has its own place...its own domain...we could further break those down to become specific dependencies of each web module, but that'll be enough detail for now...
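To make one of those concrete in JavaScript (a sketch, reusing the hypothetical Navigation module from earlier):

    // hand-rolled fake for the "qualified, not yet a mentor" scenario
    var nonMentorQualifiedUser = {
        isMentor: function () { return false; },
        isQualifiedForMentor: function () { return true; },
        becomeMentor: function () { return true; }
    };

    // exercise Navigation with no db, no signup process, no web server
    var links = Navigation(nonMentorQualifiedUser).getLinks();
    console.assert(
        links.some(function (l) { return l.text === "Mentors" && l.disabled; }),
        "qualified non-mentor should see a disabled Mentors link");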

Think about the DI principle and other scenarios where you may want to swap some dependency based on context...connected to the internet v not connected, testing v production, logging per environment, exception handling per environment, multi-tenant apps...just to name a few.

If you've now come to understand the DI principle then you've gained some valuable XP! Next you'll probably want to exercise the idea...look for something soon that'll help with that (hint: subscribe now to see what happens on Friday)!

Friday, June 30, 2017

How to Write Effective Commit Messages

When committing code to a source code repository, it is generally expected that we write some kind of comment to say what the change is all about. As we scroll through the history of changes we can see the comments for each commit. While there are variants of how we write our comments, I've found that following a simple formula can produce comments that provide the most value.

There are several problems when we write vague or disparate comments - consider it a smell. Either the work being done is nebulous or too big to pin down as in "doing work" or "cleaning up" or "refactor". Or the commenter isn't communicating well or worse yet, has no connection to the output of their work - "made tables" or "wrote code for beta" or "found a better way to do it".

The reason each of these types of comments is an issue is that they don't communicate well to others - including your future self. As we scroll through source history looking for some past breaking change, we'd like to see specifics about each change.

It's good if you write comments like this:

    "[Issue #174] adding user type so Admins can login."

It has the following structure:

    "[<issue>] <what> so <why>"

First, the issue call-out is formatted in a parse-able way - one day you may want to run a query on all comments and consistent formatting will help big time! More importantly, it links the change to an actual issue for reference; otherwise there is no further context for the change. In some version control systems/integrations, you can actually link check-ins to issues. If you've got that set up, it's not strictly necessary to include the issue in the comment.

Next is <what> you are doing. Be brief in the first line. Make this something everyone can understand - make it relevant to users even. In this way, updates can easily be conveyed to users at deployment time! You can put details on subsequent lines if they aren't already in the issue itself or if they would be relevant to further understanding the change - but do so sparingly...the code and comments in the issue should speak for the details already.
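For example, a hypothetical multi-line comment (the details here are invented) might look like:

    "[Issue #174] adding user type so Admins can login.

    Admins were landing on the regular user dashboard; login now checks
    the new user type and routes them to the admin dashboard instead."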

Following <what> is <why>. I really like to include the why part because it allows other developers and your future self to understand at a glance why you made that change. More importantly, it gives product owners/managers some context to the changes that are being made with the same at-a-glance view. Communication with those folks can be facilitated via a dump of changes since last time! Of course with the proper tracking system this isn't the only way...but it is a handy tool for prepping a release.


Consider - we write code for consumers and will have to communicate to consumers about changes; commit comments are a big part of the glue between the incoming work and the outgoing product. Take the time to make them clear and concise for a better overall product experience!

Monday, May 15, 2017

Netflix Your Life

There we were, sitting at lunch talking about the "time v dollars" problem. The conversation wandered between alternatives to "trading time for dollars" and "working from wherever". As we tried to focus in on the ultimate work-life balance equation, my friend came out with the phrase "Netflix your life". Hit it right on the nose!


If you were ever around when live television was a thing that people did, you know that you watched what was on - when it was on. In other words...you put everything else in life on hold to watch your shows when they were on. And of course this meant sitting through commercials; or most likely, timing snacks and bathrooms with commercials. For that time, TV owned your life. And, their goal was to keep you glued to the TV for as long as possible so they could sell ad time at a higher rate (more viewers == more targets == more revenue) - the ratings game.


FFWD to modern times. Back then - aside from the VCR - there was no other way to do it; you were on Network Time. Nowadays, we have a bajillion ways to watch what we want, when we want - we're in full control of our viewing time. We can even watch at 2x speed to cram more video into one sitting. And...it's portable - so we can watch wherever.


How did this change the game? We became more discerning with what we watch. TV got much, much better. And producers can measure exactly what people watch. Furthermore, it should work better for the networks since they get more viewers - albeit with more competition.


If we are able to work whenever and wherever - we have Netflixed our lives.

Monday, May 1, 2017

Quick Math on Time

Let's imagine we have some arbitrary process that takes about 20 mins per day to accomplish. And let's say it is done 5 days a week. That's 100 mins per week, about 400 mins per month, and 5200 mins per year.


I wish I could say that was 52 hours, but we're not quite vibing on metric time in this world so I'll have to stick with our whateverwegot standard time: It's about 86 hours per year...more than 2 weeks vacation for most people, an entire paycheck, a whole Sprint plus more...you get the picture! How much does it cost?


Of course that part depends a lot on what the activity is, and valuation has the same dependency. You just can't automate a lot of processes away, but when you can - turning a 20 min/day task into a 1 minute task (or better yet, 1 hr/year for configuration) - you better believe it can make a difference!


Let's put some $$$ to it...well, there's no easy answer here...but let's say it's a skilled knowledge worker making $100/hr...easy to see that's over $8k/yr. And if more than one person does the same process, that of course becomes a multiplier. A mid-size company might be losing a whole year or more on 20 minutes a day. Let's also suppose this process could be automated in maybe 8-16 hours.




How much does a meeting cost? Taking a guesstimate at some meetings I've attended where about 10 mid-tiers, 6 upper-mids, and 15 lower-tiers (talking salaries here folks) met for an hour each month...that's the equivalent of 31 person-hours (and then some considering the disruption, in which case we're talking 40 person-hours). Plus the space and resources used...we're in the range of $3000+ per meeting...or over $36k/year.
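A quick sanity check in JavaScript, using the made-up figures above:

    var minutesPerYear = 20 * 5 * 52;          // 5200 minutes of the daily task
    var hoursPerYear = minutesPerYear / 60;    // ~86.7 hours
    var taskCost = hoursPerYear * 100;         // ~$8,667/yr at $100/hr
    var meetingCost = 3000 * 12;               // $36,000/yr for the monthly meeting
    console.log(taskCost.toFixed(0), meetingCost);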




Automate or cut the fat...Kaizen is the practice of reducing (eliminating) waste. One way to cut waste is to automate, another is to eliminate, or lean-up your process. If you have an opportunity to do either, you stand to win big by freeing up resources so they can spend more time on more important things - like going to the beach, surfing, mountain biking...or what I mean is uhm...delivering more value, yeah that's what I meant.


Could that meeting have been cut down somehow? Probably not totally, but with some impact analysis we could maybe save some people the trouble of having to attend. Would it be worth addressing? Probably not in this case. The point of this exercise is to illustrate that not all waste can be reasonably eliminated. Plus there is a cultural value in the meeting as it served a secondary function: aligning otherwise disparate departments around a common goal (as well as moderate hegemony).

Friday, April 21, 2017

Beyond Kanban

In the last lesson learned, we saw that Kanban is easy. It's easy because it only describes how work flows - there's a whole depth to it with value stream mapping, prediction, planning and measurement but that's really not that complicated either as we'll see soon enough. What Kanban cannot solve is how the work is understood.

In manufacturing, the origins of the culture that spawned Kanban, the product is always the same...therefore (much like a McDonald's) there are finite, well defined steps that produce a specified outcome every time (within some degree of acceptance). In software, there's the same sort of problem but the product is different every time...

Since the product is different every time (if it wasn't, we wouldn't need to build something new) the "specified" differs every time, as does the "acceptance" and the "degree". Stephen Connolly eloquently describes this issue in this blog post:

https://dzone.com/articles/bug-vs-feature-the-acceptance-criteria-game?edition=292905

As Stephen describes by way of the problem with the shields on the USS Enterprise, the understanding of the feature will need to be clarified with the upstream party. Add a button to the screen, make it do something. Also make sure shields can be activated in other ways - different interfaces. Ah...so the upstream party - shield system engineers (Scotty?) - needs to make sure those shields can be activated from multiple sources. Now the shield becomes downstream for the activation call. The shield engineer specifies interaction protocols, maybe defines some security practices - who makes that decision? Now it might be a deeper matter of verifying who is calling to raise the shields...is it Borg Captain Picard? Real Captain Picard? What's the override protocol? Does it need to change for this? Now it gets complex...what was a button is now a security and safety concern since we're opening up access to a critical safety system...

Guess what? This is what Lean Change Management is all about - managing this sort of change without invoking filibuster (or bike-shedding, if you will). Map impact of the change, radiate change info, talk to people about the change, make changes small so they succeed. But I digress...

Point is, when we go beyond the initial ask (the happy path) and we have different requirements for each change, as Stephen points out, things need to be explicit. Kanban doesn't specify how this is done in software, but other processes do - to a degree.

In TPS (Toyota Production System) all the details are worked out well before the manufacturing process starts. In software, that's generally not a good way to work. I recommend taking a bit from Scrum and breaking down the details at the start of some iteration...in Kanban there is no iteration, but there is a work week so why not use that? Go through the feature stack at the start of each week. Fill in details about the top features and break down the features into work stacks with details. If the stack starts running low mid-week, put in a kanban for feature breakdown. Stop the line, break down features and reiterate. This way there is continual flow.

One thing this does is force small changes. Small changes are critical to agility - the two are mutually inclusive. You have to be able to ship and switch gears to be agile - by definition.

On a personal note, I've always found the Sprint (of the Scrum process) awkward due to the fixed time box. Because of that fixed time box, we're forced to jam ill-fitting work into that box. Often we over-scope because we're inherently optimistic - to a fault. It makes more sense to me to make landmarks out of shippable features or feature sets. Estimation comes from measurement of past sizing...instead of asking "how long will this take to develop" we use past data on "how long will something this size/complexity take to deliver", a more important prediction by far. It's also easier to measure - when did we start and when did we release it and actually start to realize its value (lead-time). That's what really matters anyways.

Thursday, April 20, 2017

How to Kanban

Kanban - unless you've been in a prison camp for the past few years, you've at least heard about it. Its origins are in manufacturing - where they have to repeat the same task over and over (major implications). It's been ported to software development. The basics of the entire TPS (Toyota Production System) are about reducing waste and Kanban is one small part of that entire principle - Kaizen (reducing waste/continuous improvement) is one of the other important principles that make the TPS successful. In order to truly port those principles into software development, some of them need to be applied differently (in a software environment sort of way) in order to optimize throughput and reduce waste (the main goals of the TPS).

The start is the end...in all ways. Start building your process with the end in mind and with the realization that YOUR process will depend on YOUR environment. Perhaps your environment benefits greatly when there is waste or when things take longer to accomplish, IDK. I'm only taking the happy path into consideration here - where your goals are to reduce waste and increase throughput (without reducing quality or increasing costs).

Most commonly, in software, the application of MVP (Minimum Viable Product) is used to reduce waste and that is a good start. Sometimes, and probably not rarely, important parts (artifacts) are cut out of the process under the guise of reducing waste. It's been covered already so I won't go there. I will focus right now on the kanban itself...the card.

A kanban is a card that represents a work order from downstream. Kanban move upstream and results move downstream. The kanban is a visual, tangible representation of something that a downstream step needs in order to do their work to deliver working product. All the way downstream sits the consumer. In software, data flow one way, kanban flow the other way.

The Kanban board is a visual way to see what's going on, but it's located centrally somewhere so everyone can see what work is in process. This works when the team is co-located, but can easily fall apart when distributed. Each area should generally have its own board. These boards represent requests for work. Downstream should have a prioritized queue of all the work, and those cards should be maintained by whoever is directing or requesting the product (the program manager, "the customer", the product manager, the BA, whatever). The thing is, kanban move upstream in a pull system...usually starting at the end of the dataflow...the end result.

For each work request, you need more than one card...one for the board and one for the person doing the work. The one for the board is so that everyone else knows where something is...when it's being worked on, the person/pair/team doing the work is indicated by placing their card on the work card...they get a copy of the work card.



Let's take a quick walkthrough of this process in action -
Let's say we're doing a User Management thing. First story is add users - an "Add Users Page" kanban goes to the UI team. "User Persistence" goes to the backend team just before the UI team really needs persistence. Integration happens, goes out for testing. QA almost immediately sends a card to the UI team, "View Users", because they can't see the users they just added. The UI team puts a card in to the backend team, "Get Users", and works on the view while waiting for data. The UI team gets back the "Get Users" card and integrates. QA gets the "View Users" card and checks it out. Now someone has the idea to edit users so those cards flow up and the feature flows down. Disable user becomes a thing and so on and so forth - cards flow upstream, features flow downstream.

That's it. That's all Kanban is. It sort of solves some things, but really there's a lot more to the whole TPS system that is of GREAT value to software teams if one has the impetus to use it...continuous improvement, Gemba (go to where the work is)...Gemba is a good one. It's just walking over to someone's desk and seeing what they're doing and how they work - understanding their work and the challenges they face. Pretty easy, right?

Here's a good one - Stop the Line! If something isn't working right, stop the line and get it fixed - crash the problem, meaning everyone gets to it, and if anyone has helpful ideas please let them be known.

Continuous improvement - always look for little gains in efficiency and reducing waste. Maybe it's wasteful to have 4 manual steps in some process that could be automated and would reduce process time 80%. If that process is repeated 3x per day, that's a lot of waste to reduce.

The moral? Doing Kanban is about an entire cultural alignment around Lean practices. It's pretty easy to do with some basic principles...and some obvious benefits...once you understand the nature of it.


The next daily lesson learned goes deeper into the details.

Wednesday, March 29, 2017

Talking 'bout that Functional Spec

What's a spec?

A spec is a reference for what the software should do.

Who uses the spec?

Everyone who is working on the product. Devs, QA, BAs, Managers, Marketing, etc.

How do I know it's good?

Here's a test - QA should be able to use the scenarios in their tests directly. They should be able to create accounts based on the characters, and follow through their routine activities and the non-routine activities as described in the spec.

A sample



PizzaMeNow is an app for ordering pizza online. Users can order for pick-up or delivery. Users select how many people they're feeding and what toppings are good to go. They pay for their pies through the app (paypal, CC, maybe bitcoins or whatever else).

Scenario: Big Party.

Rick is having a party for the big game. He has 20 friends coming over and wants to serve up some great pies. He opens up PizzaMeNow and plugs in his phone number and address, fills in 20 for how many people and selects the "party varieties" toppings (which includes the usual suspects that you find at any pizza party - cheese pizzas, sausage pizzas, pepperoni pizzas, supreme). He's ordering ahead a couple hours, the party starts at 4 so he selects the 6:00 delivery time slot that same day. He sees the note that says it could get there +/- 15 mins of the selected time, he's ok with that and proceeds to checkout. He uses his PayPal account to checkout. 6:05, the party is on and the pizzas arrive - now it's time to really party hearty! 6 grande sized pizzas from the Pizza Shack, plenty to go around!

Scenario: Lonely Chad.

Chad is newly single and kind of nerdy. He is a bit down and feels like staying in for the weekend and watching all of the Hobbit and LOTR movies in order. It's going to take a while and he doesn't want to have to break mid-action to make food. He logs into his favorite food app PizzaMeNow and orders pizza for lunch and dinner for both days. He places multiple orders and pays for them all at once. He wants it all fresh and hot, so he selects 1 person on each order and declines the "enough for leftovers" option. He prefers a bit of variety, so he puts in different ingredients with each order (somehow matching the theme of each movie that will be playing while he's eating each pizza). He confirms the four orders and checks out using his bitcoin account.

Scenario: Hungry Kids - "Pizza Again..."

Judy is a working mother with four kids and a hungry husband. She has the drill down for Friday nights...family game over pizza, babysitter and out. She uses PizzaMeNow regularly so she has a profile. She uses the "Reorder" option to place the same order as last Friday. She has the autopay option set up with her CC info so it just works. She is hoping one day that a simple text message from her phone will replace the bother to go into the app to place her order, but she is happy to be up and running so quickly (and get her order in while the car is stopped at a red light - maybe she is one-of-those-people or maybe she is in the passenger seat).


Each of these scenarios tells a user story that describes how the user will actually use the app. I'm using scenario here but User Story could work too; however, there is app interaction involved, and user stories typically describe the need without describing how the user interacts with the app...you know the type - As a mother of four plus a husband, I need to place our usual order quickly so that I can get on with my life... Ok, so these scenarios are a bit more like use cases combined. Use Case 1: payment. Use Case 1.1: pay with PayPal. Use Case 1.2: pay with CC. Use Case 1.2.a: pay with Visa...and so forth.

We'd really like to have scenarios for fails too ...

Scenario: Angry Dad.

Jim's wife Deborah ordered a pizza through PizzaMeNow and it's over 15 minutes past the selected time. The app said +/- 15 mins and so now it's late and Jim is hungry...hangry even. He searches the app furiously for a contact us link. There is no link to be found. He goes online immediately to the app store to give the app a terrible review...just then the phone rings. The driver is lost trying to find the place, meanwhile Deborah receives a notification that their next order will receive a discount automatically along with an apology for the delay in receiving their order.

Scenario: Can't pay.

Voldo is a bit of a scatterbrain...he put in his order, but forgot that his CC was maxed beyond maxed when he used it to pay for the pizzas he ordered through PizzaMeNow. He was in such a hurry that he didn't even wait for confirmation before exiting the app. He receives a text message saying that his order was declined and that he should supply another form of payment. He taps the link to open the order in the app and is immediately brought to the order confirmation page. He proceeds to payment and uses a different payment method. This time he gets the confirmation in the app and all is well in his world.

We can see, based on these scenarios, the various features that we will have to develop in order to deliver to our users (pun intended). As the fictional users exercise each feature, we can begin to understand how they will be using the application. While writing these scenarios we can imagine the user experience when, say, they can't find their CC to pay for something - so we know we would want them to be able to come back to their order and pay later when they've found it. As we develop our scenarios further we can go into more depth of detail with each feature - perhaps create scenarios involving only one feature (though that usually requires some context).

This sample is not a complete spec, but only describes some possible real world uses of the application. It can get everyone in the mindset of thinking about how to design, develop and test the app. Consider, for example, how we would test the "Angry Dad" scenario...would we want to wait the whole delivery time plus the 15 mins to test that feature? Or would we want some control over those configurations so that we can just FF to the late state and trigger the email and phone call (btw: did the app call the driver and the customer or was that all the driver?)

Monday, March 6, 2017

Functional Programming like an OOP-er?

So, I've been learning much about Functional Programming techniques. Elm-lang is a great budding FP language that transpiles to javascript and is optimized for the browser - haven't seen much in the way of server-side packages for it...but then again, I haven't really looked.

One key feature of the FP languages I want to riff on is Currying - basically it's where you create a new function that does the same thing but with fewer parameters, because some of them have been fixed in advance. (Strictly speaking, fixing arguments like this is partial application - currying is the transformation that makes it possible - but the idea below is the same.)


In javascript, for example if you had the following function...

    function saySomething(message, person){
        // e.g. build a greeting from the message and the person's name
        return message + ", " + person.name + "!";
    }

you could curry the message and create a new function to sayHello

    var sayHello = function(person) {
        return saySomething("Hello", person);
    }

Well, that's all well and fine. Our sayHello function would only be able to say "Hello" as that would basically be set in stone for this "instance" of saySomething. I said instance, didn't I? Couldn't think of a better OOP way to put it. It's also called a closure, isn't it? Basically we are creating a function object that has a locally scoped value of "Hello".
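Incidentally, JavaScript has partial application built in via Function.prototype.bind - the first argument is the `this` value, which saySomething ignores, so we pass null:

    // bind pre-fills "Hello" and returns a new function awaiting person
    var sayHello = saySomething.bind(null, "Hello");
    sayHello({ name: "Li" }); // same as saySomething("Hello", { name: "Li" })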

The original sayHello could also be written like this (which is a bit more re-usable):



    var sayHello = currySaySomething("Hello");

    function currySaySomething(message) {
        return function(person){
            return saySomething(message, person);
        }
    }

    //or if you like

    function currySaySomething(message) {
        return function(person){
            var _message = message;
            return saySomething(_message, person);
        }
    }

And in this way, a new function is returned as a closure over the locally scoped variable "message", ready to call saySomething with it. It can be done another way...this one is straightforward OOP in javascript.


    var helloSayer = new HelloSayer();
    helloSayer.sayHello({name:"Li"});

    function HelloSayer() {
        var _message = "Hello";
        return { 
            sayHello : sayHello
        };

        function sayHello(person) {
            return saySomething(_message, person);
        }
    }

Well, that's an OOP way to do it, but it still meets the same goal of immutable state that's important to FP. But if you think about it - currying is really setting the internal "state" of a function. It's kind of like using a Factory, although in the example below it might be a bit of overkill - you could just use new Sayer("Hello"), but that's not the point of a Factory. If you need a Factory, there it is.


    var factory = new SayerFactory();
    var helloSayer = factory.build("Hello");
    helloSayer.sayHello({name:"Li"});

    function SayerFactory() {
        return { build : build };
        function build(message) {
            return new Sayer(message);
        }
    }

    function Sayer(message) {
        var _message = message;
        return { 
            sayHello : sayHello
        };

        function sayHello(person) {
            return saySomething(_message, person);
        }
    }

I guess what it comes down to is how you use the objects. One could have objects that hold internal immutable state - and in an OO language, that kind of discipline is often necessary to avoid overly-complex, unreadable code.