Tuesday, June 23, 2015

Highlighting in Visual Studio

I found my old friend Lime Green Highlighting again in Visual Studio today. I've been working with some pretty hairy code and found that Lime Green really helps when hunting for the end brace of a 4,000-line if statement...oh boy.




Tools > Options > Environment > Fonts and Colors > ....


From here you get a dialog with a variety of options for changing colors and fonts (Captain Obvious). I use Brace Highlighting (the one with a background setting) and change the background color to Lime Green. I can really tell where a statement begins and ends with that one! Especially since I use a slightly grey background to save my eyes.


Here's the (admittedly weak) reference from MSDN:
https://msdn.microsoft.com/en-us/library/ms165337.aspx

Monday, June 22, 2015

When NOT to Use noSQL

...or more precisely, when not to use an eventually consistent concurrency model.


I've been reading a lot lately about trying to fit a square peg into a round hole. What I mean by that is teams trying to force noSQL databases, particularly ones with an eventual consistency model, into transactional operations.


I've been reading about two-step patterns and other hacks, and about second-generation noSQL databases with built-in transactions. I want to state an obvious point: how about not using eventual consistency when you need a transaction? When you need a transaction, use a model that supports one - it's plain and simple! When you don't need a transaction and you do need a dynamic model and/or high throughput, use noSQL.


It seems to me that some architects don't quite understand how to pick the right tool for the job. A mistake as epic as allowing "sophisticated users" to post multiple concurrent requests to transfer assets, and having them all go through for lack of concurrency control on a single entity, can cost a fortune. Search the web for bitcoin heist or similar and you will find out how many millions of dollars it can cost.




I've had a concurrency issue manifest at a very small scale - one that did not involve the transfer of assets and did not cost a fortune. It did introduce a seemingly random defect, and the backing store was not even noSQL.




I admittedly did a read-then-insert-or-update without wrapping the entire operation in a transaction. I did this to simplify the API, so that a single Save method was exposed instead of separate insert and update operations. The client makes calls asynchronously, and perhaps a user clicked five times or there was some other technical issue; the result was five records added to the collection. This further illustrates the need for transactions (and for using them properly).
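To make that concrete, here's a minimal sketch of the fix in C#, assuming a SQL Server table accessed via ADO.NET (the Widgets table, column names, and connection handling are illustrative, not the code from the actual incident). The point is that the existence check and the write happen as one atomic unit:

using System.Data;
using System.Data.SqlClient;

public class WidgetRepository
{
    private readonly string _connectionString;

    public WidgetRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    // Save = insert-or-update. A serializable transaction takes key-range
    // locks on the SELECT, so two concurrent saves of the same id cannot
    // both see "not found" and both insert.
    public void Save(int id, string name)
    {
        using (var connection = new SqlConnection(_connectionString))
        {
            connection.Open();
            using (var tx = connection.BeginTransaction(IsolationLevel.Serializable))
            {
                var check = new SqlCommand(
                    "SELECT COUNT(*) FROM Widgets WHERE Id = @id", connection, tx);
                check.Parameters.AddWithValue("@id", id);
                bool exists = (int)check.ExecuteScalar() > 0;

                var write = new SqlCommand(
                    exists
                        ? "UPDATE Widgets SET Name = @name WHERE Id = @id"
                        : "INSERT INTO Widgets (Id, Name) VALUES (@id, @name)",
                    connection, tx);
                write.Parameters.AddWithValue("@id", id);
                write.Parameters.AddWithValue("@name", name);
                write.ExecuteNonQuery();

                tx.Commit();
            }
        }
    }
}

A leaner alternative is a unique constraint on the key (handling the violation on insert) or an atomic MERGE; either way, the check and the write must not be separable.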


So why not consider polyglot persistence and apply the right persistence tool for each job? There is a lot of good information on how to design a system that can take advantage of different backing stores and completely abstract the data store from the application. With such a design, your system can use transactions when it needs them, and eventual consistency and distributed data when those fit.



Thursday, June 18, 2015

A Quick Funny Thing

I've been letting my hair grow just a bit lately. It's about 10-13 cm now. At a gathering today someone asked me if my hair was new. I, of course, said "only the hair closest to the scalp." Another person in the group kindly reminded her that I am a software developer.

Wednesday, June 17, 2015

BDD - Tagging Features in SpecFlow

I've been working with SpecFlow for a couple of months now on a passion project, and I'm starting to get a handle on how to manage features and how to use tags for better organization.


Without further ado, I'll dive into an example:


Suppose you have a requirement with some business rules that determine whether a project can start. These rules determine canStart, a boolean. There's another feature that sends notifications on the project's scheduled start date. The recipients and the contents of the email are determined by the project's canStart value. Other logic is also driven by canStart, including UI elements and some unicorns and trolls.


How can we organize these features and still keep track of every feature that uses canStart?


Firstly, whatever rules determine canStart should be defined in their own feature, since that sounds like it could be really complex logic that is highly volatile.


That feature could be tagged @canStart, @businessLogic, @volatile, and whatever else fits.


The notify-on-start-date feature would live in its own .feature file. It would have several scenarios, each tagged with several tags including @canStart, since that is a dependency for the notifications.
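For illustration, here's roughly what the two .feature files might look like (the feature names, scenario text, and tags other than @canStart are made up for this sketch):

# ProjectStartEligibility.feature
@canStart @businessLogic @volatile
Feature: Project start eligibility
  Scenario: Project with an approved budget can start
    Given a project with an approved budget
    When start eligibility is evaluated
    Then canStart is true

# StartDateNotifications.feature
@canStart @notifications
Feature: Start date notifications
  Scenario: Notify the project team on the scheduled start date
    Given a project that can start
    When the scheduled start date arrives
    Then the team receives a start notification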


When you need to change some logic that depends on @canStart, you can just do a tag search and voila! You now know all the dependencies!

Tuesday, June 16, 2015

Scoping a Project - To Size Right, Start at the End

Today's lesson learned is about how to scope a project under resource constraints in order to keep it as lean and beneficial as possible. This is largely about determining the minimum viable product (MVP).


The MVP of software is not the person who writes the most SLOC or has the lowest defect density. It's certainly not the person who stayed late for weeks to deliver a feature. To the contrary! The MVP of software is the one who, through Socratic method or whatever it takes, gets the size of the product whittled down to the core: to the minimum deliverable product without any wasted effort; with only useful features; with the lowest cost, highest quality, and most business value. (See this previous post for more about the importance of quality.)


Joking aside, here is some seriousness about MVP - Minimum Viable Product (we really need a better TLA for that).


Identifying the MVP is critical to keeping down costs and/or impact to other efforts. It will result in earlier delivery of value, higher quality due to narrowed focus, and decreased initial effort. All of these attributes are linked in a tight Tango dance.


In order to achieve the MVP, we start at the end game. What is the most important output of the project? Which feature or set of features delivers the most value, and how can that be delivered first? Gold plating is a plague on software; we've all seen it in those features we never use, and some of us ;) have even been guilty of adding the gold plating ourselves (I confess).


So, raise your right hand, place your left hand on the screen, and repeat after me:


From this point in time and into the future, <state your name> will be sure to ask: "What is the one most important deliverable?", "What can be deferred?", "Is this feature feasible from a business perspective (can the business support it)?" I will refuse to accept "all of it" as an answer unless it is backed by sound reason. I will not add features I deem "useful to the users", nor will I suggest such features be added without proper analysis. If they become necessary, they will be implemented later. I will determine the end game before I start.


This is not an easy feat; politics will likely be at play. The boss's offspring, a valuable client, or a controlling interest may have something grand and glorious in their heads. They may insist. They may not be easily swayed. This is why the Most Valuable Player is the one who can keep things under control and still be a hero in everyone's eyes.

Sunday, June 14, 2015

Security Feature or Imposition?

I'm torn. I can't decide. In my experience it hasn't worked the way it is supposed to. At least three times I've been blocked from using my debit card at a checkout due to "fraud protection". It's frustrating, it's embarrassing, and it's infuriating!


Yet there was a time when my credit card number was stolen and duplicated. The duplicated card was being used all over a city I've never been to. The bastards were running all over making small purchases. I noticed this and reported the issue. They didn't catch it, I did.


This last time I was prohibited from accessing my funds, I was at a grocery/everything store that I frequent. I'm there once a week, or perhaps every other. We've been making some extra purchases lately - birthdays, spring cleaning, etc. - so the bill is a bit higher, but not by much.


I actually have had no say in this "protection". They just block a purchase, inconvenience everyone around, then call and demand my identity and annoyingly have me verify purchases or something else. This last time I got a text at 1am while I was asleep - long after the attempted use. I replied the next day when I woke up and saw the text. Why did it take 5 hours to get the text?


I've had several frustrating phone calls with the fraud people. I'm almost ready to call it quits and use cash. Is it worth the inconvenience for some added possibility of security? Is this really a feature, or an imposition on how I can use my money? Even if it is making things more secure, is it worth the cost? It would be great if I had some measure of control over this "fraud protection"; then I might be willing to call it a feature. Right now it's just some other folks sitting around looking at data, deciding how and when they can block my purchases. We as humans can do better, I'm sure.


Looks like I'll be changing banks soon, at least, and trying my luck somewhere else.

Thursday, June 11, 2015

Can You Name 3 Threats on the OWASP Top 10?

I've been asking this question and not getting good results. I've asked software developers who are making apps. I've asked students. It seems that security awareness just isn't on the agenda.




A security engineer commented on this to me today and gave me some insight. Basically, either you like to know how things work, in which case you eventually get interested in SQL injection, XSS, security misconfiguration, and the rest, or you don't. If you do, you're likely to be very interested in checking out the OWASP site.

Tuesday, June 9, 2015

Builder Pattern - a Real Life Example

Here is a real-world example of using a pattern to set up dependencies that need user input in order to function.


Consider an application that contacts a server for state information. Let's use a console interface for this example.


The app consumes an interface that defines a method to get server state. The user interface layer should know nothing about the implementation details, but it needs to pass the server name to the implementation. We want one instance per server we interface with: if we connect to five servers, we should have five objects.


Those objects can have different methods, and for efficiency they should be reusable. Each instance holds internal state - the server name - which is an implementation detail, and the instance can only be used for that server (think multi-threading).


We should avoid constructing the object in the user interface code, because the UI would then know too much about the implementation and be too tightly coupled to its details. Additionally, we would not be able to pass a mock object for testing purposes, so we need some way to get the user-provided information to the code that processes the request.


Instead, we can define a builder interface that builds the dependency from user-provided data. The Build method accepts a server name, constructs the server object passing the name to its constructor, and returns the server instance to the caller.


The caller uses the returned object that implements the server state interface to retrieve information about the server it knows about.


In C# code form:



// Note: Services, ServerInfoReader, ServerInfoLogic, and ServerInfoReaderFake
// are the domain and implementation types; their details are omitted here.

interface IServerInformationReader
{
    Services GetServices();
}

interface IServerInformationLogic
{
    bool IsServiceRunning(string serviceName);
}

interface IServerInformationLogicBuilder
{
    IServerInformationLogic Build(string serverName);
}

namespace N5D.ServerManagement.Builders
{
    // Production builder: wires the real reader into the logic component.
    class ServerInformationBuilder : IServerInformationLogicBuilder
    {
        public IServerInformationLogic Build(string serverName)
        {
            return new ServerInfoLogic(new ServerInfoReader(serverName));
        }
    }
}

namespace N5D.ServerManagement.Tests.Builders
{
    // Test builder: wires in a fake reader so the logic can be exercised
    // against known data.
    class ServerInformationFakeBuilder : IServerInformationLogicBuilder
    {
        public IServerInformationLogic Build(string serverName)
        {
            return new ServerInfoLogic(new ServerInfoReaderFake(serverName));
        }
    }
}


This allows us to create the object in multiple ways depending on our use and the implementation details. Those will not be known to the consuming code but will be abstracted by the builder.

class ApplicationCode
{
    readonly IServerInformationLogicBuilder _builder;
    IServerInformationLogic _serverInfo;

    public ApplicationCode(IServerInformationLogicBuilder builder)
    {
        // The builder is injected, so tests can supply a fake one.
        _builder = builder;
    }

    public void DefineServer()
    {
        // The server name is only known at runtime, which is why the
        // dependency is built here rather than injected directly.
        string serverName = GetServerNameFromUserInput();
        _serverInfo = _builder.Build(serverName);
    }

    public void IsServiceRunning(string serviceName)
    {
        Console.WriteLine(_serverInfo.IsServiceRunning(serviceName));
    }

    static string GetServerNameFromUserInput()
    {
        Console.Write("Server name: ");
        return Console.ReadLine();
    }
}




This enables us to pass a different builder to the application code depending on whether we are testing or running in production. The component that fetches the info from the server has that sole purpose, while the component that computes IsServiceRunning is a logic detail. In this way we can return known data with a fake reader and test the logic.
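As a sketch of what such a test might look like (the NUnit attributes and the exact behavior of ServerInfoReaderFake are assumptions for illustration):

using NUnit.Framework;

[TestFixture]
public class ServerInfoLogicTests
{
    [Test]
    public void IsServiceRunning_ReturnsTrue_ForServiceTheFakeReports()
    {
        // The fake builder wires in ServerInfoReaderFake, which returns
        // a known set of services regardless of the server name.
        IServerInformationLogicBuilder builder = new ServerInformationFakeBuilder();
        IServerInformationLogic logic = builder.Build("test-server");

        Assert.IsTrue(logic.IsServiceRunning("KnownService"));
    }
}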


The logic in this example is deliberately simple, but it illustrates a situation where a builder is appropriate. The builder itself can be injected with an IoC container in each context, but the reader must be constructed at runtime from user input. That's when you need a builder.
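For example, with a container like Microsoft.Extensions.DependencyInjection (any container with similar registration semantics would do; this is just a sketch):

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Register the builder up front. The IServerInformationLogic instances
// it produces are created later, at runtime, once the user has supplied
// a server name.
services.AddSingleton<IServerInformationLogicBuilder, ServerInformationBuilder>();
services.AddTransient<ApplicationCode>();

var provider = services.BuildServiceProvider();
var app = provider.GetRequiredService<ApplicationCode>();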

Monday, June 8, 2015

Measuring Success

A team works for months on a software project which is nearing completion. The PM is pushing toward the goal of getting the beast into the wild. Finally, the day has come...it's up, and an all-out celebration ensues. The PM declares the project a success and everyone does shots and pats each other on the back.




Fast-forward two years. The product needs some changes. Several bugs need patching, some features need modification, and some new functionality is desperately needed in order to make the product live up to its promise.


The core team is gone. The PM has moved on, the momentum of a successful project propelling him/her upward at another company. No one asked the questions about the long outlook, and now it's here, staring the current team starkly in the face.


From an ROI perspective, this product is a complete failure: it was projected to return its costs in 18 months and instead has cost more in terms of construction AND ongoing costs. Plus there were indirect costs that not a single person measured.


This fictional scenario may or may not sound familiar to you or someone you know. If so, try the following: take deep breaths and count to 10 slowly. Feel better? If not recurse, else read on.


Let's consider a few parameters of a project whose rationale is cost savings (as opposed to increasing income). What parameters might one expect? Let's put those into some sort of cost-benefit model:


costs: Building the Beast, Maintaining the Beast, Running the Beast, User Impact, Morale Impact


benefits: Lower Insurance Premiums, Lower Risks, Increased Productivity, Staff Reduction


Some of these properties are one-time, such as build costs; others are recurring, like licensing, user impact, and lower premiums. Some are readily calculated, others not so much (looking at you, morale). For the not-so-quantifiable properties, let's introduce a fuzz factor that modifies a cost value. Costs are negative numbers, benefits are positive numbers. Fuzz is +/-. For estimation, make the numbers relative.


Building app: -40, fuzz 10, range -30 to -50.
Increased productivity: 3/month, fuzz 2, range 1 to 5.
etc...


I won't bore the reader with details here; I'll leave those as an exercise. In the end, compute the median, best-case, and worst-case scenarios once fuzz factors are applied to all parameters of one-time and ongoing costs and benefits. Take the high range of costs and the low range of benefits for the worst case, and vice-versa for the best case.
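As a toy sketch of that computation in C# (the two parameters and their numbers come from the example above; everything else is an illustrative assumption): because costs are negative and benefits positive, the worst case is simply every value minus its fuzz, and the best case is every value plus its fuzz.

using System;
using System.Collections.Generic;
using System.Linq;

// Negative values are costs, positive values are benefits.
// Fuzz widens each value into a range; recurring values repeat monthly.
record Parameter(string Name, double Value, double Fuzz, bool Recurring);

class FuzzyCostBenefit
{
    static double Total(IEnumerable<Parameter> ps, int months,
                        Func<Parameter, double> pick) =>
        ps.Sum(p => pick(p) * (p.Recurring ? months : 1));

    static void Main()
    {
        var parameters = new List<Parameter>
        {
            new("Building app",           -40, 10, Recurring: false),
            new("Increased productivity",   3,  2, Recurring: true),
        };

        int months = 24;

        // Worst case: high end of costs, low end of benefits. With costs
        // negative, both reduce to "value minus fuzz"; best is the mirror.
        double worst  = Total(parameters, months, p => p.Value - p.Fuzz);
        double median = Total(parameters, months, p => p.Value);
        double best   = Total(parameters, months, p => p.Value + p.Fuzz);

        Console.WriteLine($"Worst: {worst}, Median: {median}, Best: {best}");
    }
}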


Once all the ranges are established, we can see that an ROI in the worst case can never be achieved, due to negative ongoing impacts. The main drivers of long-term costs are quality related. The important qualities that affect long-term costs are user experience, technical design, and technical debt. Some costs are hard to measure but easy to explain.


Suppose users of a system have had regular interactions with other people, but in the course of automation those interactions have been significantly reduced. There is certainly a benefit to having face-to-face interactions with certain people. Take the automated phone tree as a prime example.


While I'm certain automation has reduced the direct costs of having staff answer phone calls, some of the best companies have turned back toward having real people on the other side in order to elevate the brand experience. Why? It increases brand value. Sure there is a cost, but there is also a cost to automation: frustrated customers who feel no loyalty. We've all been there.


How much loss has there been as a result of automated phone trees? Fuzzy, right? Let's say in the worst case you lose 10/yr, at a cost of 300 for implementation. Your ROI is never achieved, and in fact returns get more negative over time, because recovering lost customers is more costly ground to make up than retaining them. Short-term gain, long-term loss. Fuzz factor: 5/yr. Range 5 to 15. What if the loss is 15 due to implementing automation? That's a rapid fail!


Now let's say two employees leave as a direct result of product support frustrations. Now you have loss of productivity and cost of replacement, which is considerable collateral damage. Not only that, but let's say other projects are directly impacted as a result of the resource drain needed to support the fruits of the project. Would you still deem this project a success after it has been truly tested in the wild? Now the celebratory shots become drinks just to get to sleep at night.


Let's reconsider the project parameters...


Long-term success should be defined up front and measured throughout the life of the product. The recurring parameters are the most important factors over that lifetime. Consider the recent air-bag recalls - how much have those cost in the long term? Would a higher initial cost have prevented the damage? Apply that mindset to your projects.


Spending more up front can reduce long-term costs, so long as the spending focuses on the qualities that will drive those costs. Plan defensively!

Choose Wisely What You Build

"...software is hard."

- Knuth, Donald (2002)
13 years later and software is still hard. It's hard for a number of reasons, including people, communications, and people communicating. In any case, it's also costly. Because of the costs (initial and ongoing), any feature and any new product should be weighed carefully against those costs.


Adding up the costs:



People are likely going to cost more than equipment, licensing, and incidentals. Good people are hard to find and expensive to keep. People are needed in order to build and maintain software. Software is soft, so it will change; people are needed to change it. Software is hard, so it will be hard to build and maintain. It will be hard, so it will be expensive. If it is not expensive to build, it will be poor quality, and it will either be hard to maintain or it will not be valuable.


Measuring value:



Software is valuable when it either produces income or reduces costs. Predicting cost savings might be the easiest part - still hard, but not as hard as predicting the cost of building the software or the increase in income. There are too many uncontrollable variables in predicting income-producing software (market conditions, competition, advertising success, etc.), so measuring its cost benefit is more difficult.

The cost benefit of cost-saving software will depend on usability and quality, in addition to the initial cost and the cost of ownership. How can one measure the success criteria in terms of usability and quality? If there are predefined, quantifiable criteria for gained efficiency, then by weighing them against the loss of some personal experience for the users and the cost of building and maintaining the software, we can arrive at some ROI. I would propose that what the end users lose is not easily understood or quantified. Whether it is the loss of personal experience, the human touch, or the ability to respond to change, it could have an adverse impact.


What we can do:



Analysts considering a new feature or product should consider all of the costs, and have a thorough understanding of the benefits, before beginning a project. However, many of the factors are not easily quantifiable and will remain fuzzy. Where this is so, ranges must be given, and certain usability thresholds must be achieved before a project is considered a success.

A template can be used with values in cost and benefit categories, and estimated ranges assigned, in order to perform a valuation. Properties such as analysis cost, construction cost, maintenance cost, end-user loss costs, and incidental costs can be weighed against properties such as productivity gains, increased income, and loss reduction. When considering a product, each feature may have its own values so that features may be added incrementally. Break the work down into smaller pieces to reduce risk. Start with the features that are more valuable or less fuzzy, and stop when appropriate.