Friday, December 19, 2014

Is Better Software Your Desire?

On Jim Bird's post titled "If you could only do one thing to make better software, what would it be?" on his blog Building Real Software (which has some fantastic posts and of which I am newly a fan), I left the comment below and thought I would share it here as well.

Good software should be able to change as and if the business needs change. The long-term benefits of writing software that is already decoupled, or can easily be decoupled (modular, reactive, whatever) are greater as the software can evolve at a lower cost. Good software has a lower TCO than bad software. A lower TCO can be achieved by either not changing or by changing readily. Bad software is ok as long as it doesn't need to change or isn't critical.

That being said, better software for the given situation can be achieved by connecting the developers/architects/engineers with the users and the business in a way that allows them (us) to understand the place of the software in the context in which it is intended to operate.

If you were hiring a building architect and engineers and builders to build a structure, the purpose of the structure would define the design parameters. If the building is a shelter for farm equipment, then one could likely buy an off-the-shelf structure. Conversely, if the structure is intended to house offices and strive for energy efficiency, then the project managers must find adequate resources to design and build such a building. In the latter case, the costs and time will certainly be far greater. The product should be expected to net a positive gain over the course of its tenure at the selected location.

In software, where many parallels are made to structural architecture, it can be said that the expected cost and time should reflect the expected lifetime and ROI of the product. ROI can be actual profits or losses avoided. These factors should drive the qualities of the code more than any other, regulatory compliance aside.

Better software can and should be driven by better project management.

Wednesday, December 17, 2014

Reactive Manifesto Irony

While diving deep into the Reactive Manifesto I encountered a link that resulted in the familiar no-connection error. Well of course, I don't have a connection with my phone in the tunnel, so this is what I get. If the developers followed the manifesto, however, I would get a different response.

This gets a bit into the heart and soul of reactive systems. Perhaps in a reactive design, the browser would do a few things differently. It might check for a connection first, then check to see if a DNS server is available, and do the lookup. If not available, it might give the option to try later automatically and let you know when the page is loaded for viewing. Why should the user wait and watch the progress bar while the page loads? Just deliver the page after it's done!
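To make that concrete, here's a rough C# sketch of the "fetch it when you can and tell me when it's ready" idea. The URL, retry interval, and helper names are all made up for illustration; a real reactive client would be smarter about connectivity checks and back-off.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class BackgroundPageLoader
{
    private static readonly HttpClient Http = new HttpClient();

    static async Task Main()
    {
        // Fire off the request and get on with other things; we're told when it's ready.
        Task load = LoadWhenAvailableAsync("http://example.com/",
            html => Console.WriteLine("Page ready (" + html.Length + " chars)"));
        Console.WriteLine("Doing other work while the page loads...");
        await load;
    }

    static async Task LoadWhenAvailableAsync(string url, Action<string> onLoaded)
    {
        while (true)
        {
            try
            {
                string content = await Http.GetStringAsync(url); // throws if we're offline
                onLoaded(content); // notify the user only when the page is actually ready
                return;
            }
            catch (HttpRequestException)
            {
                // No connection (or DNS failure): back off and retry automatically.
                await Task.Delay(TimeSpan.FromSeconds(30));
            }
        }
    }
}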

Think of some of the newer automated phone systems where the system will call you back when an operator is available so you don't have to drag on the line listening to distorted elevator music and the same repeated message until they can catch up to you. This is part of being reactive. If the phone centers were truly reactive they would be able to add operators as demand increased in order to decrease wait time/increase capacity. A browser might not be able to scale in a similar way, but it can behave in an asynchronous way with respect to the user.

Wednesday, December 10, 2014

nosql document storage locking

MongoDb locks at the database level, so more databases may be better.

http://docs.mongodb.org/manual/faq/concurrency/

CouchDb does not lock, but uses MVCC (multi-version concurrency control), which means each update writes a new version of the document. This gives automatic auditing, but bigger database files; there must be a cleanup mechanism (CouchDB calls it compaction).
http://docs.couchdb.org/en/latest/intro/consistency.html?highlight=locking

I was looking for a document-level locking mechanism, but it probably wouldn't make sense. Imagine a map/reduce that passes over one or more docs that are receiving updates. I guess the read would need to wait if any of its read targets were locked, but that would mean checking for locks on each doc, which could be costly. Not sure, just a hunch.

Thursday, December 4, 2014

The Mythical Cloud

The cloud, in its modern form, is relatively new to me. I've been doing some research and exploration in this space and "trying on" a few of the offerings in Windows Azure and gained some insight into AWS. Here are a few of the things I learned along the way - some of the luster and myth of cloud platforms appear to be either marketing fluff or misunderstandings. Following is my take on it. I'll be stating what I found to be benefits, costs, and myths.

IAAS and PAAS benefits

environment setup - this is extremely simplified, especially when you need something that is readily available as a pre-packaged vm. Otherwise, you still need to set up your own vm and manage it like any other vm environment. From a PAAS perspective, the infrastructure is generally ready to go. Developers targeting PAAS need to build the software in a way that can take advantage of the scalability in PAAS. There may be some learning curve, but overall not much.

access from anywhere - depending on the security models and service levels involved, cloud offerings may be made available from anywhere. This may not be true of a private cloud; however, there are some tools available to develop code right in Azure's online portal. This version of Visual Studio has some bells and maybe a few whistles, but is certainly not full blown. It can do some HTML and JavaScript editing, check in against a repository, and has IntelliSense.

scalability - this is the big sell, IMO. If the app is built in a way that can scale nicely (see Reactive Manifesto and dig further for more details), then the cloud and particularly PAAS truly has value!

built in management and monitoring - there are some great tools built into the platforms for monitoring applications running on them. Although it seems that you get what you get, perhaps it is possible to develop your own custom tools to extend the dashboard - I haven't dug in yet.

cost - this is the big sell to the business from a financial standpoint. Arguably a bigger sell than scalability. Not much to say other than pay for what you use. You may have to manage your scaling actively to really benefit though, depending on a number of factors that you can probably sort out so I won't dive in just now.

IAAS and PAAS drawbacks

less customization - PAAS especially.

relies on public space - availability, security. In a recent conversation, a colleague mentioned shared drives and data and the privacy rights of clients. The bottom line was: what if the feds have to confiscate the disks for some other tenant and your clients' data is on there? Good point. Hybrid solutions may be the answer there, but then you're really considering the added complexity of data management.

maturity - knowledge, resources, help. This is becoming less of an issue as cloud technologies mature, but still worth consideration - else this blog post would have no value and I wouldn't be doing it besides.

myths

configuration and management - it's not as simple as some would have you believe. This seems to be the main myth from the technical standpoint.

patches and breaking changes - if patches are automatically applied in PAAS and we don't have control of when and how, then breaking changes automatically break my apps. Is this a real problem or am I making this up?

cost - cost savings may not be guaranteed. The cost of hardware is always on the decline. Reduced staff may not be as simple as claimed, because cloud resources still need to be managed. Usage may be far from optimal due to poorly written code or poorly managed cloud resources.

So, the story I've told is that the cloud is great at what it does. That cloud vendors boast about who is using it does not support any specific need to use it. Cost savings and ease of use depend on your own parameters. The decision, like any major decision, should be based on understanding and analysis of costs and benefits for your own organization. It is important to understand the marketing so that it does not misinform the decision to go to the cloud, particularly if you already have infrastructure in place. If you are a developer or technical person, use the cloud in any way you can, because that is the way to be truly informed - through experience.

Wednesday, December 3, 2014

Is there a cloud myth?

There's a new kid on the buzzword block. He's been lurking in the shadows for a bit but he's taken center stage. His name is Cloud. He is a mystery and as we learn more about him, he remains more mysterious. This Cloud seems like he can do just about anything, he can do so much! He hosts databases and applications, offers insights and metrics, he is everywhere and nowhere all at once. He is a myth and a legend. It seems he can do it all!

Even with this Cloud guy around, though, there's still software that has to do things with data and communicate with other software and people. That aspect hasn't changed, so why suddenly does the cloud have all the answers? The issues I see with software aren't with the software, they're with people.

Software can be whatever we make it. It can make good decisions for us, it can stay awake always, it can crunch numbers and transform data, it can provide endless hours of entertainment and automate almost anything - as long as we enable it to. The disconnect is that most people don't know how to enable it or quantify things in a way that's consumable by software. Sure, we're getting better at that and some people have this thing mostly figured out, but there are so many who don't, making countless decisions under the gun, that the magic bullet of the cloud isn't going to save anyone.

In fact, there is no magic bullet. No application, nor platform, nor engineer can solve all your IT problems, no matter what their marketing departments are telling you. This is the myth they build to sell product, when we know that in reality it's only a shift in complexity. Any system has a finite lower bound of complexity. I define system in this sense as a collection of modules that perform some activity. Any oversimplification below this bound would cause incompleteness in the system.

Complexity costs. Costs in software are usually determined by effort. The best way to reduce costs is to simplify the business processes that dictate the software. Complex business processes cause software to become more complex, which results in more effort in analysis, planning, development and maintenance. A reduction in the impact of business rules and constraints on software will result in lower up-front costs and TCO.

Despite the myth of simplification, the cloud will NOT make your software better, whether you developed it or bought it. It will NOT make your users more aware and tech savvy. And it will NOT make your apps better or more maintainable. It WILL reduce the cost of hardware and space that you maintain. It WILL fail sometimes. It WILL reduce certain maintenance procedures, but trade them for others.

Tuesday, December 2, 2014

Handling Changing Data in Applications

I encountered a defect yesterday that led to some thoughts about handling computation source data that is external and subject to change. The scenario is as follows:

The defective application stores work done and sends billable work to a billing system after monthly batch approvals. The rate billed depends on the shift and position of the person doing the work.

Some shifts/positions are billable and others are not. Only billable time is sent to the billing system. A billing code is computed and sent to the billing system along with other data. The code is derived from a lookup table keyed on shift and position.

An employee who billed time can change from a billable shift/position to a non-billable one in the interim between the work done and the export to billing. The export can only be performed after the approvals are done by the supervisors. This is where we have our defect, and there are a few ways to resolve this.

One way to resolve the issue, in my opinion the best way, is to look up the data from the source system (which is not the application in question btw). In this case we would need to connect to the HR system which we don't currently do and which requires considerable changes to the enterprise architecture in order to accomplish. The right way is of course the most difficult.

Another option, which is less intrusive, is to save a copy of the data in our application and use our copy of the data to compute the result. This has its own flaws. If the source system is incorrect and is retroactively corrected, then our copy is also incorrect but may never be corrected. This would be rare in our case and probably not worth haggling over, but could be disastrous in other systems.

Another consideration with the latter approach is which data to store. Do we store the computed data or the source data? To answer this question, we need to consider the source data and the lookup data. How likely is each value to change? What is the outcome if the data does change? Will it have a desirable or undesirable effect? For example, our billing system may change and the values in the lookup table that we send may need to change to accommodate.

In our case, saving the shift and position at time of work done seems like the best option. But remember that we are making copies of data. A better approach is to get the data from the source. There are other things to consider here as well. Remember how the billing code and the shift and position data are a lookup in our application in question? Is there another system that should house this data? The billing system? Some other intermediary? Or is the work tracking system appropriate?

The answer in this case is that it depends on who manages the data and the merging of the data from the sources. In any case, whatever system is used to house the merged data should make it available to other systems via a standard data transfer interface so that it can be queried on demand without making copies.
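Going back to the snapshot option: here is a rough C# illustration of what saving the rate-determining fields at the time of work might look like. All class, property, and method names are hypothetical, not taken from the actual application.

using System;

// Hypothetical sketch: snapshot the rate-determining fields when the work is
// recorded, so later shift/position changes can't alter what gets billed.
public class WorkEntry
{
    public int EmployeeId { get; set; }
    public DateTime WorkedOn { get; set; }
    public decimal Hours { get; set; }

    // Copied from the HR data when the entry is created, not looked up at export time.
    public string ShiftAtTimeOfWork { get; set; }
    public string PositionAtTimeOfWork { get; set; }
}

public static class BillingExport
{
    // Stand-in for the lookup table keyed on shift and position.
    public static string GetBillingCode(string shift, string position)
    {
        return shift + "-" + position; // placeholder value for the sketch
    }

    public static string CodeFor(WorkEntry entry)
    {
        return GetBillingCode(entry.ShiftAtTimeOfWork, entry.PositionAtTimeOfWork);
    }
}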

Friday, November 21, 2014

Design with the customer and the 99% rule

Today I had a meeting with the customer that I consider a victorious event. I was thrown into an iteration to address some backlog items this week. I have about a month to do this, so I started out working with the customer to refine the product backlog items for this iteration.

The product in question is an internal application and the customers are another department within the organization. I have seen some of this application before, but I am generally unfamiliar with the workflow and the terminology involved. We began by getting me up to speed with the workflow. Next, we tackled the more specific issues one at a time. Some of the backlog items are still a bit fuzzy, but we were able to design acceptable solutions for the ones we went over.

Having the primary business resource at hand was integral to designing successful solutions. The three of us were able to quickly assess the situation, with code and data at hand, and get to the real solutions without having to ask too many "what ifs?" that clog up the development process. In the past I've asked every "what if?" I could think of until I was over committed to the point of under-delivering. This could have been avoided simply by having the customer at hand to discuss the solution.

Thursday, November 20, 2014

How To Update NuGet in a Solution

If you build on a build server and see some error like:

EXEC: 'Unity' already has a dependency defined for 'CommonServiceLocator'.

You probably have a NuGet.exe that's out of date. Here is how to update it with Visual Studio 2013/2012.

Step 1: Go to the Solution folder on your local and remove the .nuget folder (or rename to .nuget.bak until later in case you need some custom stuff from .config or .targets).





Step 2: Open solution in Visual Studio.

Step 3: Right-Click Solution from Solution Explorer and select the "Enable NuGet Package Restore" option.

 
 
*say yes in the dialog that follows

Step 4: Check-in your changes.


There you have it! The easiest way to update nuget in a Visual Studio solution.

Friday, November 7, 2014

Suddenly - AOP, WCF, and Service Orientation

Suddenly all roads in my path of knowledge lead to AOP. It is all around me at the moment. Is it a trending topic in software, or have I just not been around the right circles until now?

Anyway, now to expand on the AOP-in-WCF topic that I introduced a couple posts ago.

After writing this, I felt compelled to add that this post is broad due to the sheer amount of information I processed this week. I apologize for not being more focused, but there were so many lessons learned, and I feel I need to catch up since I've been neglecting this blog lately.

It's all cool that you can hook into the message processing pipe in WCF. That means you can intercept messages on the way in and out. You can access the headers, log the message, log details, log exceptions, and more. That's the same thing you get from the IoC way of doing this. So why should you host a bunch of services just to get AOP? It turns out that you don't have to host them separately.

WCF has the netNamedPipeBinding, which uses binary serialization and can be hosted in the same process as your consumer. So... if you have your business logic divided into a few domains, you could put each in its own project that can be hosted in the same process as any workflow.

WCF has some built in logging at the message level and you can implement your own writers or use built in log writers. Or you can build up your own interceptors and do all the same pre-call, post-call, exception logging and auditing as you would with any class method.

The added benefit of using WCF in proc is that you can use the headers to share information and log that. If you do this, then you always have the option of switching to remote hosting to scale out. Perhaps one specific service has a long-running process and it needs to continue after the root application closes.

Consider a windows app that runs on a workstation in the intranet. The application has some process that eventually becomes a long-running process because the business rules changed. It needs to run async because of user experience. However, the user may fire-and-kill (pull the plug, end task, switch the power off), or the process becomes a resource issue, or the business logic changes so frequently that it needs to be centralized for cleaner deployments (I'd consider the original design of not centralizing it poor in the first place, but this was the first example that came to mind).

In any case, we can now offload the process to a scalable server environment with just a few changes to the configuration settings. It's still in the internal network, only now it's not local to the process or the machine. This implies at least three possible deployments, but there is at least one more, and subclasses of that.

The service can live on the internet as well as the aforementioned inproc, on machine, or in network locations. If the service is over the net, it implies public accessibility. It could be hosted in the cloud or on a remote network. If it's on the net, in the cloud, you could tunnel to it and really take advantage of scalability.

I'm not making any strong recommendations here about where to host your services. These are just some of the available options. Consider what is appropriate. If you have an intranet and don't need to make your service publicly accessible, http in any case is not necessary and will only slow your communications down. InProc with net.pipe will offer the best communications performance. There's no reason you couldn't host the same service in more than one application, or even the same one.

Here's how the code looks to start the inproc service. http://msdn.microsoft.com/en-us/library/ms731758(v=vs.110).aspx
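Along the lines of that MSDN sample, a minimal self-hosted net.pipe service might look like the sketch below. The contract, service, and address names are illustrative, not from any real project.

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string DoWork(string input);
}

public class MyService : IMyService
{
    public string DoWork(string input) => "processed: " + input;
}

class InProcHostDemo
{
    static void Main()
    {
        // Host the service in the same process over named pipes.
        using (var host = new ServiceHost(typeof(MyService), new Uri("net.pipe://localhost")))
        {
            host.AddServiceEndpoint(typeof(IMyService), new NetNamedPipeBinding(), "MyService");
            host.Open();

            // Call it from the same process through a channel factory.
            var factory = new ChannelFactory<IMyService>(
                new NetNamedPipeBinding(),
                new EndpointAddress("net.pipe://localhost/MyService"));
            IMyService proxy = factory.CreateChannel();
            Console.WriteLine(proxy.DoWork("hello"));

            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}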

I was thinking about spinning hosts up in a web app, but this may be better to run as a windows service.

  • A host may spin up other hosts for its dependencies. You can use IoC to create them: IMyService service = container.Get<IMyService>();
  • Perhaps consider disposal once it's no longer needed.
  • Let WCF handle threading, there are settings for that.
  • Can run as Task for calling async, but callbacks are possible so why not use those if desired, just don't dispose of the service before the callback.
  • Transactions can be coordinated across services via the DTC. There could be locking issues, so be careful to avoid deadlocks. Perhaps any transaction should be encapsulated in a service method if possible. The transaction could originate outside of the call or inside: using (var scope = new TransactionScope()) { clientProxy.DoWork(); scope.Complete(); } Just make sure it flows into the service. A commit within the service for a flowed transaction does not commit the transaction, it only votes yes. See DTC for more info (gtfy).
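To make the transaction bullet concrete, here's a small sketch of flowing a TransactionScope into a WCF call. The contract name is invented, and the binding must also be configured to allow transaction flow, which isn't shown here.

using System;
using System.ServiceModel;
using System.Transactions;

[ServiceContract]
public interface IWorkService
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void DoWork();
}

class TransactionFlowSketch
{
    static void RunInTransaction(IWorkService clientProxy)
    {
        // The transaction originates outside the call and flows into the service.
        using (var scope = new TransactionScope())
        {
            clientProxy.DoWork();
            scope.Complete(); // only votes to commit; the DTC coordinates the real commit
        }
    }
}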
In this post, I provided an introduction to the possibilities of doing AOP in WCF along with reasons to host chunks of code in a service. I provided points about hosting the services in process when appropriate, as well as a few other points in general about WCF.

This information is not complete without mentioning the important consideration that this is a service-oriented approach to programming, which is generally more procedural in style, in contrast to object-oriented programming, which is more functional in style. While the services can be more or less OO inside, they generally should capture a specific unit of work. In some cases the unit of work is generic and in other cases it is very specific. The unit of work may be a composition of several services, or wholly encapsulated in one service. The services may be more generic and shared, or very specific and not shared.

Now that the whole surface has been scratched, it should be clear to the reader that there are compelling reasons to use WCF and to understand service oriented programming as it evolves into its future form.

Wednesday, November 5, 2014

A bit about AOP in .NET

Today I learned a little about Aspect Oriented Programming (AOP) in .NET.

Basically, AOP is where you can add functionality to your system without altering the business logic code. It allows developers to add cross-cutting functionality with a clean separation of concerns. For example, if you want to log all exceptions in a certain way, you could either wire up the logger in all your methods, or treat exception logging as an aspect that you can add to all methods.

The way you hook in the exception logging would be to write your logging method, write your aspect method to wrap method calls in a try-catch, then use either an IoC container to wrap your classes or an IL rewriter to rewrite your compiled code to wrap them.
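Here's a hand-rolled sketch of that interception idea using a decorator; all names are invented. An IoC framework's interception feature essentially generates this kind of wrapper for you instead of you writing it by hand.

using System;

public interface IOrderProcessor
{
    void Process(int orderId);
}

// Business logic only - no logging concerns in here.
public class OrderProcessor : IOrderProcessor
{
    public void Process(int orderId)
    {
        if (orderId <= 0) throw new ArgumentOutOfRangeException(nameof(orderId));
        Console.WriteLine("Processing order " + orderId);
    }
}

// The "advice": exception logging wrapped around the real implementation.
public class ExceptionLoggingOrderProcessor : IOrderProcessor
{
    private readonly IOrderProcessor _inner;

    public ExceptionLoggingOrderProcessor(IOrderProcessor inner)
    {
        _inner = inner;
    }

    public void Process(int orderId)
    {
        try
        {
            _inner.Process(orderId);
        }
        catch (Exception ex)
        {
            Console.WriteLine("Process(" + orderId + ") failed: " + ex.Message);
            throw;
        }
    }
}

class AopDemo
{
    static void Main()
    {
        // A container would normally resolve IOrderProcessor to the wrapping proxy;
        // the wiring is done by hand here to keep the sketch self-contained.
        IOrderProcessor processor = new ExceptionLoggingOrderProcessor(new OrderProcessor());
        processor.Process(42);
    }
}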

The added functionality is called advice, where you hook in is called a join point, and you can filter join points to get pointcuts. A pointcut is a specific point that you want to target (like a specific method with certain input values).

Here's a bit about each approach:

IoC - IoC frameworks wrap your implementation of an interface in a proxy that implements the same interface, and when you request the implementation from the container, you get the proxy. If the framework offers a way to intercept the method call, you can add your functionality to the method implementation.

IoC frameworks take a bit of work to set up and work best if you write good OO. Once you have that, you can rock your socks off and stop repeating yourself.

IL rewrite - you write your code, then your wrappers. When you compile the assembly, it needs to run through some post-compile process to weave the aspect code into the business code.

Generally, AOP is beneficial since it allows you to separate system logic from business logic - functional-requirement code from non-functional-requirement code. Also, when unit testing your BL, all the other stuff can stay out of the way if you do it right (keep them separate until you integrate, then test the integrations).

I would recommend keeping the BL, the AOP, and the wiring code each in separate assemblies so that they can be composed into different arrangements (one for unit testing and one for production). Adding the aspect should be configurable, so you could even have different wrappers for each environment. This is a clear drawback of automatic IL weaving on build if it does not allow it to be done dynamically.

Another approach is to apply AOP to message-based interactions by intercepting the messages. This can be done in WCF by hooking into the message pipe. The forthcoming Actor Model implementation known as Orleans may or may not offer the ability to intercept messages; I haven't had the opportunity to explore it yet.

In summation, AOP is a great way to apply SoC, so plan your development efforts accordingly and you'll be able to use it later without modifying too much code to make it hook in correctly.

Tuesday, November 4, 2014

AO & SO

While attending a course on advanced WCF and Service Orientation (SO) via IDesign, the instructor dropped a couple of terms - "Actor Model" and "Microsoft Orleans". I did the search thing and found that there is a relationship between SO and Actor Orientation (AO).
AO (my own term) seems to be similar to OO but with distinct boundaries. It's been around awhile and seems to be making its way back to the surface.

Orleans is an implementation of AO in Microsoft Windows Azure. It was used in Halo 4. Read more here. The value proposition of Orleans is that all Actors in the system are virtual - they exist even if they don't, and they are distributed. This implies that some state data is stored in a way that can be retrieved from any node in the network. All messages between Actors result in a promise that should ultimately be fulfilled. Once it is, some callback is fired.

These concepts should be familiar to those who are following the js community. CouchDB, a nosql document store, uses an eventual consistency model to provide support for distributing databases anywhere for horizontal scaling. Essentially the db can be materialized locally or anywhere access to another copy is available. EmberJS, an SPA MVC framework, uses promises to populate views asynchronously. I'm probably missing something big with AO, like remote code execution or some other gigantic benefit, but I'll definitely be learning more about it. Likely I'll have more to blog about when I learn more. Will tag AO and Actor Model.
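For a taste of the promise/callback style in .NET terms, here is a tiny Task-based sketch - not Orleans code, just the general pattern, with made-up names.

using System;
using System.Threading.Tasks;

class PromiseSketch
{
    static async Task Main()
    {
        // A Task is .NET's version of a promise: send the message, attach a
        // continuation (the "callback"), and carry on without blocking.
        Task<string> reply = AskActorAsync("what's the score?");
        Task callback = reply.ContinueWith(t => Console.WriteLine("callback fired: " + t.Result));

        Console.WriteLine("doing other work while we wait...");
        Console.WriteLine(await reply); // or simply await the promise
        await callback;
    }

    static async Task<string> AskActorAsync(string message)
    {
        await Task.Delay(100); // simulate a remote actor processing the message
        return "reply to '" + message + "'";
    }
}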

Wednesday, October 29, 2014

Who chooses your tools?

What would you say if I paid you to change a tire on my car? Ok, change a tire on my car but you can only use one of those multi-tools that has pliers/wire cutters, a few knife blades, screwdrivers, a leather punch, a file, and tweezers. What do you mean you can't do it? The tool does everything! The folks that made the tool say it does. They cut a wire in like 2 seconds and it has a leather punch...A LEATHER PUNCH!!! So obviously you can change a tire with it since it can remove nuts. Oh, btw change all four tires. Needs an oil change too. Only the one tool that does it all, that's what you'll use for that also. Maybe some new brakes, they've been a little squeaky lately. I failed to mention that the keys are locked in it and it's out of fuel but why would that be important? So, when will it be ready? How about in a couple hours? A couple hours sounds right to me. I never changed a tire before, but I have put fuel in a car and that only takes like a minute so a couple hours to change the tire and all the other stuff should do. Oh wait, you're using one of those magic bullet tools, so of course it'll take half the time. I'll pick up the car in an hour.

An hour later... What!? It's not ready yet? What have you been doing? The super-tool has an issue? Oh you've called the manufacturer and they said everything will be ok. Good, because I really like the idea they have with that does-everything-in-half-the-time tool. BTW, did you see the YouTube video they made? The one that's all sped up where the guy using the super-tool carves out a surf board in like 40 seconds then goes surfing!? I mean how awesome was that! So you'll be done in no time because obviously you are of equal or greater intelligence than the surf board guy, after all you know me don't you? So you must be awesomer! See ya in 20 mins buddy!

20 mins later... Ok this is getting ridiculous, perhaps another person with another tool, 2 people with super-tools would surely get this wrapped up in no time. I need this done, I have to lead a parade with this car in 20 minutes! I made plans after I figured this would only take an hour given your superior intellect combined with the most versatile tool EVER!

Finally... Well I missed the parade, but it's good to have my car back. I'll see you in a bit when I need the muffler replaced and to get it waxed. 😎

Thursday, October 23, 2014

Recursion, really! Because it doesn't have to be complicated.

I ran across something recently that went like this:

someMethod(a[], b[]) {
    do c with a and b;
    x[] = getTheArray();
    bool MOE = false;   // "meets condition of egress"
    while (!MOE) {
        do c with a and x;
        MOE = does x meet the condition of egress?
    }
}

Of course c was some repeated code, not its own method - but it didn't have to be either. I ended up refactoring the method body to look something like the following:

someMethod(a[], b[]) {
    if (a and b meet some condition) {
        do c with a and b;
        x[] = getTheArray();
        call this recursively with a and x;
    }
}
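Here's a concrete (if contrived) C# version of that refactor, with DoC and GetTheArray standing in for the real work - both are invented for illustration.

using System;
using System.Linq;

class RecursionExample
{
    static void Main()
    {
        ProcessAll(new[] { 1, 2, 3 }, new[] { 4, 5, 6 });
    }

    // Concrete version of the refactor: handle the current pair, fetch the next
    // batch, and recurse until the egress condition (an empty batch) is met.
    static void ProcessAll(int[] a, int[] b)
    {
        if (a.Length == 0 || b.Length == 0) return; // egress condition
        DoC(a, b);                                  // the formerly repeated code
        int[] next = GetTheArray(b);                // get the next batch
        ProcessAll(a, next);                        // recurse instead of looping
    }

    // "c" from the post: whatever work is done with the two arrays.
    static void DoC(int[] a, int[] b)
    {
        Console.WriteLine(a.Sum() + b.Sum());
    }

    // Shrinks the batch each call so the recursion terminates.
    static int[] GetTheArray(int[] previous)
    {
        return previous.Skip(1).ToArray();
    }
}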

Friday, October 17, 2014

on my local machine

Today the lesson is to have a plan to restore your local machine in the least intrusive way. I had a drive failure this week. I was able to back up files before replacing a single drive with 2 smaller SSD drives. I put all user files on the second drive and all install files and programs on the C drive. I have some options ahead of me with respect to a backup plan for files. I can add a mirror drive, or schedule backups to a network drive, or both. That's the easy part.

The harder part is to reinstall all the software. The desktop team will install the OS and all the basics, and I get to add all the dev tools. That takes at least a day.

One option is to use virtual machines. The whole team would benefit from that strategy since the image would have all the software installed. If anyone needs a vm with the developer image, there it is. The tough part is to keep the image maintained with all the updates and new tools.

Yet another option is to install via a script. With a script install, the latest versions of all the goodies would be kept on a file share. The script would point to the latest (rename latest-whatever-setup.exe or .iso or .msi), or the script would read a config file located on the share. The script would compare already-installed software against the config and install/update as necessary. This approach would be slower, but less maintenance than an image.

Why not combine the last two? Maintain the central list for updates and keep a clean image vm somewhere that runs the script (or do that manually after every update has been tested). This way, any catastrophic failures can be recovered from, and all devs can update to the latest and greatest with the click of a button.

Tuesday, October 14, 2014

Unit Test Guidelines

For reusable code, unit tests should have more coverage. The more reuse a piece of code will have, the more the coverage. In a centrally used service or widely shared library, be sure to cover every parameter and every combination of parameters that makes sense. For the following parameter types, here is some basic guidance to start with:

string - null, empty string, special chars especially ()'/\."{}<>*@,:;!?&%$# -- // and do not forget "O'Leary", short strings, long strings, strings with numbers, leading numbers, leading chars, ending chars, \r\n, ...

whole numbers - -max, max, -1, 0, 1, null, (-max - 1), (max + 1)

decimals, floats, doubles - 0.1, -0.1, null, 0, 0.0, -max, max, (max + .1), specific precisions, culturally specific formats (Germany uses 1.000,00 not 1,000.00)

all numbers - rounding errors, divide by zero

datetime - null, 1/1/1, 01/01/0001, 99/99/9999, 12/12/9999, today, yesterday, tomorrow, culturally specific formats (dd/mm/yyyy, etc), non-Gregorian calendars if applicable, "what day is it?", 1/1, so many time formats... Be sure to always use a datetime, never a string that represents the datetime (unless it's the long representation of the datetime in .Ticks), and use UTC whenever possible.

Here's a tip for UNIT TESTING classes with internal dependencies - break the dependency out. If your dependency isn't mockable via a mocking framework because it is not an implementation of an interface, create a testable subclass and override members to set up your tests. To make that possible, mark getters and other methods as virtual and ensure at least protected access to the method or property; a quick sketch follows.
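Here's a minimal sketch of that testable-subclass technique. The class names and the exchange-rate example are invented; the point is the virtual member being overridden by the test double.

using System;

public class InvoiceCalculator
{
    // Virtual so a test subclass can substitute the dependency-backed value.
    protected virtual decimal GetExchangeRate(string currency)
    {
        // imagine this hits a web service or a database in production
        throw new NotImplementedException("real rate lookup lives here");
    }

    public decimal ToUsd(decimal amount, string currency)
    {
        return amount * GetExchangeRate(currency);
    }
}

// The testable subclass: override the virtual member instead of mocking an interface.
public class TestableInvoiceCalculator : InvoiceCalculator
{
    protected override decimal GetExchangeRate(string currency) { return 1.25m; }
}

class InvoiceCalculatorTests
{
    static void Main()
    {
        var calc = new TestableInvoiceCalculator();
        decimal result = calc.ToUsd(10m, "EUR");
        Console.WriteLine(result == 12.5m ? "ToUsd_UsesExchangeRate passed" : "failed");
    }
}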

In Retrospective

A retrospective is a meeting at the end of an iteration where the team gets together and discusses what worked, what didn't, and what could be done to improve. It's pretty standard fare. Perhaps there is a danger in deciding too many things at the retrospective, where there isn't necessarily time for everyone involved to truly think on any of the issues discussed or to truly understand any of the ideas for improvement that have been proposed. It could prove beneficial to revisit some of what was discussed before implementing any serious changes to process.

If you have a well-defined process in the first place, it will be documented in a clear and concise way with proper change management protocols built in. I recommend creating a single document that has a diagram and supplemental text that describes the process at a level which would rarely change. There is a balance between the level of detail in the process definition and maintenance costs/usefulness. Unless you need the extra detail due to regulations, keep it simple enough so that it won't need constant maintenance for every nuance of change, but include enough detail so that everyone knows who does what, when, and sometimes how. The how should be described separately from the process document. If you are using standard procedures, just call them out by name, or link to the doc, or both.

For example, a release process could be defined as follows: The release should have a release coordinator, a release engineer, other technical parties, and a qa engineer. A pre-production meeting should be held before the release and should cover what is going out and when; the schedule should be reviewed and everyone should have a copy of the schedule. The coordinator should set up a meeting during the release and communicate to all parties involved throughout the release. The coordinator shall communicate to all external parties according to the release schedule. Etc... This should also be diagrammed, with definitions for each role and domain-specific terminology, or a link to such a glossary.

Tuesday, October 7, 2014

SOC of MVC

Model, View, Controller implies separation of concerns between the business logic and the presentation layer, but it's easy to mix them if you're not paying proper attention. One way to mix them is to put all your business logic in the controller. Since MVC is a pattern for presentation, you can separate your layers further by abstracting the data and the business rules from the entire layer. Let your controllers consume contracts and DTOs, or Domain Models if you're into DDD. But map the DTO or domain model to a view model that's specific to the views in the UI. The views should only consume the view models or domain models that are contained within the app. Mapping should be defined separately and not attempted within the controllers, since a view can be reused and, by implication, so can its model. If the mapping is separate, then it can be reused along with them.
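A bare-bones sketch of that separation might look like this. The names and the service interface are made up, and no specific MVC framework is assumed - the controller simply returns the view model rather than an ActionResult to keep the example self-contained.

using System;

// Contract and DTO coming from the service/domain layer (names are invented).
public interface ICustomerService
{
    CustomerDto Get(int id);
}

public class CustomerDto
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// View model shaped for one specific view.
public class CustomerSummaryViewModel
{
    public int Id { get; set; }
    public string DisplayName { get; set; }
}

// Mapping lives in its own class so any controller or view that reuses the
// view model can reuse the projection too.
public static class CustomerMappings
{
    public static CustomerSummaryViewModel ToSummary(this CustomerDto dto)
    {
        return new CustomerSummaryViewModel
        {
            Id = dto.Id,
            DisplayName = dto.LastName + ", " + dto.FirstName
        };
    }
}

// The controller only orchestrates: fetch the DTO, map it, hand the view model to the view.
public class CustomerController
{
    private readonly ICustomerService _service;

    public CustomerController(ICustomerService service)
    {
        _service = service;
    }

    public CustomerSummaryViewModel Details(int id)
    {
        return _service.Get(id).ToSummary();
    }
}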

Monday, September 29, 2014

Shared Models, Extending

Previously, while working on a logistics system, I encountered a situation where a service and a consumer of the service shared the same class, but each within its own domain. Let's say it was a Shipment class for this example. The service returned A.Shipment, where A is the namespace, and the client used B.Shipment, where B is the namespace of the client code. I didn't see it at the time, but what I might have done is create a shared library that contained the object as a DTO which A and B could use. Each namespace would then extend or wrap the DTO to add domain-specific logic - a decorator pattern could work. Instead I used an explicit cast operator to map the properties from B.Shipment to A.Shipment. This works, but is a clear violation of DRY.
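Something like this is what I have in mind - namespaces and properties are invented for the example, and the shared contracts would live in their own assembly.

// Shared contracts assembly referenced by both the service (A) and the client (B).
namespace Shared.Contracts
{
    public class ShipmentDto
    {
        public int Id { get; set; }
        public string Origin { get; set; }
        public string Destination { get; set; }
    }
}

// Service side: wraps the shared DTO and adds its own behavior (decorator-style).
namespace A
{
    using Shared.Contracts;

    public class Shipment
    {
        private readonly ShipmentDto _dto;
        public Shipment(ShipmentDto dto) { _dto = dto; }

        public int Id { get { return _dto.Id; } }
        public bool IsRoundTrip { get { return _dto.Origin == _dto.Destination; } }
    }
}

// Client side: same DTO, different domain-specific additions - no cast operator needed.
namespace B
{
    using Shared.Contracts;

    public class Shipment
    {
        private readonly ShipmentDto _dto;
        public Shipment(ShipmentDto dto) { _dto = dto; }

        public string RouteLabel { get { return _dto.Origin + " -> " + _dto.Destination; } }
    }
}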

Saturday, September 27, 2014

Ahhh.. javascript functions in scope of a function

Did you know? You can create a function in the scope of a function.
function doSomething(options, aName, bName) {
    function internalFun(a, b) {
        return (a + b) / 2;
    }

    //or pass a function into a function
    var r = internalFun(options.a, options.getB());

    //but that assumes options implements getB and has a prop named a
    var s = internalFun(options[aName], options[bName]);
}

Dependency Injection Usage

I put together a design yesterday to implement a request for a new feature via DI, using an injection framework. Later, while pondering how I came to know about DI and DI frameworks, I realized that I probably would not have gone on that tangent on my own unless something really led me there. For anyone unfamiliar with DI and/or DI frameworks, the benefits of DI are astounding, so definitely learn how to use it. Study SOLID; DI is the standard way to satisfy the D in the SOLID acronym (the Dependency Inversion Principle). It means that anything a particular class depends on is injected into the class by something else which knows how to do that.

For example, if you have to send an email you could create an EmailManager class that consumes both an IEmailBuilder interface and an IEmailSender interface. The IEmailBuilder would simply have a GetEmail method that returns an Email object, and the IEmailSender would have a SendEmail method, and possibly a SendEmailAsync method. You could implement your standard emailing code and use SMTP to send the email. But you could also send to a message queue such as MSMQ for some timed process to send batches, or you could write the email to a file or database table. The point is that there are several things you could do with the email, and each should be captured in its own implementation of IEmailSender.

I smell a Factory pattern coming on here! The pattern of choice for this would be a Factory, since we don't know at build time which IEmailSender to use. In this case, the factory is used to inject the dependency into the EmailManager. The factory can decide, based on whatever criteria, which implementation of IEmailSender to inject. This creates a separation of concerns and single responsibility in the classes involved. The EmailManager consumes the IEmailBuilder to get the email, then passes it to the IEmailSender's SendEmail method to send, file, queue, or whatever the Factory injects - perhaps the message delivery method will be looked up in a preferences table for each recipient and the same message will be delivered in different ways! It's all up to the implementation, although I would recommend setting up everything before returning the object from the Factory. It should be ready to send as soon as SendEmail is called.
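Here's a rough sketch of that design. The names follow the post where possible; the preference-based factory logic and the console "senders" are just placeholders.

using System;

public class Email
{
    public string To { get; set; }
    public string Subject { get; set; }
    public string Body { get; set; }
}

public interface IEmailBuilder
{
    Email GetEmail();
}

public interface IEmailSender
{
    void SendEmail(Email email);
}

public class SmtpEmailSender : IEmailSender
{
    public void SendEmail(Email email) { Console.WriteLine("SMTP send to " + email.To); }
}

public class QueueEmailSender : IEmailSender
{
    public void SendEmail(Email email) { Console.WriteLine("Queued for " + email.To); }
}

// The factory decides which sender to inject, e.g. from a per-recipient preference.
public static class EmailSenderFactory
{
    public static IEmailSender Create(string deliveryPreference)
    {
        return deliveryPreference == "queue"
            ? (IEmailSender)new QueueEmailSender()
            : new SmtpEmailSender();
    }
}

// EmailManager only coordinates; it neither builds nor delivers the message itself.
public class EmailManager
{
    private readonly IEmailBuilder _builder;
    private readonly IEmailSender _sender;

    public EmailManager(IEmailBuilder builder, IEmailSender sender)
    {
        _builder = builder;
        _sender = sender;
    }

    public void Send()
    {
        _sender.SendEmail(_builder.GetEmail());
    }
}

class EmailDemo
{
    class WelcomeEmailBuilder : IEmailBuilder
    {
        public Email GetEmail()
        {
            return new Email { To = "someone@example.com", Subject = "Welcome", Body = "Hello!" };
        }
    }

    static void Main()
    {
        // In a real app the container (and the factory) wires this up.
        var manager = new EmailManager(new WelcomeEmailBuilder(), EmailSenderFactory.Create("queue"));
        manager.Send();
    }
}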

Tuesday, September 23, 2014

Take Time Off and Reflect

I took a long weekend to visit relatives and happened to have worked from home the week before due to an injury. Now, on my way back to office, I have a sense of perspective. Let's see what the rest of the week brings and how my perspective changes once I'm in the trenches again.

One thing I realized is that as a project usually involves several teams of people, knowing who is involved will be extremely important to the success of the project. I would assume that this can usually be known up front if time is taken for planning and understanding. For example, any need for a DBA on any project I've worked on was known early on. A proper plan will map this (possibly through some sort of activity diagram with swim lanes). It seems appropriate that each team has representation in the project kickoff meeting, especially from the person most likely to do the work.

This kind of plan requires participation in a project from a technical lead - the person who is responsible for designing the solution. Early participation can help determine an overview of the human resources needed for the project, especially if the technical lead is well-versed in the inner workings of the organization. After design, the needs would be more detailed - however, still not necessarily precise since design can change so much during build. The project plan needs to be reviewed and refined after this milestone.

Some questions to ask from each team that should generate tasks to add to the project plan:

How much lead-time do you need in order to plan for resources?

What information do you need in order to estimate effort?

How does one submit a request for work?

What level of involvement will be needed during the project?

This information should be available as part of the project documentation and should be tied to the tasks in the plan. A project plan should always map dependencies between tasks and between milestones.

Wednesday, September 17, 2014

Another way to quantify architectural layers in an enterprise software environment

Today, I watched a Webinar on how we can manage layers in the enterprise architecture. The webinar covered a technique that includes structures and guidance for managing and extending core systems by abstracting them and layering a middle tier and thin client tier on top of that.

The key takeaway for me was the timeline for change. Core systems generally change every 10-20 years, the middle tier every 3-5, and the thin client about every year. Each layer in the architectural model mapped to a different development methodology - core is waterfall, middle is iterative, and client is agile.
The single activity that provides the most value is to wrap the core systems to provide an enterprise-wide abstraction.

In terms of approach, there were several prescriptions depending on the needs of each organization and the common concerns to be addressed. One approach was to separate or create services into classifications: application, channel, and enterprise. Each approach is more costly, but offers more benefit if done right.

The speakers offered that the drivers of cost are related to management and governance. They cautioned to only build what is needed (a common theme in software development).

Several recurring themes in my view of software architecture for the enterprise were presented in the Webinar. This only further solidified and validated what I knew. Not to sell them short, because it also added to what I know. Brilliant work and a great presentation!

Wednesday, September 10, 2014

Software Craftspeople: Generalizing Specialists and Specializing Generalists

I recently came across a series of Agile essays and collections of knowledge about agile design, database refactoring, agile practices and more. After reading http://agilemodeling.com/essays/generalizingSpecialists.htm#Figure3
by Scott W. Ambler, I started analyzing my own skill set and how it's grown over the years. I've always been more of a jack-of-all-trades sort of person, which I attribute mostly to a craving for mental growth.

The essay makes some compelling arguments for composing a team of generalizing specialists. Some of the arguments include better communications, more accurate knowledge which is shared amongst all members of the team, and less overhead - all of which result in better overall efficiency and more accurate results.

The author offers a case that having generalizing specialists will result in reducing redundant documentation, which in turn results in more accurate and maintainable documentation.

As for my own generalizing, I've been working on growth in terms of strategy, architectural diagrams, and planning. The hard skills in which I am strongest include RDBMS, JavaScript, HTML, CSS and C#/.NET. I have some skill in Java, nosql db, and VB.NET. In the realm of soft skills, I've been writing, growing my "business analysis skills" and improving communications overall.

Monday, September 8, 2014

let me map the way

I revisited my realization that I am visual-spatial and that I'm good at mapping routes. Much of the time, I'm very good at predicting time to destination too. This led me to come up with a method of literally mapping out goals. Here's what I propose:

Map setup:
-a star in the center indicates now (current time and state).
-each goal is placed in sectors around the center at a distance proportional to the effort involved.
-each goal is sized proportionally to the value gained by achieving the goal.
-paths should be marked using lines drawn as directly as possible from the center star to each goal.
-goals that can connect directly, should also be connected via straight lines.
-the connections should represent possible routes.
-if the goals are too close so that the proportion of the line does not reflect effort, add more lines or use curved lines so that the total distance is equal to relative effort.

Routes:
-routes can be drawn over the paths that connect the points to plan the best route.

Impediments/unknowns/obstacles:
-indicate obstacles with roadwork signs, roadblocks across the path, whatever else you can come up with to show slowdowns, blocking, etc.
-use dotted lines to indicate unknowns
-can label landmarks, unknowns, whatever using map keys, or tag with a number and show detail referenced by the number or letter-number combo

I will need to create a sample to include that in this post later, if I don't get back to it it's because I didn't map the way...

This could potentially be automated and read from different sources of data with appropriate mapping to the source fields.

Thursday, September 4, 2014

build v buy

Do you build or do you buy? Pros and cons related to software, hardware and talent.

The common case, on the surface, is software. Starting there, the pros and cons for build and buy can be weighed against current business cases and a determination can be made based on specified criteria.

Some pros for buy are decreased time to production, reduced internal management and support, field tested. Of course all of these are dependent on the product, your share of their market, your contract, and the internals and stability of the vendor. The cons include unknown internals, shiny advertisements-dull product, limitations to change, dependent upon stability of vendor company. Again, much of this depends on the vendor, product, and your company's relationship with the vendor.

Some pros of build depend mostly on internals and include build to suit, future change can be supported, issue mitigation can be reduced. All of these depend on staffing and management. Work can be outsourced to a consulting firm, but you'd lose some benefit of having full internal control and domain knowledge. Cons include internal staffing and management, continued costs.

When it comes to software, I am definitely build over buy. But, in some cases products with great APIs and well known products, documentation, and support can be great!

A second, lesser thought of case is in terms of personnel. The old cliche "good people are hard to find" may or may not be true depending on your perspective, but good people are not hard to build. It just takes a good organization to build good people. Once they're built, you want them to continue working for you even if they are working elsewhere. This requires a new way of thinking about work and employees.

Pros of buy - instant ninja, do not have to invest in training programs, experience.
Cons of buy - carried over attitudes and ways of thinking (inflexible), unknown experience, no historical working relationship.
Pros of build - can tailor skills to needs, strong relationships, domain knowledge.
Cons of build - may not prove fruitful,  might take skills elsewhere, takes time

Building employees must be a continuous cycle - recruit from colleges or elsewhere, training, exercising, etc. If the employees get some buy-in in the company, they will continue to work toward improving themselves and the company - especially if they understand that the company becoming more successful means that they will become more successful. That kind of buy-in (ownership) also means that employees are more likely to stick around at their own company.

Set up the right kind of environment where people are brought up and ingrained into the organization and you will have a good thing - just keep doing it right.

UML Documentation: diagramming an existing application

I began creating diagrams today of a system that has very minimal diagrammatic documentation. There are many words written, but any person who is new to the system would face a high learning curve in order to get caught up.

The diagrams are the beginning of the development design and planning phase of a project. I will be working with a new developer, albeit far more experienced than I am, on this project. I worked on this application during a previous project and have been supporting it as well.

For some time I have had the goal of learning documentation techniques. Recently, I watched a PluralSight training video titled "Introduction to UML". I took studious notes and am now putting the new knowledge into practice. I have no guide, so I'm certain to make mistakes, but perhaps it is better to get that sorted out now so that I can begin to improve.

I started on paper, then using Visual Studio Ultimate 2013 I created a Modelling project in the TFS Team Project folder that contained the application. I added a Use Case diagram and transferred (by re-creating) the core use cases into a single use case diagram. Next, I created a high-level component diagram of the whole system - which has several moving pieces.

This takes a bit of time, but not much. I was especially efficient since I was able to reference my notes and had gained a much better understanding from the video. I saw a previous diagram of mine and recall how long that took to produce because I had no idea wtf I was doing. Now I have at least some frame of reference.

Still, that crappy diagram was useful enough to convey much about the system to a BA who had not yet worked with that system. Without the diagram, it would have been really difficult to communicate about all the moving pieces.

When the new developer is on the project, I'm hoping that there will be a similar experience in terms of being able to communicate with the aid of the diagrams I produce. I liken them to having maps when dropped in the middle of a forest. Now all we need is a compass - better yet, GPS!

Communications Confusions

I learned today that ineffective communications can lead to a great deal of confusion. Often, taking a step back can clear up much of it. A colleague asked me to do something that goes against the grain of what the team as a whole has been striving to improve. This led me to put up resistance to the request.

As we talked it out, we came to find that we were saying the same thing, but from different perspectives. It turns out the root of the issue was that the colleague's unfamiliarity with a tool we are using was causing some confusion. This is actually quite common with this specific tool, which has led to plenty of confusion over the years.

Other things came out of that conversation about some process changes that were not communicated - and in which not a single member of the development team was involved. I wonder what the downstream effects of that will be...

I'm a fan of communication and collaboration, without which there will certainly be difficulties in maintaining the processes which are created.

Tuesday, September 2, 2014

Experienced || Good

In certain ways, I'm finding that developer experience in years does not always translate to better code, use of OOP, or better design/architecture.

Granted, I'm sure the world is full of top-notch developers. But for every one of those, how many are just filling seats and creating more slop? How do we bail the profession out of this? It's up to us as professionals to continue to study and educate ourselves and each other about our craft - never mind our past education and experience, or that of others. We can learn something new from anyone!

I post here to teach but also to learn.

Friday, August 29, 2014

Efficient SQL searching

If you don't want to use full-text indexing, or you need to order your search results by some criteria (which columns take precedence), here is a way that this can be accomplished with decent efficiency.

Suppose we have a People table in a Sql Server database and we want to find people by last name, then first name, then email address.

We can write the query as:

WITH lastNameBegins AS
(
    SELECT id,
    1 criteriaNum,
    ROW_NUMBER() OVER (ORDER BY lastName) AS rowNum
    FROM People
    WHERE lastName like @searchTerm + '%'
),
firstNameBegins AS
(
    SELECT id,
    2 criteriaNum,
    ROW_NUMBER() OVER (ORDER BY firstName) AS rowNum
    FROM People
    WHERE firstName like @searchTerm + '%'
),
emailBegins AS
(
    SELECT id,
    3 criteriaNum,
    ROW_NUMBER() OVER (ORDER BY email) AS rowNum
    FROM People
    WHERE email like @searchTerm + '%'
)
SELECT p.id, p.lastName, p.firstName, p.email
FROM People AS p
JOIN (
    SELECT id, MIN(criteriaNum) criteriaNum, MIN(rowNum) rowNum
    FROM (
        SELECT * FROM lastNameBegins
        UNION
        SELECT * FROM firstNameBegins
        UNION
        SELECT * FROM emailBegins
    ) AS a
    GROUP BY id
) AS o
ON p.id = o.id
ORDER BY criteriaNum, rowNum;

And now you have a search that is ordered by criteria. The CTEs define the ranking of the criteria; we union those together; there could be duplicates, so we group those out and take the min values for our ordering. I suggest limiting the size of the inner set 'a' for efficiency's sake.

Monday, August 25, 2014

Break from the Switch-Case

If you find yourself reacting to states via switch-case statements, consider using a GoF pattern called the State Pattern
http://www.codeproject.com/Articles/489136/UnderstandingplusandplusImplementingplusStateplusP
described in C#.

Basically, it uses state objects that know how to transition to other states. The main object has a reference to the current state. Possibly a state manager could offer more extensibility: a state manager would manage the transitions between states instead of the state object itself. Another option could be to create a manager that wraps the main object and the state object(s). Then a memento could contain the previous states. Whatever the situation dictates, break from switch-case.
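A minimal sketch of the pattern, with states and names invented for illustration:

using System;

public interface IOrderState
{
    string Name { get; }
    IOrderState Next(); // each state knows which state comes after it
}

public class DraftState : IOrderState
{
    public string Name { get { return "Draft"; } }
    public IOrderState Next() { return new SubmittedState(); }
}

public class SubmittedState : IOrderState
{
    public string Name { get { return "Submitted"; } }
    public IOrderState Next() { return new CompletedState(); }
}

public class CompletedState : IOrderState
{
    public string Name { get { return "Completed"; } }
    public IOrderState Next() { return this; } // terminal state
}

// The main object delegates to the current state instead of switching on an enum.
public class Order
{
    public IOrderState State { get; private set; }
    public Order() { State = new DraftState(); }
    public void Advance() { State = State.Next(); }
}

class StateDemo
{
    static void Main()
    {
        var order = new Order();
        Console.WriteLine(order.State.Name); // Draft
        order.Advance();
        Console.WriteLine(order.State.Name); // Submitted
    }
}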

Friday, August 22, 2014

Articles on Technical Debt

Here is a great article about the true cost of Technical Debt by Aaron Erickson. The author gives a good example of the cost over time and relates it to Financial Debt in a way that should make sense to management and appropriately supports the metaphor.

I read another article of his that talks about killing the project in favor of small releases. That was great too and can be found on the same site.

When I read the example of commissions, I immediately thought of a commission service that could be called from anywhere on the intranet to calculate commissions for the company. I thought of reports and of SSRS reports and realized how far behind it would be to write reports with a technology that does not support services as data sources.

Wouldn't it be great to write some business rule in a first class code language and have all consumers call that centralized logic to do the processing? It would be contained in a service of course, not in a DLL or some other form of shared code; if it were, each place that uses that code would have to update in order to get the new rule. So your change would still involve some Technical Debt a la Architectural Debt (your architecture introduces more future work required to change some core business logic that can change). IDesign refers to this likelihood of change as volatility.

This volatility factor is a natural occurrence in software, whether recognized or not. Martin Fowler points out that this type of ignorance is reckless; it is like going on a shopping spree and maxing out credit cards that you don't know you have to pay back. Then you will get the calls from the bill collectors. "The business needs to change xyz factor to yyz factor, and now!" So now you don't sleep for a week and everyone is run ragged and angry because of this inexperience that cost just the right price.

In the case of Aaron's article, huge deals can go awry, and what was once an exciting merger can turn into a costly technical nightmare. I would like to hear of an example where this was the case. Office Depot and Office Max merged recently and stated the projected costs and benefits of the synergies of the merger. I wonder how that is going from a technical standpoint...

If I find anything, I'll surely have to blog on it.

Wednesday, August 20, 2014

Data Modelling and DB design


In a SQL Server Pro article from 2005 written by Michelle A. Poolet, I encountered some great advice and a great example of how to normalize an existing table that is in first normal form (all the data in one table).

Not to pick on the author here, nor to criticize, but I will take the normalization process for this database further: from second normal form (where the author ended up) closer to third normal form, by creating lookup tables for wood type, finish, and size, plus a whole separate root for shipping. As the author states, we don't know enough about the business process to do so in the real world; I'm doing it here to illustrate the potential (after collecting proper usage details, of course).

The example is for a custom furniture shop, which means that they use wood and other supplies to build furniture. They probably have some standard wood types that they use and possibly keep stock of some common types. If the database were expanded to include wood type, a lookup table would be really useful to link supply to demand.

I know, I know...build for current use cases, not future use cases. I hear you loud and clear. But let's just assume that they have some standard wood types for the sake of this exercise. And let's assume that we've done our due diligence and we are ready to take it to the next level.

If wood type alone becomes a lookup table, the root table loses a varchar(30) [2B + nB, n=len(x)] worth of data and gains a smallint [2B] worth. Assuming the type name averages 15 chars, that's about 17B stored per record today versus 2B, a net savings of roughly 15B/record, taking the row from 72B down to about 57B. Now we can fit roughly 25% more rows per page (around 140 rows instead of about 110, assuming ~8,060 usable bytes per page). If we offloaded ALL of the varchar cols to other tables, we would have even better numbers here.
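As a rough sketch of what that offload could look like (the table and column names here are hypothetical, not from the article):

CREATE TABLE WoodType
(
    WoodTypeID   smallint IDENTITY(1,1) NOT NULL CONSTRAINT PK_WoodType PRIMARY KEY,
    WoodTypeName varchar(30) NOT NULL CONSTRAINT UQ_WoodType_Name UNIQUE
);

-- The detail table carries the 2-byte key instead of repeating the varchar(30) name
CREATE TABLE FurnitureOrderItem
(
    FurnitureOrderItemID int IDENTITY(1,1) NOT NULL CONSTRAINT PK_FurnitureOrderItem PRIMARY KEY,
    WoodTypeID           smallint NOT NULL
        CONSTRAINT FK_FurnitureOrderItem_WoodType REFERENCES WoodType (WoodTypeID)
    -- ...finish, size, etc. would follow the same pattern
);

New wood types then become inserts into the lookup rather than values repeated across the root table.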

When we pull those values off into other tables, we will have to join to them, but as long as the record count in those tables is small (I'm thinking most of it is standard anyway, so they might be lookup tables that can be added to for new values) they should be loaded into the cache and reused well. If anything, they may not even be needed every time users access the data. Again, I don't have the use cases, but I'm making reasonable suggestions about them.

I look at any kind of notes or comments field as something that should always be its own table. Refer to the ProcessOrderNotes column in the ProcessOrderStatus table in Figure 2 of the article. Each comment can be related in a number of ways to other tables, either directly via an FK to the note, or to multiple entities via linking tables. In this way, the comments or notes become easier to query across if someone wants to search for something specific within a note. Eventually, this will be a use case, so why not structure the data model to be more adaptable to begin with?

Think open-closed principle here, where the model is open to extension, closed to modification. If you have a new table that requires notes, you can just add an FK or a new linking table, depending on what is needed. Now the app code can go on calling its addNote function, get the PK, and write it to the table as an FK as needed. No reason to reinvent that. Well, suppose that all notes now need a timestamp and username. Guess what? Just add a table related to the Notes table and update the code to save to it each time a note is updated. Or store a new note record each time for auditing purposes and show them all - use cases?
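Here's a minimal sketch of that shape; all of the names below are hypothetical, not taken from the article:

-- Hypothetical parent table, included only so the sketch stands on its own
CREATE TABLE ProcessOrder
(
    ProcessOrderID int IDENTITY(1,1) NOT NULL CONSTRAINT PK_ProcessOrder PRIMARY KEY
);

-- Notes live in one place
CREATE TABLE Note
(
    NoteID   int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Note PRIMARY KEY,
    NoteText varchar(max) NOT NULL
);

-- Each entity that needs notes gets its own narrow linking table (open to extension)
CREATE TABLE ProcessOrderNote
(
    ProcessOrderID int NOT NULL CONSTRAINT FK_ProcessOrderNote_ProcessOrder REFERENCES ProcessOrder (ProcessOrderID),
    NoteID         int NOT NULL CONSTRAINT FK_ProcessOrderNote_Note REFERENCES Note (NoteID),
    CONSTRAINT PK_ProcessOrderNote PRIMARY KEY (ProcessOrderID, NoteID)
);

If every note later needs a timestamp and username, that's one new table keyed by NoteID; none of the linking tables have to move.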

Now I'm getting carried away, so I best stop here.

Friday, August 15, 2014

RMDB Surrogate v Natural Key

An integral part of transactional relational databases is the Primary Key (PK). A PK is a value that uniquely identifies a data row and that related tables reference via a Foreign Key (FK).

Four standard criteria are commonly used to evaluate a PK.

1. Is it unique?
2. Is it immutable?
3. Is it minimal?
4. Is it never null?

A key can consist of one or more columns in a table as long as the combination meets the criteria.

My recommendation is that you stick with a Surrogate Key if you're not sure. Here's why-

Natural keys chosen under the assumption that some business rule will never change can cause significant problems later when that rule does change.

Most arguments for natural keys emphasize ease of use, but if you find yourself joining on multiple columns because your PK is a composite key, you lose that advantage. Create views to ease your suffering from joining a few more tables in your queries.
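A minimal sketch of that idea, using hypothetical Customers/Orders tables (none of these names come from a real schema):

CREATE TABLE dbo.Customers
(
    CustomerID   int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Customers PRIMARY KEY,  -- surrogate key
    CustomerCode varchar(10) NOT NULL CONSTRAINT UQ_Customers_Code UNIQUE         -- the natural value users recognize
);

CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Orders PRIMARY KEY,
    CustomerID int  NOT NULL CONSTRAINT FK_Orders_Customers REFERENCES dbo.Customers (CustomerID),
    OrderDate  date NOT NULL
);
GO

-- The view hides the extra join, so consumers can still filter on the familiar code
CREATE VIEW dbo.OrdersWithCustomerCode
AS
SELECT o.OrderID, o.OrderDate, c.CustomerCode
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;
GO

If CustomerCode ever changes, it's one update in one row; nothing keyed on CustomerID has to move.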


The exception I would make is for SOME lookups, especially when a well-defined code is used as the natural key. But even then, shit changes all the time. Suppose you assumed that the US state codes were immutable. Or telephone area codes. You would have to write scripts to update all those FKs that use 'Calif', or use some hacked-in solution to fix your data and then update all the apps that consume the data to support the hack. Contain volatility in your data!

Process v Procedure

Today I learned the difference between a business process and a business procedure. I looked through a few sources on the web and found that a process is made up of many procedures and tasks, and can cross departments. A procedure is a set of instructions to complete one unit of work. I would equate these to separation of concerns in software.

Think of an enterprise application environment where there are several standard concerns such as - application access security, sending emails, logging errors, reporting on data, saving transactional data, sending data to an external destination, etc, etc. These are re-usable procedures. A process collects them into something meaningful.

Since most common concerns can be thought of as procedures, it is helpful to treat procedures as abstractions when describing a process.

Consider the task of explaining the process to a non-technical person, or someone outside your organization. The details of the procedures are specific to your domain, and they could change without the overall process changing. Is it a process or a procedure? Could it be changed without affecting the path from point A to point B? Can parts of the process be standardized or abstracted into procedures?

For a more concrete example, a procedure could be "communicate with someone". There may be some criterion that will change the communications type but the end result is communication. Perhaps the communication is an email, could be a phone call, a text, in person, etc. The point is that a communication needs to take place, and the details are not a concern of a process.

Perhaps some Protocol will define what steps to follow for the communication procedure. Perhaps the communication needs to be saved for auditing purposes. That would limit the possible forms of communication. In any case, those are procedural details. The process is about linking procedures.

Wednesday, August 13, 2014

Process Documentation Style

I'm finding that I post more on planning and management than on technical topics. This stems from current efforts more than anything.
The main thing I want to cover is that a process is repeatable. If it's not understood and documented, then there ain't no process. Or, as I just found out, it's the components of the business process that do not exist.

http://m.businessdictionary.com/definition/process.html

However, I would argue that a process that is different for each individual or each instance of input cannot be captured by a single meaningful definition. To understand a current process in such a state, much effort is required to understand it from each point of view.

Unless a consistent process has been defined, the process is to collect whatever procedures make sense at the time in order to achieve the outcome.

In some cases, perhaps in a results-oriented environment, ad hoc may be the best approach. This style of process management relies on the individual more than the structure of the process.

There are certain risks involved in taking such an approach including un-repeatable results and inconsistent data for analysis. Another drawback is the increased risk in taking on technical debt, depending on the individual.

The benefits are that the end result can be achieved without following a possibly complex and lengthy chain of procedures - results are dependent upon the individuals involved.

A well-defined process can be implemented in a process-oriented environment. This style is suitable for consistent and repeatable behavior, as long as people are following process.

The major risks are that a process may be ill-defined, overbearing, or not well understood. It may serve to stifle creativity as well as lead people to go off process if it is not understood, is too complex to follow, or if time to results is inadequate to the situation.

The benefits can be great once data is tracked and proper analysis techniques can be applied. Resources can be directed according to analysis, and long-term planning can be better achieved.

Possibly one of the most important aspects of defining a process is to keep the long-term goals in mind. If the process becomes heavy for the sake of the process, it may do more harm than good.

Sometimes, tools can limit the performance of a process. Appropriate tooling should be applied to the appropriate process; the process should not be limited or hindered in effectiveness by the tools.

Tuesday, August 12, 2014

how not to beg in the streets

Don't write your life story on a piece of cardboard, no one has time. Just ask for money.

Maybe I'm getting lazy with this one, but I saw a couple writing something on some cardboard that looked like a life story. I caught the first part of it - "Traveling..." that's about as far as anyone would get. So to any would-be travelers who run out of money - write really big "Traveling, need money! Please help!" that's it. Three lines, really big.

I also learned don't derail a train that's barreling down the tracks! Personal story - working hard on cleaning house and making great headway. Out of left field comes a request from management that would completely sidetrack the effort! I still need to push back on that one, at least it should wait until we clear out the playing field. Right thought, wrong timing!
Sudden change is a huge source of stress. "Stress can initiate the "fight or flight" response, a complex reaction of neurologic and endocrinologic systems."
 http://www.medicinenet.com/script/main/mobileart.asp?articlekey=20104

The result is high employee turnover. Those in leadership positions should be aware of the stress that their activities may cause. One way is to gauge current workload and progress before introducing new work. And keep the timelines and scope realistic.

Monday, August 11, 2014

Plan to Organize

Before you begin, set forth a plan. This is the most important part.

Having an adequate plan is especially important if you need to communicate and coordinate with one or more people. And hey, if you have more than one thing to do, why not put those in some central place that all parties can see so you don't drop the ball on one or more of them? Try tracking things like progress, blocking, and work in this way too.

Devise a system of communication and a workflow that works for you and your organization so that it's not all in your head. This is the start to doing great things. Manage your work effectively, and you will work more efficiently.

Thursday, August 7, 2014

Thoughts of branching

If your code base has projects that are shared across products, consider creating a branch for each release.

This will allow bug fixes/hot fixes to be applied without having to play catch-up when the shared code has breaking changes. Bug fixes are typically unplanned activities; therefore, spending extra time on them means that more time is spent on unplanned activities. These unplanned activities will surely include communication, additional testing, and resource leaks.

The time to bring a code base up to speed is during a planned release cycle. Be sure to include the updates in your release plan (before you start coding, during the planning phase). If you do not, this hidden input will creep in and cause time pressure during the development phase, and the test phase will creep as well, which will undoubtedly add unexpected time.

To hot-fix a bug, branch from the current release branch (not the main/trunk), fix it, test it, release it. To scope a release, be sure to include in the plan any updates that have to be done to keep the code current. Branch from main/trunk for the release, do your work, merge main/trunk to your branch, test, deploy. Deprecate the old release branch if you are not using it.

Wednesday, August 6, 2014

listen up...to your users

I overheard an interesting conversation today regarding the use of encrypted storage. There was a wise person who had some very good ideas on the topic. The advice was to leave it up to the responsible individual to decide what should be encrypted and what should not. This was the conclusion along with some good backing points.

One of those was that folks in IT often assume too much about the users without getting the picture of reality. We live in a bubble of misconception and assumption when we don't reach out to our users for input on decisions we are making for them. It's one thing to give users the tools to do what they need to do; it's quite another to restrict users without accurate input. And even worse, to "Big Brother" users under the assumption that they need to be cowed into submission and watched over.

One such "Big Brother" practice of storing attempted passwords in a database borders on unethical. How many times have you forgotten your password but you know it's either something like this, or maybe that other one? Is that the same password that you use for your bank account? Hope not!  Shame on you if it is, but shame on those who decided to store that information! My advice is to keep that information all in one place, on paper, in a safe but accessible place. This way, you know if someone got to it because it will be missing! If you try the shotgun approach by guessing every password you have, you never know if they're being captured by wolves.

The corrective action to take when implementing some new process is to either take a passive approach and measure popularity, or actively solicit feedback. Many times users will be unwilling to give feedback (unless negative) so the passive option may be preferable since you would get skewed data from solicitation (unless it's very active).

But Phil, you say, how do we know if we should implement xyz until we know if the users want it? And if we can't solicit for xyz because the data will be skewed, how do we know? Perhaps there is a way to offer an incentive for giving feedback. Bringing users into the fold regularly is key to delivering successful solutions. Is it more costly to put something out there that never gets used, or to pay for active participation during the decision-making process? Perhaps monetary incentives are not necessary; perhaps users will participate because they want to make their own lives better, or see their ideas in action, or for the plain joy of contributing to a decision-making process.

Here's my take on some certain approaches that I've seen in terms of user voice:

The online user voice boards where users add and vote for features are OK, but I think most of those get lost in the shuffle for larger apps, and only relatively active users participate. The ones with the most votes get bumped to the top, and many duplicates are created and then ignored.

Soliciting users via popups in the middle of their work, or when they are opening some page or application, is intrusive and annoying. Often users are in the middle of a train of thought (on a mission, so to speak). DON'T INTERRUPT YOUR USERS!

A panel of beta users might be a very rational answer to soliciting feedback, but that's very exclusive and misses many users, if not the most common users.

A suggestion I might make is to allow users to submit suggestions for review (categorized). Once those suggestions are in and aggregated into coherent features, have a suggestion panel (like an ad rotator) in the application (the active information section that should exist in your app anyway, and if not, add it). Make it simple - how much value do you place on xyz? 1-5, or skip. If you have a logged-in user, this can be collected per user and tracked along with usage data. In this way you can gather useful metrics on the most useful features for the most active users.

For a portable device or small screen with local apps, each app should have its own feedback link that's easily accessible and displays a few suggestions for voting. Wii had an awesome game called Everybody Votes; that was fun! Users got to see where they stood against the trend-line nationally and internationally. If they were in the highest group they got props - rewards!

The point here is to include users in the conversations when making decisions that affect them. Get the largest data sample possible and give good exposure to anything worth doing to gauge value (e.g. if it's really worth doing).

Measure what can be taken away as well! How much CRUFT is out there clogging up implementation when it's not even used? Lose it!

Tuesday, August 5, 2014

Make no assumptions about data you don't control

Today I learned from a legacy app in which a developer made an assumption about a data element that the application did not own. I would say never make assumptions about any data, but especially for data coming from another source - that's a huge no-no.

Friday, July 25, 2014

SQL Server single row table

In SQL Server 2008 R2 I made a single row table that had Application Settings.

In order to enforce a table constraint of one row, create the table with a PK constraint on an id column and a CHECK constraint of idCol = 1. This does two things: the PK enforces that every row is unique and not null, and the CHECK ensures that the only possible value is 1.

Then all you have to do is insert a row with id 1, and all other columns are settings values (global settings for the app). If you need multiple settings, join tables to this one.
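Here's a minimal sketch of that constraint (the settings column names are just placeholders):

CREATE TABLE AppSettings
(
    AppSettingsID tinyint NOT NULL
        CONSTRAINT PK_AppSettings PRIMARY KEY
        CONSTRAINT CK_AppSettings_SingleRow CHECK (AppSettingsID = 1),
    SiteName      varchar(100) NOT NULL,
    SupportEmail  varchar(255) NOT NULL
);

INSERT INTO AppSettings (AppSettingsID, SiteName, SupportEmail)
VALUES (1, 'My App', 'support@example.com');

-- A second row can never sneak in: any id other than 1 violates the CHECK,
-- and a second row with id 1 violates the PK.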

For multi-tenant applications, consider adding a table for tenant-specific settings. This table is for global settings only.

Thursday, July 24, 2014

Code Reuse and...EmberJS components

This is a combo post, a one-two punch. There are so many compelling reasons for writing reusable code - and some reasons for not. And...here is something I learned about reusable templates in EmberJS.
Starting with reasons to NOT write reusable code.
  1. lack of awareness
  2. doing a POC (Proof Of Concept) and will NEVER release the POC version
  3. something is on fire and you need to put it out NOW!
  4. someone is a doody-head and is making you say uncle
  5. you are establishing a practice of creating more work for later so that you remain gainfully employed

Here are reasons to do it.
  1. because you can reuse it later and don't have to build the foundation every time you build something
  2. it's more challenging, thus more compelling and you thrive on challenges because it makes life fun and interesting
  3. it can speed up development significantly
  4. it can be done in a way that protects against changing technologies
  5. save the best for last...when the business rules change (because business needs change) it's less risky, easier and more compelling to change the code to meet the needs

Those are just a few things off the top of my head, to get things rolling. Can you think of more?
The EmberJS Part
EmberJS is an open source client-side MVx framework that uses templates to mark up layout for the web. The templates it uses are Handlebars templates, which come from another open source project.
Here is an example template for EmberJS:

<script type="text/x-handlebars" id="somePig">
    {{#each piggy in pigs}}
        This little piggy went to {{piggy.placeName}}
        <br />
    {{/each}}
</script>
pigs is passed to the template by EmberJS via a controller; it's a JSON object.
And here is a component, which is a reusable template that takes in values that can be set when you use it:

<script type="text/x-handlebars" id="components/list-items">
    {{#each item in items}}
        {{item.displayText}}
        <br />
    {{/each}}
</script>

<script type="text/x-handlebars" id="pigs">
    {{list-items items=pigs}}
</script>

working sample

Stop and smell the roses, or at least slow down for it.

I took this literally today. Thank you Boeing for having the best flowers on my walk! This changed my trajectory today.

Tuesday, July 22, 2014

a positive approach to choices, risks and rewards

Some advice that I cobbled together from various readings, conversations, videos, podcasts, and whatever else regarding personal choices: "it depends..." What does it depend on for you? Each person would likely have different answers that may limit their choices. One thing it may depend on is whether this will lead you to a long-term goal or whether it's a sidetrack. Ultimately, only you can answer that question. One thing is for sure: fear does not count; it's not part of the equation - no rewards are without risk. Which leads to another dependency - what's your level of comfort with risk? Is this just outside of your comfort zone? Or way outside of it? If it's just outside, or a bit more, take that extra risk. If it's way outside, work up to it - find a way to take steps that way without getting too far ahead of yourself.

Sometimes we have to completely ignore fear for a bit so we can get to the real answers. This doesn't mean ignoring warning signs or anything like that; I'm referring to irrational fears. As a testament to my own practice of moving past fears, I recently took a leap at the opportunity to meet with the CIO at work once a month. I'm not in a management position or anything like that, and I have been rather fearful of authority in the past. I just took a chance and set up a meeting. I was nervous for that first hour-long meeting, but apparently I made a heck of an impression. We meet once a month and talk in the hallways now. I can present ideas and sometimes elicit positive change. This is a good culture from the top down, granted, but I think from now on I'll open more doors of communication - it seems to work well.

No matter your decision, always remember that life is as fleeting as time and that mistakes happen; times can be tough too. Learn from them, move on, build on the experience. Good things come too, and when they do, they far outweigh the negative impact of most mistakes in the long run. That being said, there are risks and there are deaf, blind, and dumb. Be informed and aware of the risks so that you can make informed decisions.

Monday, July 21, 2014

EF load v to array

I did a performance comparison of query.Load() and query.ToArray() in EF 6 and found that the latter performs faster, because .Load() calls ToList() to load the objects and then disposes of the list. This takes more time and resources to execute, so why not call query.ToArray() and just let that load the context?

Friday, July 18, 2014

WCF Command Interface

I just saw a clip about using a Command Interface pattern in WCF to alter behavior.

In order to demo this concept, the presenter built a Windows Forms app that had a method to set the color of a panel. It implemented an operation contract, so it was available via an endpoint that exposed the method as a service. There was another Windows app with a list of colors the user could select, which in turn would call the service endpoint and send a message containing the selected color. Lo and behold! Once the color was selected in one app, the color would change in the other! While probably not the most practical solution to this problem (sockets would be faster), it did demonstrate an interesting pattern - how to change the STATE of a running process with WCF. This is no different than using a service to update a database, however, and it did not deliver what I thought was promised. It did not change the BEHAVIOR of the running application. I expected it to implement some sort of command pattern, but it did not. The behavior was simply that when the color was sent to the application via the method, the background color of the panel would change to that color.

This got me thinking about how to actually change the behavior of a running process using a service to send in the command. Here's what might work - WARNING: do not do this, because it would be dangerous and stupid! Implement a service that takes in the definition of a method that implements a known interface. Use some sort of injection framework to compile and load that method, then call the method. An alternative to sending in the method: send in the DLL, save it to a temp folder, use IoC/DI frameworks to grab the object that implements the method, and spin up another process to run and inject that new object. Sounds dangerous and foolish, but it just may work.

Herein lies the danger (if it isn't obvious)! Anyone who gains access to the endpoint could implement whatever they want via this command injection, including stealing data, crashing the system, or whatever. I haven't POCd this or anything, I'm just thinking out loud. I'm thinking it may work, though, as long as you can load the DLL this way and WCF doesn't trip on the file transfer (probably not the least concern about that one).

Thursday, July 17, 2014

My research into finding a better way to estimate led me down many paths. Many of them offered no real, concrete, empirical way to go about estimating. Others cautioned away from it completely. While I'm in no seat at this point to impart supreme wisdom, I can share something I did find insightful.

COCOMO II

function point analysis

With COCOMO II I learned much about the different factors that can impact delivery. The model attempts to include those factors, combined with the size of the software, in order to plot effort distribution. The reason I found this insightful is that it calls out effort that I and others never really seem to recognize.

For an example, let's try this exercise with three fields and a button. Save the fields, send an email, and display some data to a user.

With function point analysis we can quantify the requirements as function points. Let's use this premade spreadsheet to make it easier to do our experiment.

http://www.cs.bsu.edu/homepages/wmz/funcpt.xls

I used 4 simple inputs, 2 user outputs (the email and the display to the user), 1 inquiry (the return data), and 3 files (the data tables, the email builder, the response page). I tried to be conservative with Fi.

Then we plug our number into the calculator (in this case 136). Or rather, let's reduce that by the ILFs and take 136 - 90 = 46 to be conservative, since I'm slightly unsure about the ILFs at this time, so I'll write them out of the equation completely; after all, we're experimenting.

Next, make the scale and driver adjustments as necessary (let's keep these at their defaults for now), and the tool runs the data through the model and shows us the light.
Who would have thought that three fields and a button could take so much effort? Sure, we can take shortcuts and get it done in a day, but that comes with much future risk because of the likelihood of creating mounds of technical debt - especially over time, when this sort of wild-west thinking becomes the norm.

There are several sources out there with full descriptions about how the model works. It's all very scholarly and technical. Why does this matter? This is important because it's something you can show managers that will give them the same insights as you gained. Arm them with information about technical debt management and you are on your way to a more realistic reality! Unless they are unwilling to learn, grow, change, listen to scholars or naval research. In that case, start shopping for new opportunities.

Tuesday, July 15, 2014

Planning for change

Today I had to implement a change across many applications that were all in different states of organization. I saw several different coding and architectural styles. Some applications required minimal changes; others required some core changes that were deferred due to the timing of other projects. The main issue with the ones that required more work was that they were written under the assumption that the way it was is the way it will always be. What? Let's parse that out.

It was a certain way when the application was built.

The developers assumed at the time that this is the way it will always be.

This could not have been solely the developers' doing, though. Often the business will make the assumptions for the developers and not even raise the possibility that they will change in the future. But there may be some hope for developers in determining when something can change right before our very eyes!

If it's possible to change, then assume that it will. One thing that can and does change all the time is the name of things. If something can be referred to by an id or a name, it's likely that the name will change. This is one big smell we can be aware of that should call to mind "can this change?"

Once we have identified that it can change, the next step is to analyze the impact of the change when weighed against the development time involved in making our code handle this possibility up front.

Here is one very common scenario and an uncommon but potentially problematic possibility where changes could have an impact:

Person
{
InternalID,
SocialSecurityNumber,
Name,
Gender,
}

Names change all the time. Gender, not so often - but it is possible. How often do we plan for a gender change edge case? Often it is assumed that it will always remain the same, but just recently I was involved in an application where the planners assumed that it would never change. There will certainly be issues if it does.

I would also assume that a social security number can change, and I would never use one as an internal ID unless you want to end up allocating things to the wrong person someday.

How about some type names?

ThingTypes
{
ThingTypeID,
ThingTypeName
}

then these are used in the application like so:

Big Thing is allocated to: John Smith
Weird Thing is allocated to: Jennifer Blah
Foo Thing is allocated to: Phil Vuollet

where someone got it in their head that they want to display these distinct types of things on the page and someone else ran with that and went ahead and wrote some code that did just that:

routine:
 add "Big Thing is allocated to: {person}" to page where person := some database function to look up the person with id = 1
 add "Wierd Thing is allocated to: {person}" to page where person := some database function to look up the person with id = 2
 add "Foo Thing is allocated to: {person}" to page where person := some database function to look up the person with id = 3

So now we have some hard-coded stuff to deal with here. I got into the habit of creating data that represents this type of thing with a property called DisplayText. This makes it evident that the value is meant to be displayed and that that is its only purpose. It's also a safe bet to put in a property called LookupCode, ShortCode, or something similar; usually the business will have some acronym that they want to use in places and will refer to the entity by it in most cases. Whatever you do, don't put in something vague like Name or Description (that one especially is a big offender). DescriptiveText might be better. If the text is meant to be displayed, then use DescriptiveHelpText or DisplayDescription, or some name that conveys meaning.
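On the data side, a lookup that follows this convention might look something like the sketch below (the table and column names are mine, purely illustrative):

CREATE TABLE ThingType
(
    ThingTypeID int IDENTITY(1,1) NOT NULL CONSTRAINT PK_ThingType PRIMARY KEY,
    ShortCode   varchar(10)  NOT NULL CONSTRAINT UQ_ThingType_ShortCode UNIQUE, -- the acronym the business uses
    DisplayText varchar(100) NOT NULL,                                          -- what the UI renders
    HelpText    varchar(400) NULL                                               -- optional descriptive text
);

With every lookup shaped this way, a generic GetListValues-style routine only ever needs to know these few column names.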

Even better, use the same naming convention throughout your application! If you do, then you can create a reusable code bit that can do the displaying for you!

GetListValues(entityName, filter)

then you can map the properties to a generic view model that can be used to hook into a view on the page.

ListItemViewModel
{
  ID,
  DisplayText,
  HelpText
}

This can be extended if necessary through inheritance:

SensitiveListItemViewModel : ListItemViewModel
{
  SensitivityLevel,
}

Notice that SensitivityLevel is a type in and of itself, since this can change and often will. It should be maintained in the same fashion, in terms of both data structure and coding dynamically with change in mind.

We have seen that by keeping change in mind, and by weighing the initial cost vs long-term update costs (sweeping changes, testing effort, deployment effort included) we can recognize and plan for the future changes that will inevitably come.

Thursday, July 10, 2014

You will never get good requirements until you ask

It never fails...ever.

Screenshots with no urls, incomplete if not confusing information, unclear images, lack of details...

It can be frustrating at times, but the important thing to remember is that your job is better than being homeless. That is probably the hardest job of all.

Every work day on my walk to and from the office I pass the same half-dozen homeless people - both ways. They are in the same spot, every day, rain or shine, cold or hot rattling off the same line as hundreds of people ignore their pitch. Much like entrepreneurs, if they don't make a sale they starve. Are you going to starve if you don't get complete requirements?

What can you do? Have a conversation with the app owner, others who may know about the application (of course that can be a wild goose chase at times), or perhaps the customer. It drives me crazy when I get incomplete details and the first thing I'm asked is "how long will this take". I typically reply with something like "I'll have to look into it and get back to you". But here's the real hitch - how do you know how long it will take until you do look into the details?

Here's a scenario, probably all too common: a developer (let's call him Adam) gets some hot change that cannonballs in out of left field. Adam is asked how long the change will take (especially because the managers want to evaluate whether or not the change is worthwhile); however, Adam is not very familiar with the legacy ball-of-wax application that he inherited from 50 other developers who left their junk in there, with a whole bunch of cruft piled on top of that.

From the requestor's POV it's just a text change to a field or adding a checkbox; from Adam's POV it's Pandora's box - or at least it should be. However, we frequently forget about the many hidden inputs to the time-to-live equation. These are often swept under the rug in the developer's mind, and those on the other side of the code have no idea unless you bring them into your world. If the requestor is on your team, you can do this effectively by outlining each hidden input in your world and tying a time to each of them. Here is a start to the hidden-input equation:

1. Familiarity with the code - this is a biggie, especially if you have a ball of wax or some other poorly organized, debt-riddled solution.

2. Familiarity with the language/technology - if you don't know how to write jQuery that well, you will spend more time looking up functions and special selectors. This can add significant time. Also, there are environmental technologies and config/set-up concerns that might add significant time as well.

3. Deployment - is this automated? Will there be some special considerations? Do you know how to do this, or will you need to learn?

4. The existing state of the solution - does it build? will there be tons of dependencies to upgrade? Can you navigate it? Do you have access to retrieve it?

5. Communications and Planning - will there be 7 hours of meetings per week related to this project? How much planning and documentation is involved?

6. Interference - How much time do you spend supporting other applications? How about other people? How about random administration tasks and duties?

7. Requirements - How solid are the requirements? Do they consider what the user doesn't want? Any degenerate cases? Edge cases?

8. Your technical environment- Is your machine a dog? Or is it relatively quick and efficient? Are your servers dogs?

9. Your health - Are your fingers broken? Are you on the fringe of a meltdown? Are you going through some stuff at home? Guess what? Those will affect your productivity! Consider it.

10. Testing - Who's on the hook for testing? Are there automated tests? Unit Tests? Web Tests? Do you need to develop them to support the change?

That's a good start. I like to make a list or put in tasks for each action item and adjust the times for each according to other items that have an effect - or, if you have a project management system, adjust resource allocation percentages.

One task should always be - Research Existing Code.
Another would be - Update Documentation.
And another for each environment - Prepare for Deployment and Deploy to {env}

I like this approach because it gives meaning to the estimate and transparency to the reality of what you need to do to make the request happen. Otherwise, when you simply say "this will take 48 hours of work and two weeks to complete," you'll get a response like "WHAT!" and then a look like you are worthless and incompetent. If you break it down, the requestor will be able to see how much work is involved. Also, put in a disclaimer of sorts highlighting the risks (interference, "my mother is in poor health so I may be taking a few days off", "I can't type so fast with no fingers"). Look to the SEC filings for public companies and see how they disclose risk. (RNDY) You probably don't have to go that far, but it's a good model.

Keep in mind that the requestor is probably super busy and under pressure; they probably don't have time to be pedantic about creating requirements, so you have to dig for clarity sometimes. They will also want things done ASAP, but keep them informed!