Wednesday, December 3, 2014

Is there a cloud myth?

There's a new kid on the buzzword block. He's been lurking in the shadows for a bit, but now he's taken center stage. His name is Cloud. He is a mystery, and as we learn more about him, he only grows more mysterious. This Cloud seems like he can do just about anything; he can do so much! He hosts databases and applications, offers insights and metrics, and he is everywhere and nowhere all at once. He is a myth and a legend. It seems he can do it all!

Even with this Cloud guy around, though, there's still software that has to do things with data and communicate with other software and with people. That aspect hasn't changed, so why does the cloud suddenly have all the answers? The issues I see with software aren't with the software itself; they're with people.

Software can be whatever we make it. It can make good decisions for us, it can stay awake always, it can crunch numbers and transform data, it can provide endless hours of entertainment and automate almost anything, as long as we enable it to. The disconnect is that most people don't know how to enable it, or how to quantify things in a way that software can consume. Sure, we're getting better at that, and some people have this mostly figured out, but there are so many who don't, making countless decisions under the gun, that the magic bullet of the cloud isn't going to save anyone.

In fact, there is no magic bullet. No application, platform, or engineer can solve all your IT problems, no matter what their marketing departments tell you. This is the myth they build to sell product, when in reality it's only a shift in complexity. Any system has a finite lower bound of complexity. I define a system in this sense as a collection of modules that perform some activity. Any oversimplification below this bound would leave the system incomplete.

Complexity costs. In software, cost is usually measured in effort. The best way to reduce cost is to simplify the business processes that dictate the software. Complex business processes make software more complex, which means more effort in analysis, planning, development, and maintenance. Reducing the impact of business rules and constraints on software results in lower up-front costs and a lower total cost of ownership (TCO).

The myth of cloud-driven simplification will NOT make your software better, whether you developed it or bought it. It will NOT make your users more aware or tech savvy. And it will NOT make your apps better or more maintainable. It WILL reduce the cost of hardware and space that you maintain. It WILL fail sometimes. It WILL eliminate certain maintenance procedures, but trade them for others.

Tuesday, December 2, 2014

Handling Changing Data in Applications

I encountered a defect yesterday that led to some thoughts about handling computation source data that is external and subject to change. The scenario is as follows:

The defective application stores work done and sends billable work to a billing system after monthly batch approvals. The rate billed depends on the shift and position of the person doing the work.

Some shifts/positions are billable and others are not. Only billable time is sent to the billing system. A billing code is computed and sent to the billing system along with other data. The code is derived from a lookup table keyed on shift and position.

An employee who billed time can change from a billable shift/position to a non-billable one in the interim between the work being done and the export to billing. The export can only be performed after the approvals are done by the supervisors. This is where we have our defect, and there are a few ways to resolve it.

One way to resolve the issue, in my opinion the best way, is to look up the data from the source system (which, by the way, is not the application in question). In this case we would need to connect to the HR system, which we don't currently do and which would require considerable changes to the enterprise architecture to accomplish. The right way is, of course, the most difficult.

Another option, which is less intrusive, is to save a copy of the data in our application and use our copy to compute the result. This has its own flaws. If the source system is incorrect and is retroactively corrected, then our copy is also incorrect but may never be corrected. This would be rare in our case and probably not worth haggling over, but it could be disastrous in other systems.

Another consideration with the latter approach is which data to store. Do we store the computed data or the source data? To answer this question, we need to consider the source data and the lookup data. How likely is each value to change? What is the outcome if the data does change? Will it have a desirable or undesirable effect? For example, our billing system may change, and the values in the lookup table that we send may need to change to accommodate it.

In our case, saving the shift and position at the time the work is done seems like the best option. But remember that we are making copies of data; the better approach is still to get the data from the source. There are other things to consider here as well. Remember how the billing code is a lookup on shift and position in the application in question? Is there another system that should house this data? The billing system? Some other intermediary? Or is the work-tracking system appropriate?

The answer in this case is that it depends on who manages the data and who manages the merger of the data from the sources. In any case, whatever system houses the merged data should make it available to other systems via a standard data transfer interface so that it can be queried on demand without making copies.
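To make the "store a copy of the source data" option concrete, here is a minimal sketch of what capturing the shift and position at the time the work is recorded might look like. The type and member names (WorkEntry, BillingCodeLookup, the sample codes) are hypothetical, not the actual application's code.

using System;
using System.Collections.Generic;

// Hypothetical record of work done, capturing the shift and position
// as they were when the work was performed (our "copy" of the source data).
public class WorkEntry
{
    public int EmployeeId { get; set; }
    public DateTime WorkedOn { get; set; }
    public decimal Hours { get; set; }
    public string ShiftAtTimeOfWork { get; set; }
    public string PositionAtTimeOfWork { get; set; }
}

// Lookup keyed on shift and position; non-billable combinations simply have no entry.
public class BillingCodeLookup
{
    private readonly Dictionary<Tuple<string, string>, string> _codes =
        new Dictionary<Tuple<string, string>, string>
        {
            { Tuple.Create("Night", "Technician"), "BILL-NT-01" }, // sample values only
            { Tuple.Create("Day", "Engineer"), "BILL-DE-02" }
        };

    // Returns null when the shift/position combination is not billable.
    public string GetBillingCode(WorkEntry entry)
    {
        string code;
        return _codes.TryGetValue(
            Tuple.Create(entry.ShiftAtTimeOfWork, entry.PositionAtTimeOfWork), out code)
            ? code
            : null;
    }
}

At export time, the billing code is computed from the values captured with the work entry, so a later shift or position change in HR no longer silently changes what gets billed - which is exactly the trade-off discussed above.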

Friday, November 21, 2014

Design with the customer and the 99% rule

Today I had a meeting with the customer that I consider a victorious event. I was thrown into an iteration to address some backlog items this week. I have about a month to do this, so I started out working with the customer to refine the product backlog items for this iteration.

The product in question is an internal application and the customers are another department within the organization. I have seen some of this application before, but I am generally unfamiliar with the workflow and the terminology involved. We began by getting me up to speed with the workflow. Next, we tackled the more specific issues one at a time. Some of the backlog items are still a bit fuzzy, but we were able to design acceptable solutions for the ones we went over.

Having the primary business resource at hand was integral to designing successful solutions. The three of us were able to quickly assess the situation, with code and data at hand, and get to the real solutions without having to ask too many "what ifs?" that clog up the development process. In the past I've asked every "what if?" I could think of until I was overcommitted to the point of under-delivering. This could have been avoided simply by having the customer at hand to discuss the solution.

Thursday, November 20, 2014

How To Update NuGet in a Solution

If you build on a build server and see some error like:

EXEC: 'Unity' already has a dependency defined for 'CommonServiceLocator'.

You probably have a NuGet.exe that's out of date. Here's how to update it with Visual Studio 2013/2012.

Step 1: Go to the solution folder on your local machine and remove the .nuget folder (or rename it to .nuget.bak for now, in case you need some custom settings from the .config or .targets files later).

Step 2: Open solution in Visual Studio.

Step 3: Right-click the solution in Solution Explorer and select the "Enable NuGet Package Restore" option.

*Say yes in the dialog that follows.

Step 4: Check in your changes.


There you have it! The easiest way to update NuGet in a Visual Studio solution.

Friday, November 7, 2014

Suddenly - AOP, WCF, and Service Orientation

Suddenly all roads in my path of knowledge lead to AOP. It is all around me at the moment. Is it a trending topic in software, or have I just not been around the right circles until now?

Anyway, now to expand on the AOP-in-WCF topic that I introduced a couple of posts ago.

After writing this, I felt compelled to add that this post is broad because of how much information I've processed this week. I apologize for not being more focused, but there were so many lessons learned, and I feel I need to catch up since I've been neglecting this blog lately.

It's pretty cool that you can hook into the message processing pipeline in WCF. That means you can intercept messages on the way in and out. You can access the headers, log the message, log details, log exceptions, and more. That's the same thing you get from the IoC way of doing this. So why should you host a bunch of services just to get AOP? It turns out that you don't have to host them separately.
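Here's a minimal sketch of that hook on the service side: an IDispatchMessageInspector that logs incoming and outgoing messages. The Trace calls are just stand-ins for whatever logger you actually use, and the inspector still has to be attached through an endpoint or service behavior, which I've left out for brevity.

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Intercepts every message entering and leaving a service endpoint.
public class LoggingMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel,
        InstanceContext instanceContext)
    {
        // Inspect headers or log the incoming message here.
        System.Diagnostics.Trace.WriteLine("Incoming action: " + request.Headers.Action);
        return null; // correlation state handed to BeforeSendReply
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Log the outgoing message here.
        System.Diagnostics.Trace.WriteLine("Outgoing reply: " + reply.Headers.Action);
    }
}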

WCF has the netNamedPipeBinding, which uses binary serialization and can be hosted in the same process as its consumer. So... if you have your business logic divided into a few domains, you could put each in its own project that can be hosted in the same process as any workflow.

WCF has some built-in logging at the message level, and you can implement your own writers or use the built-in log writers. Or you can build your own interceptors and do all the same pre-call, post-call, exception logging and auditing as you would with any class method.

The added benefit of using WCF in-proc is that you can use the headers to share information and log it. If you do this, you always have the option of switching to remote hosting to scale out. Perhaps one specific service has a long-running process and needs to continue after the root application closes.
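For example, a client can stamp a custom header on an outgoing call and the service can read it back for logging or correlation. This is a minimal sketch; the IMyService contract, header name, and namespace are made up for illustration.

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

// Hypothetical service contract used throughout these sketches.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    void DoWork();
}

public static class HeaderSharingExample
{
    private const string HeaderName = "CorrelationId";
    private const string HeaderNamespace = "urn:myapp/headers";

    // Client side: stamp a custom header onto the outgoing call.
    // 'proxy' is assumed to be a ChannelFactory-created IMyService channel.
    public static void CallWithHeader(IMyService proxy)
    {
        using (new OperationContextScope((IContextChannel)proxy))
        {
            OperationContext.Current.OutgoingMessageHeaders.Add(
                MessageHeader.CreateHeader(HeaderName, HeaderNamespace, Guid.NewGuid().ToString()));
            proxy.DoWork();
        }
    }

    // Service side: read the header back (to log, audit, correlate, etc.).
    public static string ReadCorrelationId()
    {
        return OperationContext.Current.IncomingMessageHeaders
            .GetHeader<string>(HeaderName, HeaderNamespace);
    }
}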

Consider a Windows app that runs on a workstation in the intranet. The application has some process that eventually becomes long-running because the business rules changed. It needs to run async for the sake of the user experience. However, the user may fire-and-kill (pull the plug, end task, switch the power off), or the process becomes a resource issue, or the business logic changes frequently enough that it needs to be centralized for cleaner deployments. (I'll grant that not centralizing it was poor design in the first case, but this was the first example that came to mind.)

In any case, we can now offload the process to a scalable server environment with just a few changes to the configuration settings. It's still in the internal network, only now it's not local to the process or the machine. This implies at least three possible deployments, but there is at least one more, with variations of its own.

The service can live on the internet as well as in the aforementioned in-proc, on-machine, or in-network locations. If the service is exposed over the internet, that implies public accessibility. It could be hosted in the cloud or on a remote network. If it's on the net, in the cloud, you could tunnel to it and really take advantage of scalability.

I'm not making any strong recommendations here about where to host your services. These are just some of the available options; consider what is appropriate. If you have an intranet and don't need to make your service publicly accessible, HTTP isn't necessary and will only slow your communications down. In-proc with net.pipe will offer the best communications performance. There's no reason you couldn't host the same service in more than one application, or even more than once in the same one.

Here's how the code looks to start the in-proc service: http://msdn.microsoft.com/en-us/library/ms731758(v=vs.110).aspx
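Since the link covers the details, here is only an abbreviated sketch of the idea: hosting over net.pipe and consuming from the same process. The contract, implementation, and address are placeholders, not production code.

using System;
using System.ServiceModel;

// Same hypothetical contract as in the header sketch above.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    void DoWork();
}

public class MyService : IMyService
{
    public void DoWork()
    {
        Console.WriteLine("work done in-process");
    }
}

public static class InProcHostExample
{
    public static void Main()
    {
        var baseAddress = new Uri("net.pipe://localhost/MyService");

        // Host the service in the current process over a named pipe.
        using (var host = new ServiceHost(typeof(MyService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(IMyService), new NetNamedPipeBinding(), string.Empty);
            host.Open();

            // Consume it from the same process through a channel factory.
            var factory = new ChannelFactory<IMyService>(
                new NetNamedPipeBinding(), new EndpointAddress(baseAddress));
            IMyService proxy = factory.CreateChannel();
            proxy.DoWork();

            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}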

I was thinking about spinning hosts up in a web app, but this may be better run as a Windows service.

  • A host may spin up other hosts for its dependencies. You can use IoC to create them: IMyService service = container.Get<IMyService>();
  • Consider disposing of a host once it's no longer needed.
  • Let WCF handle threading; there are settings for that.
  • Calls can be run as a Task for async use, but callbacks are also possible, so use those if desired; just don't dispose of the service before the callback fires.
  • Transactions can be coordinated across services via the DTC. There could be locking issues, so be careful to avoid deadlocks. If possible, encapsulate any transaction in a service method; the transaction can originate outside the call or inside it: using (var scope = new TransactionScope()) { clientProxy.DoWork(); scope.Complete(); }. Just make sure the transaction flows into the service. A commit within the service for a flowed transaction does not commit the transaction, it only votes yes. See DTC for more info (gtfy), and see the sketch after this list.
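Here is a rough sketch of what flowing a transaction into a WCF operation can look like; the IBillingService contract is hypothetical and error handling is omitted.

using System.ServiceModel;
using System.Transactions;

[ServiceContract]
public interface IBillingService
{
    // Allow the caller's ambient transaction to flow into this operation.
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void RecordCharge(decimal amount);
}

public class BillingService : IBillingService
{
    // Enlist in the flowed transaction; completing here only votes yes,
    // the DTC commits once every participant has voted.
    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void RecordCharge(decimal amount)
    {
        // ...do transactional work (database writes, etc.)...
    }
}

public static class TransactionFlowExample
{
    public static void Call(IBillingService clientProxy)
    {
        // The ambient transaction flows to the service only if the binding
        // has transaction flow enabled (e.g. new NetNamedPipeBinding { TransactionFlow = true }).
        using (var scope = new TransactionScope())
        {
            clientProxy.RecordCharge(42m);
            scope.Complete(); // the client's vote; the DTC performs the actual commit
        }
    }
}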
In this post, I provided an introduction to the possibilities of doing AOP in WCF, along with reasons to host chunks of code in a service. I made some points about hosting services in process when appropriate, as well as a few other general points about WCF.

This information is not complete without mentioning an important consideration: this is a service-oriented approach to programming, and it is generally more procedural in style, in contrast to object-oriented programming, which is more functional in style. While the services can be more or less OO inside, they generally should capture a specific unit of work. In some cases the unit of work is generic and in other cases it is very specific. The unit of work may be a composition of several services, or wholly encapsulated in one service. The services may be more generic and shared, or very specific and not shared.

Now that the whole surface has been scratched, it should be clear to the reader that there are compelling reasons to use WCF and to understand service-oriented programming as it evolves into its future form.

Wednesday, November 5, 2014

A bit about AOP in .NET

Today I learned a little about Aspect Oriented Programming (AOP) in .NET.

Basically, AOP lets you add functionality to your system without altering the business logic code. It allows developers to add cross-cutting concerns with a clean separation. For example, if you want to log all exceptions in a certain way, you could either wire up the logger in every one of your methods, or treat exception logging as an aspect that you apply to all methods.

The way you'd hook in the exception logging is to write your logging method, write your aspect to wrap method calls in a try-catch, and then use either an IoC container to wrap your classes in proxies or an IL rewriter to weave the aspect into your compiled code.

The added functionality is called advice, the place where you hook in is called a join point, and you can filter join points to get pointcuts. A pointcut selects the specific join points that you want to target (like a particular method with certain input values).

Here's a bit about each approach:

IoC - IoC frameworks can wrap your implementation of an interface in a proxy that implements the same interface, so when you request the implementation from the container, you get the proxy. If the framework offers a way to intercept the method call, you can add your functionality around the method implementation.

IoC frameworks take a bit of work to set up and work best if you write good OO. Once you have that, you can rock your socks off and stop repeating yourself.
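For a concrete flavor of the proxy approach, here's a minimal sketch using Castle DynamicProxy (just one option; the framework choice is my assumption, not something prescribed here) to wrap an interface with exception logging.

using System;
using Castle.DynamicProxy;

public interface IOrderService
{
    void PlaceOrder(int orderId);
}

public class OrderService : IOrderService
{
    public void PlaceOrder(int orderId)
    {
        // ...business logic only, no logging concerns here...
    }
}

// The "advice": wraps every intercepted call in a try-catch and logs failures.
public class ExceptionLoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        try
        {
            invocation.Proceed(); // call the real method
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception in " + invocation.Method.Name + ": " + ex.Message);
            throw;
        }
    }
}

public static class AopExample
{
    public static void Main()
    {
        var generator = new ProxyGenerator();
        IOrderService proxied = generator.CreateInterfaceProxyWithTarget<IOrderService>(
            new OrderService(), new ExceptionLoggingInterceptor());

        proxied.PlaceOrder(42); // any exception thrown inside is logged, then rethrown
    }
}

In a real setup the container (Unity, Castle Windsor, Ninject, and so on) would hand back the proxy whenever the interface is resolved, so calling code never knows the interceptor exists.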

IL rewrite - you write your code, then your wrappers. When the assembly is compiled, it runs through a post-build step that weaves the aspect code into the business code.

Generally, AOP is beneficial because it lets you separate system logic from business logic: functional-requirement code from non-functional-requirement code. Also, when unit testing your BL, all the other stuff can stay out of the way if you do it right (keep them separate until you integrate, then test the integrations).

I would recommend keeping the BL, the AOP code, and the wiring code in separate assemblies so that they can be composed into different arrangements (one for unit testing and one for production). Adding the aspect should be configurable, so you could even have different wrappers for each environment. This is a clear drawback of automatic IL weaving at build time if it doesn't let you make that choice dynamically.
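As a sketch of that configurability, a composition root might decide whether to apply the aspect based on an app setting. The setting name is made up, and the types come from the previous sketch.

using System.Configuration;
using Castle.DynamicProxy;

// Composition root: decides at startup whether to apply the aspect,
// reusing the IOrderService/OrderService/ExceptionLoggingInterceptor types
// from the earlier sketch. The "UseExceptionLoggingAspect" key is hypothetical.
public static class CompositionRoot
{
    public static IOrderService ResolveOrderService()
    {
        IOrderService service = new OrderService();

        bool useAspect;
        bool.TryParse(ConfigurationManager.AppSettings["UseExceptionLoggingAspect"], out useAspect);

        if (!useAspect)
        {
            return service; // e.g. unit-test arrangement: plain business logic
        }

        // e.g. production arrangement: wrap with the logging aspect
        var generator = new ProxyGenerator();
        return generator.CreateInterfaceProxyWithTarget<IOrderService>(
            service, new ExceptionLoggingInterceptor());
    }
}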

Another approach is to apply AOP to message-based interactions by intercepting the messages. This can be done in WCF by hooking into the message pipeline. The forthcoming Actor Model implementation known as Orleans may or may not offer the ability to intercept messages; I haven't had the opportunity to explore it yet.

In summary, AOP is a great way to apply separation of concerns, so plan your development efforts accordingly so that you can use it later without having to modify too much code to hook it in correctly.

Tuesday, November 4, 2014

AO & SO

While I was attending a course on advanced WCF and Service Orientation (SO) via IDesign, the instructor dropped a couple of terms: "Actor Model" and "Microsoft Orleans". I did the search thing and found that there is a relationship between SO and Actor Orientation (AO).
AO (my own term) seems to be similar to OO but with distinct boundaries. It's been around a while and seems to be making its way back to the surface.

Orleans is an implementation of AO in Microsoft Windows Azure. It was used in HALO 4. Read more here. The value proposition of Orleans is that all Actors in the system are virtual - they exist even if they don't and they are distributed. This implied that some state data is stored in a way that can be retrieved from any node in the network. All messages between Actors result in a promise that, ultimately should be fulfilled. Once it is, some callback if fired. These concepts should be familiar to those who are following the js community. CouchDB, a nosql document store, uses an eventual consistency model to provide support for distributing databases anywhere for horizontal scaling. Essentially the db can be materialized locally or anywhere where access to another copy is available. EmberJS, an SPA MVC framework, uses promise to populate views asynchronously. I'm probably missing something big with AO, like remote code execution or some other gigantic benefit, but I'll definitely be learning more about it. Likely I'll have more to blog about the topic when I learn more. Will tag AO and Actor Model.