Friday, December 19, 2014

Is Better Software Your Desire?

On Jim Bird's post titled "If you could only do one thing to make better software, what would it be?" on his blog Building Real Software (which has some fantastic posts and of which I am newly a fan), I made the following comment. I thought I would share it here as well.

Good software should be able to change as and if the business needs change. The long-term benefits of writing software that is already decoupled, or can easily be decoupled (modular, reactive, whatever) are greater as the software can evolve at a lower cost. Good software has a lower TCO than bad software. A lower TCO can be achieved by either not changing or by changing readily. Bad software is ok as long as it doesn't need to change or isn't critical.

That being said, better software for the given situation can be achieved by connecting the developers/architects/engineers with the users and the business in a way that allows them (us) to understand the place of the software in the context in which it is intended to operate.

If you were hiring a building architect, engineers, and builders to build a structure, the purpose of the structure would define the design parameters. If the building is a shelter for farm equipment, then one could likely buy an off-the-shelf structure. Conversely, if the structure is intended to house offices and strive for energy efficiency, then the project managers must find adequate resources to design and build such a building. In the latter case, the costs and time will certainly be far greater. The product should be expected to net a positive gain over the course of its tenure at the selected location.

In software, where many parallels are made to structural architecture, it can be said that the expected cost and time should reflect the expected lifetime and ROI of the product. ROI can be actual profits or prevented losses. These factors should drive the qualities of the code more than any other, regulatory compliance aside.

Better software can and should be driven by better project management.

Wednesday, December 17, 2014

Reactive Manifesto Irony

While diving deep into the Reactive Manifesto, I encountered a link that resulted in the familiar no-connection error. Well, of course: I don't have a connection with my phone in the tunnel, so this is what I get. If the developers followed the manifesto, however, I would get a different response.

This gets a bit into the heart and soul of reactive systems. Perhaps in a reactive design, the browser would do a few things differently. It might check for a connection first, then check whether a DNS server is available, and then do the lookup. If no connection is available, it might offer to try again later automatically and let you know when the page is loaded for viewing. Why should the user wait and watch the progress bar while the page loads? Just deliver the page after it's done!
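To make the idea concrete, here is a minimal sketch of that "try later and notify me" behavior. Everything in it is invented for illustration: the FlakyNetwork class simulates a connection that comes back after a few attempts, and notify stands in for whatever UI notification the browser would use.

```python
import asyncio

class FlakyNetwork:
    """Simulated connection that comes back after a few failed attempts."""
    def __init__(self, fails_before_success=3):
        self.remaining_failures = fails_before_success

    async def fetch_page(self, url):
        if self.remaining_failures > 0:
            self.remaining_failures -= 1
            raise ConnectionError("no connection")
        return f"<html>contents of {url}</html>"

async def fetch_with_retry(net, url, notify, retry_delay=0.01, max_tries=20):
    """Keep retrying in the background; notify the user only when the page is ready."""
    for _ in range(max_tries):
        try:
            page = await net.fetch_page(url)
            notify(f"{url} is ready to view")
            return page
        except ConnectionError:
            await asyncio.sleep(retry_delay)  # back off, then try again later
    notify(f"gave up on {url}")
    return None

net = FlakyNetwork(fails_before_success=3)
page = asyncio.run(fetch_with_retry(net, "http://example.com", notify=print))
```

The user never watches a spinner: the retry loop runs asynchronously, and the notification arrives only once the page is actually available.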

Think of some of the newer automated phone systems where the system will call you back when an operator is available so you don't have to drag on the line listening to distorted elevator music and the same repeated message until they can catch up to you. This is part of being reactive. If the phone centers were truly reactive they would be able to add operators as demand increased in order to decrease wait time/increase capacity. A browser might not be able to scale in a similar way, but it can behave in an asynchronous way with respect to the user.

Wednesday, December 10, 2014

nosql document storage locking

MongoDb locks at the database level, so more databases may be better.

CouchDb does not lock; instead it uses MVCC (multi-version concurrency control), which means every update creates a new version of the document. This gives automatic auditing, at the cost of bigger database files. The cleanup mechanism for this is called compaction.
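A toy sketch of the MVCC idea (this is not CouchDb's actual API, just the technique): writes never block readers, each update appends a new revision, stale writers get a conflict instead of waiting on a lock, and compaction throws away old revisions.

```python
import itertools

class MvccStore:
    """Toy document store: updates never lock; each write appends a new revision."""
    def __init__(self):
        self.revisions = {}              # doc_id -> list of (rev, body)
        self._counter = itertools.count(1)

    def put(self, doc_id, body, expected_rev=None):
        history = self.revisions.setdefault(doc_id, [])
        current_rev = history[-1][0] if history else None
        if expected_rev != current_rev:
            # No waiting on a lock; the caller must re-read and retry.
            raise ValueError("conflict: document was updated by someone else")
        new_rev = next(self._counter)
        history.append((new_rev, body))  # old versions remain: automatic audit trail
        return new_rev

    def get(self, doc_id, rev=None):
        history = self.revisions[doc_id]
        if rev is None:
            return history[-1]           # readers always see the latest committed revision
        return next(entry for entry in history if entry[0] == rev)

    def compact(self, doc_id):
        """The cleanup mechanism: discard all but the latest revision."""
        self.revisions[doc_id] = self.revisions[doc_id][-1:]

store = MvccStore()
r1 = store.put("doc1", {"qty": 1})
r2 = store.put("doc1", {"qty": 2}, expected_rev=r1)
# A writer holding a stale revision gets a conflict instead of blocking:
try:
    store.put("doc1", {"qty": 99}, expected_rev=r1)
except ValueError:
    pass
```

Note the trade-off the post describes: the history list grows on every update (bigger files) until compact is called.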

Looking for a document-level locking mechanism, but it probably wouldn't make sense. Imagine a map/reduce that passes over one or more docs that are receiving updates. I guess the read would need to wait if any of its read targets are locked, but that would mean checking for locks on each doc, which could be costly. Not sure, just a hunch.

Thursday, December 4, 2014

The Mythical Cloud

The cloud, in its modern form, is relatively new to me. I've been doing some research and exploration in this space, "trying on" a few of the offerings in Windows Azure, and gaining some insight into AWS. Here are a few of the things I learned along the way: some of the luster and myth of cloud platforms appears to be either marketing fluff or misunderstanding. Following is my take on it. I'll be stating what I found to be benefits, costs, and myths.

IAAS and PAAS benefits

environment setup - this is extremely simplified, especially when you need something that is readily available as a pre-packaged VM. Otherwise, you still need to set up your own VM and manage it like any other VM environment. From a PAAS perspective, the infrastructure is generally ready to go. Developers targeting PAAS need to build the software in a way that can take advantage of the scalability PAAS offers. There may be some learning curve, but overall not much.

access from anywhere - depending on the security models and service levels involved, cloud offerings may be made available from anywhere. This may not be true of a private cloud; however, there are some tools available to develop code right in Azure's online portal. This version of Visual Studio has some bells and maybe a few whistles, but it is certainly not full blown. It can do some HTML and JavaScript editing with IntelliSense, and can check in against a repository.

scalability - this is the big sell, IMO. If the app is built in a way that can scale nicely (see Reactive Manifesto and dig further for more details), then the cloud and particularly PAAS truly has value!

built in management and monitoring - there are some great tools built into the platforms for monitoring applications running on them. That said, it seems you get what you get; perhaps it is possible to develop your own custom tools to extend the dashboard - I haven't dug in yet.

cost - this is the big sell to the business from a financial standpoint, arguably a bigger sell than scalability. Not much to say other than you pay for what you use. You may have to manage your scaling actively to really benefit, though, depending on a number of factors that you can probably sort out, so I won't dive in just now.

IAAS and PAAS drawbacks

less customization - PAAS especially.

relies on public space - availability, security. In a recent conversation, a colleague mentioned shared drives and the data and privacy rights of clients. The bottom line: what if the Feds have to confiscate the disks because of some other tenant, and your clients' data is on there? Good point. Hybrid solutions may be the answer there, but then you're really taking on the added complexity of data management.

maturity - knowledge, resources, help. This is becoming less of an issue as cloud technologies mature, but it is still worth consideration - otherwise this blog post would have no value and I wouldn't be writing it.


configuration and management - it's not as simple as some would have you believe. This seems to be the main myth from the technical standpoint.

patches and breaking changes - patches are automatically applied in PAAS, and we don't have control over when and how. Breaking changes automatically break my apps. Is this a real problem or am I making this up?

cost - cost savings are not guaranteed. The cost of hardware is always on the decline. Reducing staff may not be as simple as claimed, given the work of managing cloud resources. Usage may be less than optimal due to poorly written code or poorly managed cloud resources.

So, the story I've told is that the cloud is great at what it does. That cloud vendors boast about who is using it does not support any specific need to use it. Cost savings and ease of use depend on your own parameters. The decision, like any major decision, should be based on an understanding and analysis of the costs and benefits for your own organization. It is important to see through the marketing so that it does not misinform the decision to go to the cloud, particularly if you already have infrastructure in place. If you are a developer or technical person, use the cloud in any way you can, because that is the way to be truly informed: through experience.

Wednesday, December 3, 2014

Is there a cloud myth?

There's a new kid on the buzzword block. He's been lurking in the shadows for a bit but he's taken center stage. His name is Cloud. He is a mystery and as we learn more about him, he remains more mysterious. This Cloud seems like he can do just about anything, he can do so much! He hosts databases and applications, offers insights and metrics, he is everywhere and nowhere all at once. He is a myth and a legend. It seems he can do it all!

Even with this Cloud guy around, though, there's still software that has to do things with data and communicate with other software and people. That aspect hasn't changed, so why does the cloud suddenly have all the answers? The issues I see with software aren't with the software; they're with people.

Software can be whatever we make it. It can make good decisions for us, it can stay awake always, it can crunch numbers and transform data, it can provide endless hours of entertainment and automate almost anything, as long as we enable it to. The disconnect is that most people don't know how to enable it, or how to quantify things in a way that's consumable by software. Sure, we're getting better at that, and some people have this mostly figured out, but there are so many who don't, making countless decisions under the gun, that the magic bullet of the cloud isn't going to save anyone.

In fact, there is no magic bullet. No application, nor platform, nor engineer can solve all your IT problems, no matter what their marketing departments tell you. This is the myth they build to sell product, when we know that in reality it's only a shift in complexity. Any system has a finite lower bound of complexity. I define a system in this sense as a collection of modules that perform some activity. Any oversimplification below this bound would make the system incomplete.

Complexity costs. In software, cost is usually measured in effort. The best way to reduce cost is to simplify the business processes that dictate the software. Complex business processes make software more complex, which results in more effort in analysis, planning, development, and maintenance. Reducing the impact of business rules and constraints on software results in lower up-front costs and a lower TCO.

Despite the myth, the cloud will NOT make your software better, whether you developed it or bought it. It will NOT make your users more aware and tech savvy. And it will NOT make your apps better or more maintainable. It WILL reduce the cost of the hardware and space that you maintain. It WILL fail sometimes. It WILL reduce certain maintenance procedures, but trade them for others.

Tuesday, December 2, 2014

Handling Changing Data in Applications

I encountered a defect yesterday that led to some thoughts about handling computation source data that is external and subject to change. The scenario is as follows:

The defective application stores work done and sends billable work to a billing system after monthly batch approvals. The rate billed depends on the shift and position of the person doing the work.

Some shifts/positions are billable and others are not. Only billable time is sent to the billing system. A billing code is computed and sent to the billing system along with other data. The code is derived from a lookup table keyed on shift and position.

An employee who billed time can change from a billable shift/position to a non-billable one in the interim between the work being done and the export to billing. The export can only be performed after the approvals are done by the supervisors. This is where we have our defect, and there are a few ways to resolve it.

One way to resolve the issue, in my opinion the best way, is to look up the data from the source system (which is not the application in question, btw). In this case we would need to connect to the HR system, which we don't currently do and which would require considerable changes to the enterprise architecture. The right way is, of course, the most difficult.

Another option, which is less intrusive, is to save a copy of the data in our application and use our copy to compute the result. This has its own flaws: if the source system is incorrect and is retroactively corrected, then our copy is also incorrect but may never be corrected. This would be rare in our case and probably not worth haggling over, but it could be disastrous in other systems.

Another consideration with the latter approach is which data to store. Do we store the computed data or the source data? To answer this question, we need to consider both the source data and the lookup data. How likely is each value to change? What is the outcome if the data does change? Will it have a desirable or undesirable effect? For example, our billing system may change, and the values in the lookup table that we send may need to change to accommodate it.

In our case, saving the shift and position at the time the work is done seems like the best option. But remember that we are making copies of data; the better approach is to get the data from the source. There are other things to consider here as well. Remember how the billing code and the shift and position data are a lookup in the application in question? Is there another system that should house this data? The billing system? Some other intermediary? Or is the work-tracking system appropriate?

The answer in this case is that it depends on who manages the data and the merger of the data from the sources. In any case, whatever system houses the merged data should make it available to other systems via a standard data-transfer interface so that it can be queried on demand without making copies.
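The snapshot approach above can be sketched as follows. The field names, shift/position values, and billing codes are all invented for illustration; the point is only the technique: capture shift and position at the time the work is recorded, so a later HR change cannot alter the billing code for work already done.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical billing-code lookup keyed on (shift, position); values are invented.
# Pairs missing from the table are non-billable.
BILLING_CODES = {
    ("day", "technician"): "BC-100",
    ("night", "technician"): "BC-101",
    ("day", "supervisor"): "BC-200",
}

@dataclass(frozen=True)
class WorkRecord:
    employee_id: str
    work_date: date
    hours: float
    shift: str      # snapshot taken when the work was recorded,
    position: str   # not the employee's current HR assignment

def export_billable(records):
    """At export time, derive billing codes from the snapshotted shift/position.

    A shift/position change in HR after the work was recorded no longer
    affects the export; non-billable combinations are simply skipped.
    """
    exported = []
    for rec in records:
        code = BILLING_CODES.get((rec.shift, rec.position))
        if code is not None:  # only billable shift/position pairs are sent
            exported.append({"employee": rec.employee_id,
                             "hours": rec.hours,
                             "code": code})
    return exported

records = [
    WorkRecord("e1", date(2014, 12, 1), 8.0, "day", "technician"),
    WorkRecord("e2", date(2014, 12, 1), 8.0, "day", "clerk"),  # non-billable
]
```

The trade-off discussed in the post still applies: the snapshot freezes a copy of HR data, so a retroactive HR correction would not flow through to these records, whereas a live lookup against the source system would pick it up.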