Friday, April 24, 2015

Feature Solution Problem

Consider a feature that can be built in a few different ways. Solution 1 is estimated at 8 weeks and is the gold-plated version. Solution 2 is also estimated at 8 weeks and is the silver version. Solution 3 is estimated at 6 weeks and is the bronze version. Which option do you choose? Certainly not the silver, since it takes the same time to produce gold, right?


We don't have all the information to make an informed decision. We don't know the initial cost of each solution. We don't know the long-term TCO (Total Cost of Ownership). We don't know the risks.


Let's put up some more info:


Sol 1: Needs 6 human resources at a cost factor of 8.6. Requires licensing at a year-over-year (YoY) cost factor of 0.2. Requires 2 new long-term commodities at a cost factor of 3.3. Has a potential impact on other projects due to resource constraints.


Sol 2: Needs 4 human resources at a cost factor of 9. No licensing required. Requires 1 new commodity at a cost factor of 2.3. Has a potential impact on other projects.


Sol 3: Needs 4 human resources at a cost factor of 6. No licensing. No new commodities. No risk to other projects. Has long-term support risks, estimated at a YoY cost factor of 1.1.


Which is the best option? We still haven't normalized the information. We don't know the long-term costs of Solutions 1 and 2. We don't know the details of the cost to other projects...something more to think about.
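One way to start normalizing is to roll each option's factors up over a chosen horizon. Here is a minimal sketch; the cost model (initial + commodities + yearly recurring × years) is my own assumption for illustration, and the numbers are the cost factors from the descriptions above.

```javascript
// Minimal sketch: normalize each solution's cost factors over an N-year
// horizon. The model (initial + commodities + yearly * years) is an
// assumption for illustration; the numbers are the post's cost factors.
function totalCost({ initial, commodities = 0, yearly = 0 }, years) {
  return initial + commodities + yearly * years;
}

const solutions = {
  sol1: { initial: 8.6, commodities: 3.3, yearly: 0.2 }, // licensing YoY
  sol2: { initial: 9.0, commodities: 2.3 },
  sol3: { initial: 6.0, yearly: 1.1 },                   // support YoY
};

for (const [name, s] of Object.entries(solutions)) {
  console.log(name, totalCost(s, 5).toFixed(1));
}
```

Notice the answer flips with the horizon: at 4 years the bronze option is cheapest, but by 5 years its recurring support cost has let silver edge it out. That's exactly why the unnormalized estimates alone can't pick a winner.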

Good Requirements, Good Requirements, Good Requirements

The incantation of this post's title will, unfortunately, not make good requirements appear out of nowhere. So what does? Teamwork!


Good requirements are all about communication. What is a software requirement/specification anyway but a communication from the consumer to the supplier? As in any form of communication, there must be a shared understanding of the nomenclature between the senders and the receivers.


Imagine someone called a contractor for an estimate to build a deck, but the consumer really meant a patio. Let's say Joe Consumer communicated the following whilst getting an estimate: "How much would it cost to build a big deck in my back yard?" What's the first thing that contractor would ask? How big? If Joe doesn't know the measurements, he'd have to give an approximation. Not being very savvy with measurements, because Joe never needed to be and he didn't pay attention in the 4th grade, Joe says, "From the garden to the kitchen window and to the end of the third fencepost." Well, since this is a phone call, the contractor obviously doesn't have enough information to give even the remotest estimate. So, says the contractor, "I'll have to come out and take some measurements to give an estimate."


That is to say, the contractor has to take the time to drive to the site, get out some simple tools, measure, and likely draw up some rough sketches before she can have any remote idea of the costs. The information the contractor needs could be what kind of wood, how many stairs, benches, flower boxes, shape, coatings, height, etc. So goes the conversation...until the consumer and the contractor reach a point where they cannot communicate any further because things are getting really confusing and difficult.


Finally, the consumer grabs a magazine and shows the contractor a picture of a stone patio. Somehow it all makes sense at that point. The customer's frustration with the contractor's obsession with wood and the contractor's confusion about why the customer would want the deck at ground level are all abated. There is clear communication.


The moral of the story is that communication can and should take several forms. Ultimately, what the contractor would likely do is draw a diagram with markings and notes and use that to build the patio. The contractor would include all the hidden parts that make the patio stable while perhaps referencing the picture for aesthetic touches.


In the end, it took a conversation about the details of the project to build the right thing. It's the same in software. If the consumer and builder of software are not talking about the same thing, details can become frustrating. This is why it is important for the askers and the builders to produce requirements/specifications together.


Addendum: Of course when the contractor gives the estimate, she says "I could do it for..." as they always do. Why do they say it this way? Let us pontificate...


Consider the following key elements: time, cost, and quality. The patio could be built for more cost and better quality, or it could be built fast for the same price but lower quality. So, what are the driving factors in the estimate that was given? Just something to think about...

Friday, April 17, 2015

Project Solar

I started doing some research into going solar at home. Not off-the-grid solar, but grid-tied solar. There are two basic types: off-grid and grid-tied. With off-grid, there are two options: with or without batteries. Go without batteries if you're only interested in using electricity when the sun shines. Perhaps the whole house wouldn't be off-grid; maybe only a sprinkler pump or something would. Going completely off-grid likely means you'll need batteries, lots of them if your area doesn't get much sun. Grid-tied means you can have year-round power from multiple sources (the grid and your solar array). Grid-tied also means that you can sell the electric company your excess. Tying in with the grid means a few things; first, you have to register with the electric company. There are certain compliance constraints that don't seem unreasonable.

Some basic jabbing around the web turned up some good information. Some things were a little surprising, actually, but make sense once you know how the cells work. Solar cells are gates. When the sun shines, the gates open, letting the electricity flow. All the cells in a panel are hooked together in series. Think of a channel with locks for ships to pass through. If any of the gates are not receiving sun, the whole panel is useless. A partial block reduces the effectiveness of the whole panel by half. Any shadows are bad, even from bare branches.

Besides the financial considerations, I'll need to consider any technical constraints such as shadows, trees, and the arc of exposure during peak and off-peak sun (summer and winter). This will translate to output, which would equate to return on investment. A kit with an array of ten 270-watt panels costs about $6k for the equipment. There is a registration fee for the electric co. There would be incidentals, plus an electrician to hook into the grid. A rough estimate for DIY (except the tie-in) is $10k for a system rated at around 400 kWh/month with 5 hours of sun per day.

I split that in half to 200 kWh/month for my forecasts. I could probably refine that and adjust for seasonal differences. Without A/C, my monthly usage is 800 kWh/month. The electric co pays wholesale prices for electricity. One possibility is to take an option for $1k and set up a single off-grid panel and use it to charge batteries for light usage. Perhaps the garage can run on that or something. In that case it might be useful to have the battery system. It could be cost prohibitive if cut-overs are involved. It would be nice to have one in place to see what the actual output is at different times of the year. In any case, I believe that this would not yet be able to tie into the grid. There were implications that at least a 2000W-rated system was necessary for selling to the grid. Perhaps there is a difference between grid-tied and selling back (also known as net metering).

Next steps:
- get wholesale price
- price of application
- can < 2kW systems be grid-tied without selling back?
- measure space and arc of visibility
- get hours of sun for locale
- any local codes?

This is going to be a three-year goal since I have other things going on. I'll be looking to post follow-ups as things move along.
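The back-of-envelope output math can be sketched like this. The 30-day month and the 0.5 derate (my halving of the rated figure) are assumptions from the post itself; everything else comes straight from the numbers quoted above.

```javascript
// Sketch of the back-of-envelope output estimate. Days-per-month is fixed
// at 30; the 0.5 derate mirrors the post's conservative halving.
function monthlyOutputKwh(panels, wattsPerPanel, sunHoursPerDay, derate = 1.0) {
  const daysPerMonth = 30;
  return (panels * wattsPerPanel * sunHoursPerDay * daysPerMonth * derate) / 1000;
}

const rated = monthlyOutputKwh(10, 270, 5);          // 405 kWh/month, close to the ~400 quoted
const forecast = monthlyOutputKwh(10, 270, 5, 0.5);  // 202.5 kWh/month, the halved figure
```

Plugging in actual measured sun hours for the locale (one of the next steps) would tighten this up considerably.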

Thursday, April 16, 2015

ASP.NET WebForms UpdatePanel Partial Postback JavaScript After pageLoad

While working on a legacy app in WebForms, I came across a situation where a control was not available to JavaScript after a partial postback. I took every measure to ensure that the function was called after pageLoad. At first, it was being called from OnClientClick. Then I switched it to call during pageLoad, endRequest, and every other client event. No matter what I did, neither $ nor document.getElementById could find the control in the DOM. It was like it didn't exist. Either that, or the id changed. It turns out the latter was true. The id changed after the partial postback. On the first load, the element had a regular id. After the postback it had a ctl00....._regularIdPart id. Hmmm...so I switched up the selector to $('[id$=regularId]'), which matches ids ending with that value.
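The suffix-matching idea can be sketched without jQuery too. The ids below are invented for illustration, but the shape mirrors what ASP.NET naming containers produce when they prepend the ctl00_..._ prefix to a ClientID.

```javascript
// Sketch of the suffix-match workaround without jQuery. ASP.NET naming
// containers prepend a ctl00_..._ prefix to the ClientID, so an exact-id
// lookup fails while a suffix match still finds the control.
// The ids here are made up for illustration.
function findByIdSuffix(ids, suffix) {
  return ids.find((id) => id.endsWith(suffix));
}

const domIds = ["header", "ctl00_ContentPlaceHolder1_regularId", "footer"];
const match = findByIdSuffix(domIds, "regularId"); // same idea as $('[id$=regularId]')
```

In a real page you'd use document.querySelector('[id$="regularId"]') or the jQuery selector above; this just shows why the suffix survives the postback while the exact id does not.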

Thursday, April 9, 2015

Learning Assembly Language

For self-improvement I decided to learn assembly programming. Funny where a "Yellow Brick Road" can take you. What inspiration led me to assembly? The Jargon File, of course. I can't quite recall how I stumbled upon the Jargon File, but it led me to want to learn assembly programming.

So far I'm reading this tutorial on the subject. I haven't done any of the programming yet. There is a try-it option, which is great; however, it doesn't quite work out on mobile, which is how I do my research for convenience. Don't you know there's already been a bit of payoff though? There has.

I was struggling with an existing feature in an app that had performance issues. As a result, it was caching HTML in a db. When the user navigated to the page and opened certain sections, an Ajax call was made to load the HTML from the db. We needed the HTML to be built on demand in order to implement a new requirement.

After updating the code to build the HTML when the user opens a section, I noticed performance issues. After some investigation, I saw that there were anywhere from fifty to a hundred database calls (IntelliTrace is a great tool for this). No wonder there were performance issues; database calls in loops and recursion will certainly have an adverse effect on performance. I had to improve that.

Thinking along the lines of assembly code, where values from memory are loaded into registers (built-in memory in the processor), the solution was to load all the values into memory from the db and switch all the logic to operate on the in-memory data. This wasn't as difficult as it sounds.

All the data used in the HTML was coming from 5 tables. Those tables contained metadata. At most, any table would have about 250 rows. In the builder class, I added an IEnumerable member for each entity type. In the class constructor, I loaded all the values that would be used to build the HTML: 5 calls to the db total.

Since we are using Entity Framework and LINQ, it was just a matter of encapsulating each call to the db and switching from context.EntityCollection to _entityCollection in all of the calls. The same LINQ statement applied, except for the .Include statements. I didn't end up having any problems in that respect, though, since all the included entities were loaded in the initial calls (I lied about the number of tables, but most of the others are 1:1 anyway, so not logically separate tables).

For example:

...
// One database round trip per call site
context.People.Where(p => p.IsActive && p.GroupNumber == groupNumberParameter).OrderBy(p => p.LastName).ToArray();
...


became:

...
// Same query shape against the collection loaded once in the constructor
_allPeople.Where(p => p.IsActive && p.GroupNumber == groupNumberParameter).OrderBy(p => p.LastName).ToArray();
...


Reducing the number of database calls helps for the same reason assembly programming can be more efficient when it reduces the chatter between the processor and memory: fewer round trips to slower storage. Of course, memory pressure may result if you aren't cognizant of the demand, so there are some caveats as always.
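The load-once pattern generalizes beyond Entity Framework. A minimal sketch, where fetchAll is a stand-in for whatever data-access call an app actually makes:

```javascript
// Sketch of the load-once pattern: one fetch up front, then filter the
// small in-memory collection instead of hitting the database per iteration.
// fetchAll is a stand-in for a real data-access call; the field names
// (isActive, groupNumber, lastName) are invented for illustration.
function buildGroups(groupNumbers, fetchAll) {
  const allPeople = fetchAll(); // one call total, not one per group
  return groupNumbers.map((g) =>
    allPeople
      .filter((p) => p.isActive && p.groupNumber === g)
      .sort((a, b) => a.lastName.localeCompare(b.lastName))
  );
}
```

The key property: no matter how many groups (or loop iterations, or recursion depths) consume the data, fetchAll runs exactly once. That only pays off when the table is small enough to hold in memory comfortably, which these ~250-row metadata tables are.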

Monday, April 6, 2015

Assumptions

The big evil in software, assumption, strikes again! Every assumption begins with an ass, and this time the ass was me. I assumed a security setting was available in a specific app, yet today it was pointed out that it is not. So here we are, late in a project, designing the least-evil implementation in order to hack it in. Note that our preferred implementation requires ripping out the application's authorization implementation completely and using a custom role provider; however, that is completely out of scope.


So, we meet up in the middle somewhere. But what can you do? Do you spend the time up front doing all the research to know everything about all of the features being implemented in a 3-4 month-er!? That would add at least a month to the schedule. Or take on a few surprises...or never do a 3-4 month-er. Break dem dere requirements up into the smallest chunks and flip 'em out to prod in little bitty chunks until you get what you want.

Wednesday, April 1, 2015

Sprint Duration

Some colleagues and I have been thinking a bit about the duration of a Sprint or Iteration and have decided that having a fixed duration is a bit of nonsense.


First of all, it forces us to plan our cycles in an awkward way by putting filler into an iteration. This causes a few problems, since it splits focused effort across more than one context. Instead of discussing, planning, and developing the login feature, we end up doing those for login and search, for example.


Secondly, it requires that we break up work that takes longer than the arbitrarily chosen timeframe. This is generally a good thing, but it can lead to awkward breaks in the work and improper elaboration and planning. Say, for example, that the team decides to do one part in Sprint 1 and the rest in Sprint 7 because other things were more important. Now months have passed, and of course there will be rework, because the elaboration of the final piece took place while Sprint 6 was under way.


Our take on it is to scrap this whole idea of arbitrarily time-boxed sprint cycles and base the cycle on when some feature/story/chunk of work is ready to go. Perhaps some minimum time would be appropriate. We are a small shop and we aren't set up to branch by feature. If we could, then features could be pulled rather than pushed. As it is, we can push per feature.


There is another reason for the push-per-feature method. When code sits too long, it grows fuzz. That's the only way I can describe the phenomenon. Somehow, when we revisit code we wrote months ago, it has grown little fuzzy edges of mystery around what was once perfectly clear in our minds. Ah well, I guess that smell means we'd better clean it up again and introduce more risk, right??? I guess this could happen if your feature B in Sprint 7 has a shared dependency with, or (please don't say it) shared code with, feature A from Sprint 2. Now what do you do? Retest Sprint 2!? Ha!!