
Thursday, September 4, 2014

UML Documentation: diagramming an existing application

I began creating diagrams today of a system that has very minimal diagrammatic documentation. There are many words written about it, but anyone new to the system would face a high learning curve getting caught up.

The diagrams are the beginning of the development design and planning phase of a project. I will be working with a new developer, albeit far more experienced than I am, on this project. I worked on this application during a previous project and have been supporting it as well.

For some time I have had the goal of learning documentation techniques. Recently, I watched a Pluralsight training video titled "Introduction to UML". I took studious notes and am now putting the new knowledge into practice. I have no guide, so I'm certain to make mistakes, but perhaps it is better to get that sorted out now so that I can begin to improve.

I started on paper, then using Visual Studio Ultimate 2013 I created a modeling project in the TFS Team Project folder that contained the application. I added a Use Case diagram and transferred (by re-creating) the core use cases into a single use case diagram. Next, I created a high-level component diagram of the whole system, which has several moving pieces.

This takes a bit of time, but not much. I was especially efficient since I was able to reference my notes and had gained a much better understanding from the video. I saw a previous diagram of mine and recall how long that took to produce because I had no idea what I was doing. Now I have at least some frame of reference.

Still, that crappy diagram was useful enough to convey much about the system to a BA who had not yet worked with that system. Without the diagram, it would have been really difficult to communicate about all the moving pieces.

When the new developer is on the project, I'm hoping for a similar experience in terms of being able to communicate with the aid of the diagrams I produce. I liken them to having a map when dropped in the middle of a forest. Now all we need is a compass, or better yet, GPS!

Wednesday, August 13, 2014

Process Documentation Style

I'm finding that I post more on planning and management than on technical topics. This stems from my current efforts more than anything.
The main thing I want to cover is that a process is repeatable. If it's not understood and documented, then there ain't no process. Or, as I just found out, it's the components of the business process that do not exist.

http://m.businessdictionary.com/definition/process.html

However, I would argue that a process which is different for each individual or each instance of input cannot be captured as a single meaningful process. To understand a current process in that state, considerable effort is required to understand it from each point of view.

Unless a consistent process has been defined, the de facto process is to collect whatever procedures make sense at the time in order to achieve the outcome.

In some cases, perhaps in a results-oriented environment, ad hoc may be the best approach. This style of process management relies on the individual more than the structure of the process.

There are certain risks involved in taking such an approach, including unrepeatable results and inconsistent data for analysis. Another drawback is the increased risk of taking on technical debt, depending on the individual.

The benefits are that the end result can be achieved without following a possibly complex and lengthy chain of procedures - results are dependent upon the individuals involved.

A well-defined process can be implemented in a process-oriented environment. This style is suitable for consistent and repeatable behavior, as long as people are following the process.

The major risks are that a process may be ill-defined, overbearing, or not well understood. It may stifle creativity, and it may lead people to go off process if it is not understood, is too complex to follow, or if the time to results is inadequate for the situation.

The benefits can be great once data is tracked and proper analysis techniques are applied. Resources can be directed according to the analysis, and long-term planning can be better achieved.

Possibly one of the most important aspects of defining a process is to keep the long-term goals in mind. If the process becomes heavy for the sake of the process, it may do more damage than good.

Sometimes, tools can limit the performance of a process. Appropriate tooling should be applied to the appropriate process; the process should not be limited or hindered in effectiveness by the tools.

Wednesday, August 6, 2014

listen up...to your users

I overheard an interesting conversation today regarding the use of encrypted storage. A wise person had some very good ideas on the topic. The advice, backed by some good points, was to leave it up to the responsible individual to decide what should be encrypted and what should not.

One of those points was that folks in IT often assume too much about users without getting a picture of reality. We live in a bubble of misconception and assumption when we don't reach out to our users for input on decisions we are making for them. It's one thing to give users the tools to do what they need to do; it's quite another to restrict users without accurate input. And it's even worse to "Big Brother" users under the assumption that they need to be cowed into submission and watched over.

One such "Big Brother" practice of storing attempted passwords in a database borders on unethical. How many times have you forgotten your password but you know it's either something like this, or maybe that other one? Is that the same password that you use for your bank account? Hope not!  Shame on you if it is, but shame on those who decided to store that information! My advice is to keep that information all in one place, on paper, in a safe but accessible place. This way, you know if someone got to it because it will be missing! If you try the shotgun approach by guessing every password you have, you never know if they're being captured by wolves.

The corrective action to take when implementing some new process is to either take a passive approach and measure popularity, or actively solicit feedback. Many times users will be unwilling to give feedback (unless it's negative), so the passive option may be preferable, since solicitation would give you skewed data (unless it's very active).

But Phil, you say, how do we know if we should implement xyz until we know if the users want it? And if we can't solicit for xyz because the data will be skewed, how do we know? Perhaps there is a way to offer an incentive for giving feedback. Bringing users into the fold regularly is key to delivering successful solutions. Is it more costly to put something out there that never gets used, or to pay for active participation during the decision-making process? Perhaps monetary incentives are not necessary; perhaps users will participate because they want to make their own lives better, see their ideas in action, or simply for the joy of contributing to a decision-making process.

Here's my take on some of the approaches I've seen for capturing user voice:

The online user voice boards where users add and vote for features are OK, but I think most of those get lost in the shuffle for larger apps, and only relatively active users participate. The ones with the most votes get bumped to the top, and many duplicates are created and then ignored.

Soliciting users via popups in the middle of their work, or when they are opening some page or application, is intrusive and annoying. Often users are in the middle of a train of thought (on a mission, so to speak). DON'T INTERRUPT YOUR USERS!

A panel of beta users might be a very rational answer to soliciting feedback, but that's very exclusive and misses many users, if not the most common users.

A suggestion I might make is to allow users to submit suggestions for review (categorized). Once those suggestions are in and aggregated into coherent features, have a suggestion panel (like an ad rotator) in the application, in the active information section that should exist in your app anyway (and if not, add it). Make it simple: how much value do you place on xyz? 1-5 or skip. If you have a logged-in user, this can be collected per user and tracked along with usage data. In this way you can gather useful metrics on the most valuable features for the most active users.

For a portable device or small screen with local apps, each app should have its own feedback link that's easily accessible and displays a few suggestions for voting. The Wii had an awesome app called Everybody Votes, and that was fun! Users got to see where they stood against the trend line nationally and internationally. If they were in the highest group they got props - rewards!
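
Here's a minimal sketch of what that in-app rating capture could look like, written in TypeScript; the names (FeatureVote, recordVote) and the in-memory store are hypothetical stand-ins for whatever the app already uses:

// Capture a per-user 1-5 (or skip) rating for a suggested feature.
interface FeatureVote {
  userId: string;
  featureId: string;
  value: number | "skip";
  votedAt: Date;
}

// In-memory store stands in for whatever persistence the app already has.
const votes: FeatureVote[] = [];

function recordVote(userId: string, featureId: string, value: number | "skip"): void {
  if (value !== "skip" && (value < 1 || value > 5)) {
    throw new Error("Rating must be 1-5 or skip");
  }
  votes.push({ userId, featureId, value, votedAt: new Date() });
}

// Aggregate the average value per feature, ignoring skips.
function averageValue(featureId: string): number | undefined {
  const rated = votes.filter(v => v.featureId === featureId && v.value !== "skip");
  if (rated.length === 0) return undefined;
  return rated.reduce((sum, v) => sum + (v.value as number), 0) / rated.length;
}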

The point here is to include users in the conversation when making decisions that affect them. Get the largest data sample possible and give good exposure to anything worth doing, so you can gauge its value (i.e., whether it's really worth doing).

Measure what can be taken away as well! How much cruft is out there clogging up the implementation when it's not even used? Lose it!

Thursday, July 17, 2014

My research into finding a better way to estimate led down many paths. Many had no real, concrete, empirical way to go about estimating. Others cautioned away from it completely. While I'm in no seat at this point to impart supreme wisdom, I can share two things I did find insightful:

COCOMO II

function point analysis

With COCOMO II I learned much about the different factors that can impact delivery. The model attempts to include those, combined with the size of the software, in order to plot effort distribution. The reason I found this insightful is that it calls out effort that I and others never really seem to recognize.

As an example, let's try this exercise with three fields and a button: save the fields, send an email, and display some data to a user.

With function point analysis we can quantify the requirements as function points. Let's use this premade spreadsheet to make it easier to do our experiment.

http://www.cs.bsu.edu/homepages/wmz/funcpt.xls

I used 4 simple inputs, 2 user outputs (the email and the display to the user), 1 inquiry (the returned data), and 3 files (the data tables, the email builder, the response page). I tried to be conservative with the Fi ratings.
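
For anyone without the spreadsheet handy, here is a rough TypeScript sketch of the arithmetic behind an unadjusted function point count, using average IFPUG-style weights. The weights and Fi ratings below are my assumptions, so the totals won't match the spreadsheet's 136 exactly:

// Rough sketch of unadjusted function point (UFP) arithmetic with average IFPUG weights.
// Counts mirror the experiment: 4 inputs, 2 outputs, 1 inquiry, 3 files (treated here as ILFs).
const counts = { inputs: 4, outputs: 2, inquiries: 1, internalFiles: 3, interfaceFiles: 0 };
const averageWeights = { inputs: 4, outputs: 5, inquiries: 4, internalFiles: 10, interfaceFiles: 7 };

const ufp =
  counts.inputs * averageWeights.inputs +
  counts.outputs * averageWeights.outputs +
  counts.inquiries * averageWeights.inquiries +
  counts.internalFiles * averageWeights.internalFiles +
  counts.interfaceFiles * averageWeights.interfaceFiles;

// The 14 general system characteristics (Fi), each rated 0-5, adjust the count by +/- 35%.
const fiRatings = [3, 3, 2, 4, 3, 2, 3, 3, 2, 2, 3, 2, 3, 3]; // assumed, fairly conservative ratings
const vaf = 0.65 + 0.01 * fiRatings.reduce((a, b) => a + b, 0);

const adjustedFp = ufp * vaf;
console.log({ ufp, vaf, adjustedFp });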


Then we plug our number into the calculator (in this case 136). Or rather, let's reduce that by the ILFs and take 136 - 90 = 46 to be conservative. I'm slightly unsure about the ILFs at this time, so I'll write them out of the equation completely; after all, we're experimenting.


Make our scale and driver adjustments as necessary (let's keep these at the defaults for now).

Then the tool runs the data through the model and shows us the light.
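
For the curious, a model like COCOMO II boils down to an effort equation roughly like the sketch below; this is not the tool's exact implementation. The A and B constants are the published COCOMO II.2000 calibration values, while the scale factor sum, the nominal effort multipliers, and the SLOC-per-function-point ratio are assumptions for illustration:

// Sketch of the nominal COCOMO II post-architecture effort equation (COCOMO II.2000 constants).
const A = 2.94;                 // multiplicative calibration constant
const B = 0.91;                 // exponential calibration constant

// Nominal ratings for the five scale factors (PREC, FLEX, RESL, TEAM, PMAT) sum to roughly 18.97.
const scaleFactorSum = 18.97;
const E = B + 0.01 * scaleFactorSum;

// All effort multipliers left at nominal (1.0) to mirror "keep these at the defaults".
const effortMultiplierProduct = 1.0;

// Size in KSLOC, backfired from function points; the SLOC-per-FP ratio is an assumption and varies by language.
const functionPoints = 46;
const slocPerFunctionPoint = 53;
const ksloc = (functionPoints * slocPerFunctionPoint) / 1000;

const personMonths = A * Math.pow(ksloc, E) * effortMultiplierProduct;
console.log(`Estimated effort: ${personMonths.toFixed(1)} person-months`);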


Who would have thought that three fields and a button could take so much effort? Sure, we can take shortcuts and get it done in a day, but that comes with a lot of future risk because of the likelihood of creating mounds of technical debt, especially over time when this sort of wild-west thinking becomes the norm.

There are several sources out there with full descriptions of how the model works. It's all very scholarly and technical. Why does this matter? Because it's something you can show managers that will give them the same insights you gained. Arm them with information about technical debt management and you are on your way to a more realistic reality! Unless they are unwilling to learn, grow, change, or listen to scholars or naval research. In that case, start shopping for new opportunities.

Tuesday, July 15, 2014

Planning for change

Today I had to implement a change across many applications that were all in different states of organization. I saw several different coding and architectural styles. Some applications required minimal changes; others required some core changes that were deferred due to the timing of other projects. The main issue with the ones that required more work was that they were written under the assumption that the way it was is the way it will always be. What? Let's parse that out.

It was a certain way when the application was built.

The developers assumed at the time that this is the way it will always be.

This could not have been solely the developers' doing, though. Often the business will make the assumptions for the developers and not even raise the possibility that things will change in the future. But there may be some hope for developers in determining when something can change, right before our very eyes!

If it's possible to change, then assume that it will. One thing that can and does change all the time is the name of things. If something can be referred to by an ID or a name, it's likely that the name will change. This is one big smell we can be aware of that should call to mind "can this change?".

Once we have identified that it can change, the next step is to analyze the impact of the change when weighed against the development time involved in making our code handle this possibility up front.

Here is one very common scenario, and an uncommon but potentially problematic one, where changes could have an impact:

Person
{
  InternalID,
  SocialSecurityNumber,
  Name,
  Gender
}

Names change all the time. Gender, not so often, but it is possible. How often do we plan for a gender-change edge case? It is often assumed that it will always remain the same; just recently I was involved in an application where the planners assumed it would never change. There will certainly be issues if it does.

I would also assume that a social security number can change; never use one as an internal ID unless you want to end up allocating things to the wrong person someday.
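
As a sketch of that principle, here is how I might shape the type so that only the surrogate key is treated as immutable; the TypeScript names here are just for illustration:

// Only the internal surrogate ID is immutable; every business-facing attribute can change.
interface Person {
  readonly internalId: number;   // surrogate key, never used as a business fact
  socialSecurityNumber: string;  // can be corrected or reissued, so never use it as the key
  name: string;                  // changes all the time
  gender: string;                // rare, but it does change; don't hard-code the assumption
}

function correctSsn(person: Person, newSsn: string): Person {
  // Allocations keep pointing at internalId, so a corrected SSN moves nothing to the wrong person.
  return { ...person, socialSecurityNumber: newSsn };
}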

How about some type names?

ThingTypes
{
  ThingTypeID,
  ThingTypeName
}

Then these are used in the application like so:

Big Thing is allocated to: John Smith
Weird Thing is allocated to: Jennifer Blah
Foo Thing is allocated to: Phil Vuollet

where someone got it into their head that they wanted to display these distinct types of things on the page, and someone else ran with that and went ahead and wrote some code that did just that:

routine:
 add "Big Thing is allocated to: {person}" to page where person := some database function to look up the person with id = 1
 add "Wierd Thing is allocated to: {person}" to page where person := some database function to look up the person with id = 2
 add "Foo Thing is allocated to: {person}" to page where person := some database function to look up the person with id = 3

So now we have some hard-coded stuff to deal with here. I got into the habit of creating data that represents this type of thing with a property called DisplayText. This makes it evident that the value is meant to be displayed and that is its only purpose. It's also a safe bet to put in a property called LookupCode, ShortCode, or something similar; usually the business will have some acronym that they will want to use in places and will refer to the entity by it in most cases. Whatever you do, don't put in something vague like Name or Description (that one especially is a big offender). DescriptiveText might be better. If the text is meant to be displayed, then use DescriptiveHelpText or DisplayDescription, or some other name that conveys meaning.

Even better, use the same naming convention throughout your application! If you do, then you can create a reusable bit of code that can do the displaying for you!

GetListValues(entityName, filter)

then you can map the properties to a generic view model that can be used to hook into a view on the page.

ListItemViewModel
{
  ID,
  DisplayText,
  HelpText
}

This can be extended if necessary through inheritance:

SensitiveListItemViewModel : ListItemViewModel
{
  SensitivityLevel
}

Notice that SensitivityLevel is a type in and of itself, since it can change and often will. It should be maintained in the same fashion, in terms of data structure and of coding dynamically with change in mind.
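
Here is a rough TypeScript sketch of how that reusable bit might come together if every lookup entity follows the same ID / DisplayText / HelpText convention; the in-memory tables and function names below are placeholders for whatever data access the application actually has:

interface ListItemViewModel {
  id: number;
  displayText: string;
  helpText?: string;
}

interface SensitiveListItemViewModel extends ListItemViewModel {
  sensitivityLevel: string;      // its own lookup type, since it can and will change
}

// Stand-in for the real data source: every lookup table shares the same column convention.
const lookupTables: Record<string, ListItemViewModel[]> = {
  ThingTypes: [
    { id: 1, displayText: "Big Thing", helpText: "The large one" },
    { id: 2, displayText: "Weird Thing" },
    { id: 3, displayText: "Foo Thing" },
  ],
};

// One reusable routine instead of hard-coded display strings scattered through the pages.
function getListValues(
  entityName: string,
  filter?: (item: ListItemViewModel) => boolean
): ListItemViewModel[] {
  const rows = lookupTables[entityName] ?? [];
  return filter ? rows.filter(filter) : rows;
}

// Usage: bind the result to the view instead of hard-coding "Big Thing is allocated to: ...".
const thingTypes = getListValues("ThingTypes", t => t.id !== 3);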

We have seen that by keeping change in mind, and by weighing the initial cost against long-term update costs (sweeping changes, testing effort, and deployment effort included), we can recognize and plan for the future changes that will inevitably come.

Thursday, July 10, 2014

You will never get good requirements until you ask

It never fails...ever.

Screenshots with no URLs, incomplete if not confusing information, unclear images, lack of details...

It can be frustrating at times, but the important thing to remember is that your job is better than being homeless. That is probably the hardest job of all.

Every work day on my walk to and from the office I pass the same half-dozen homeless people - both ways. They are in the same spot, every day, rain or shine, cold or hot, rattling off the same line as hundreds of people ignore their pitch. Much like entrepreneurs, if they don't make a sale they starve. Are you going to starve if you don't get complete requirements?

What can you do? Have a conversation with the app owner, others who may know about the application (of course that can be a wild goose chase at times), or perhaps the customer. It drives me crazy when I get incomplete details and the first thing I'm asked is "how long will this take". I typically reply with something like "I'll have to look into it and get back to you". But here's the real hitch - how do you know how long it will take until you do look into the details?

Here's a scenario, probably all too common: a developer (let's call him Adam) gets some hot change that cannonballs in out of left field. Adam is asked how long the change will take (especially because the managers want to evaluate whether or not the change is worthwhile). However, Adam is not very familiar with the legacy ball-of-wax application that he inherited from 50 other developers who have left their junk in there, with a whole bunch of cruft piled on top of that.

From the requestor's POV it's just a text change to a field, or adding a checkbox; from Adam's POV it's Pandora's box, or at least it should be. However, we frequently forget about the many hidden inputs to the time-to-live equation. These are often swept under the rug in the developer's mind, and those on the other side of the code have no idea unless you bring them into your world. If the requestor is on your team, you can do this effectively by outlining each hidden input in your world and tying times to each of them. Here is a start to the hidden-input equation:

1. Familiarity with the code - this is a biggie, especially if you have a ball of wax or some other poorly organized, debt-riddled solution.

2. Familiarity with the language/technology - if you don't know how to write jQuery that well, you will spend more time looking up functions and special selectors. This can add significant time. Also, there are environmental technologies and config/set-up concerns that might add significant time as well.

3. Deployment - is this automated? Will there be some special considerations? Do you know how to do this, or will you need to learn?

4. The existing state of the solution - does it build? will there be tons of dependencies to upgrade? Can you navigate it? Do you have access to retrieve it?

5. Communications and Planning - will there be 7 hours of meetings per week related to this project? How much planning and documentation is involved?

6. Interference - How much time do you spend supporting other applications? How about other people? How about random administration tasks and duties?

7. Requirements - How solid are the requirements? Do they consider what the user doesn't want? Any degenerative cases? Edge cases?

8. Your technical environment - Is your machine a dog? Or is it relatively quick and efficient? Are your servers dogs?

9. Your health - Are your fingers broken? Are you on the fringe of a meltdown? Are you going through some stuff at home? Guess what? Those will affect your productivity! Consider it.

10. Testing - Who's on the hook for testing? Are there automated tests? Unit Tests? Web Tests? Do you need to develop them to support the change?

That's a good start. I like to make a list or put in tasks for each action item and adjust the times for each according to the other items that have an effect (see the sketch after the task list below), or if you have a project management system, adjust resource allocation percentages.

One task should always be - Research Existing Code.
Another would be - Update Documentation.
And another for each environment - Prepare for Deployment and Deploy to {env}
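
Here's a made-up sketch of what rolling those tasks and hidden-input adjustments into a single estimate might look like; the task hours and percentages are purely illustrative:

// Hypothetical roll-up: base task hours inflated by the "hidden input" factors above.
interface Task { name: string; baseHours: number; }

const tasks: Task[] = [
  { name: "Research Existing Code", baseHours: 8 },
  { name: "Implement Change", baseHours: 6 },
  { name: "Update Documentation", baseHours: 2 },
  { name: "Prepare for Deployment and Deploy", baseHours: 4 },
];

// Each factor is a percentage overhead; e.g. 0.25 means "add 25% for unfamiliar code".
const hiddenInputFactors = {
  codeFamiliarity: 0.25,
  technologyFamiliarity: 0.15,
  interference: 0.20,
  testing: 0.30,
};

const overhead = Object.values(hiddenInputFactors).reduce((a, b) => a + b, 0);
const baseTotal = tasks.reduce((sum, t) => sum + t.baseHours, 0);
const adjustedTotal = baseTotal * (1 + overhead);

console.log(`Base: ${baseTotal}h, adjusted: ${adjustedTotal.toFixed(1)}h`);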

I like this approach because it gives meaning to the estimate and transparency to the reality of what you need to do to make the request happen. Otherwise, when you say "this will take 48 hours of work and two weeks to complete" you'll get a response like "WHAT!" and then a look like you are worthless and incompetent. If you break it down, the requestor will be able to see how much work is involved. Also, put in a disclaimer of sorts highlighting the risks (interference, "my mother is in poor health so I may be taking a few days off", "I can't type so fast with no fingers"). Look to the SEC filings of public companies and see how they disclose risk. (RNDY) You probably don't have to go that far, but it's a good model.

Keep in mind that the requestor is probably super busy and under pressure; they probably don't have time to be pedantic about creating requirements, so you have to dig for clarity sometimes. They will also want things done ASAP, but keep them informed!

Thursday, July 3, 2014

Time is Fleeting

When you think something will take only two hours and didn't really look at what needs to be done, you may end up spinning your wheels and burning up a whole day before you know it.

I learned this lesson once again today. On reflection, I should have put a bit more thought into the whole amount of work that needed to be done rather than just the development task in a bubble.

One thing I definitely should have done is consider the technology at hand and how foolish it is to expect it to behave as I would have wanted it to. K2 SmartForms (WebForms via a custom designer interface) doesn't always play nice with customizations. That's one input to the time equation.

The next is the custom control that plugs in an autocomplete. I didn't know its limitations either, until now.

Then there is deployment, set-up time, configuration, troubleshooting, testing, etc.

Don't forget to think of all the work and any technical debt or knowledge debt involved when giving an estimate. Put it on a list, or in your work tracking system along with the coding tasks so you and your managers can plan accordingly.