Friday, July 25, 2014

SQL Server single row table

In SQL Server 2008 R2 I made a single row table that had Application Settings.

In order to enforce a table constraint of one row, create the table with a PK constraint on an id column and a CHECK constraint of idCol = 1. This does two things: the PK enforces that every row is unique and not null, and the CHECK ensures that the only possible value is 1.

Then, all you have to do is insert a row with id 1; all the other columns hold the settings values (global settings for the app). If you need multiple settings, join tables to this one.
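A minimal sketch of that table (the table name and settings columns here are just illustrative):

```sql
CREATE TABLE AppSettings
(
    AppSettingsID int NOT NULL,
    SupportEmail nvarchar(256) NOT NULL,
    MaxUploadSizeKB int NOT NULL,
    CONSTRAINT PK_AppSettings PRIMARY KEY (AppSettingsID),
    -- the PK makes the id unique and not null; the CHECK makes 1 the only
    -- legal value, so the table can never hold more than one row
    CONSTRAINT CK_AppSettings_OneRow CHECK (AppSettingsID = 1)
);

INSERT INTO AppSettings (AppSettingsID, SupportEmail, MaxUploadSizeKB)
VALUES (1, 'support@example.com', 2048);
```

A second insert fails either the CHECK or the PK, so the single-row rule is enforced by the engine rather than by convention.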

For multi-tenant applications, consider adding a separate table for tenant-specific settings. This table is for global settings only.

Thursday, July 24, 2014

Code Reuse and...EmberJS components

This is a combo post, a one-two punch. There are so many compelling reasons for writing reusable code - and some reasons for not. Then there is something I learned about reusable templates in EmberJS.
Starting with reasons to NOT write reusable code.
  1. lack of awareness
  2. doing a POC (Proof Of Concept) and will NEVER release the POC version
  3. something is on fire and you need to put it out NOW!
  4. someone is a doody-head and is making you say uncle
  5. you are establishing a practice of creating more work for later so that you remain gainfully employed

Here are reasons to do it.
  1. because you can reuse it later and don't have to build the foundation every time you build something
  2. it's more challenging, thus more compelling and you thrive on challenges because it makes life fun and interesting
  3. it can speed up development significantly
  4. it can be done in a way that protects against changing technologies
  5. save the best for last...when the business rules change (because business needs change) it's less risky, easier and more compelling to change the code to meet the needs

Those are just a few things off the top of my head, to get things rolling. Can you think of more?
The EmberJS Part
EmberJS is an open source client-side MVx framework that uses templates to mark up layout for the web. The templates it uses are Handlebars templates, which come from another open source project.
Here is an example template for EmberJS:

<script type="text/x-handlebars" id="somePig">
    {{#each piggy in pigs}}
        This little piggy went to {{piggy.placeName}}
        <br />
    {{/each}}
</script>

pigs is passed to the template by EmberJS via a controller; it's a JSON object.
And here is a component, which is a reusable template that takes in values that can be set when you use it:

<script type="text/x-handlebars" id="components/list-items">
    {{#each item in items}}
        {{item}}
        <br />
    {{/each}}
</script>

<script type="text/x-handlebars" id="pigs">
    {{list-items items=pigs}}
</script>

working sample

Stop and smell the roses, or at least slow down for it.

I took this literally today. Thank you Boeing for having the best flowers on my walk! This changed my trajectory today.

Tuesday, July 22, 2014

a positive approach to choices, risks and rewards

Some advice that I cobbled together from various readings, conversations, videos, podcasts, and whatever else regarding personal choices: "it depends..." What does it depend on for you? Each person will likely have different answers that may limit their choices. One thing it may depend on is whether this will lead you to a long-term goal or whether it's a sidetrack. Ultimately, only you can answer that question.

One thing is for sure: fear does not count. It's not part of the equation - no rewards come without risk. Which leads to another dependency - what's your level of comfort with risk? Is this just outside of your comfort zone? Or way outside of it? If it's just outside, or a bit more, take that extra risk. If it's way outside, work up to it - find a way to take steps that way without getting too far ahead of yourself. Sometimes we have to completely ignore fear for a bit so we can get to the real answers. This doesn't mean ignoring warning signs or anything like that; I'm referring to irrational fears.

As a testament to my own practice of moving past fears, I recently leapt at the opportunity to meet with the CIO at work once a month. I'm not in a management position or anything like that, and I have been rather fearful of authority in the past. I just took a chance and set up a meeting. I was nervous for that first hour-long meeting, but apparently I made a heck of an impression. We meet once a month and talk in the hallways now. I can present ideas and sometimes elicit positive change. This is a good culture from the top down, granted, but I think from now on I'll open more doors of communication - it seems to work well.

No matter your decision, always remember that life is as fleeting as time and that mistakes happen; times can be tough too. Learn from them, move on, build on the experience. Good things come too, and when they do, they far outweigh the negative impact of most mistakes in the long run.
That being said, there are risks and there are deaf, blind and dumb. Be informed and aware of the risks so that you can make informed decisions.

Monday, July 21, 2014

EF Load vs ToArray

I did a performance comparison of query.Load() and query.ToArray() in EF 6 and found that the latter performs faster, because .Load() calls ToList() to load the objects and then disposes of the list. That takes more time and resources to execute, so why not call query.ToArray() and just let that load the context?

Friday, July 18, 2014

WCF Command Interface

I just saw a clip about using a Command Interface pattern in WCF to alter behavior.

In order to demo this concept, the presenter built a Windows Forms app that had a method to set the color of a panel. It implemented a service operation contract, so it was available via an endpoint that exposed the method as a service. There was another Windows app with a list of colors the user could select from, which in turn would call the service endpoint and send a message containing the selected color data. Lo and behold! Once the color was selected in one app, the color would change in the other!

While probably not the most practical solution to this problem (sockets would be faster), it did demonstrate an interesting pattern - how to change the STATE of a running process with WCF. This is no different than using a service for updating a database, however, and did not deliver what I thought was promised. It did not change the BEHAVIOR of the running application. I expected it to implement some sort of command pattern, but it did not. The behavior was that when the color was sent to the application via the method, the background color of the panel would change to that color.

This got me thinking about how to actually change the behavior of a running process using a service to send in the command. Here's what might work - WARNING: do not do this, because it would be dangerous and stupid! Implement a service that takes in the definition of a method that implements a known interface. Use some sort of injection framework to compile and load that method, then call the method. As an alternative to sending in the method - send in the DLL, save it to a temp folder, use IoC/DI frameworks to grab the object that implements the method, spin up another process, and inject that new object. Sounds dangerous and foolish, but it just may work.

Herein lies the danger (if it isn't obvious)! Anyone who gains access to the endpoint could implement whatever they want via this command injection, including stealing data, crashing the system, or whatever.
I haven't POC'd this or anything; I'm just thinking out loud. I'm thinking it may work, though, as long as you can load the DLL this way and WCF doesn't trip on the file transfer (probably not the least of the concerns there).

Thursday, July 17, 2014

My research into finding a better way to estimate led down many paths. Many had no real, concrete, empirical way to go about estimating. Others cautioned away from it completely. While I'm in no seat at this point to impart supreme wisdom, I can share something I did find insightful.


function point analysis

With COCOMO II I learned much about the different factors that can impact delivery. The model attempts to include those, combined with the size of the software, in order to plot effort distribution. The reason I found this insightful is that it calls out effort that I and others never really seem to recognize.

For an example, let's try this exercise with three fields and a button: save the fields, send an email, and display some data to a user.

With function point analysis we can quantify the requirements as function points. Let's use this premade spreadsheet to make it easier to do our experiment.

I used 4 simple inputs, 2 user outputs (email and display to user), 1 inquiry (the return data), and 3 files (the data tables, the email builder, the response page). I tried to be conservative with Fi.

Then we plug our number into the calculator (in this case 136). Or rather, let's reduce that by the ILFs and take 136 - 90 = 46 to be conservative, since I'm slightly unsure about the ILFs at this time, so I'll write them out of the equation completely - after all, we're experimenting.

Make our scale and driver adjustments as necessary (let's keep these at the defaults for now).

Then the tool runs the data through the model and shows us the light.

Who would have thought that three fields and a button could take so much effort? Sure, we can take shortcuts and get it done in a day, but that comes with much future risk because of the likelihood of creating mounds of technical debt - especially over time, when this sort of wild-west thinking becomes the norm.

There are several sources out there with full descriptions of how the model works. It's all very scholarly and technical. Why does this matter? It's something you can show managers that will give them the same insights you gained. Arm them with information about technical debt management and you are on your way to a more realistic reality! Unless they are unwilling to learn, grow, change, or listen to scholars or naval research - in that case, start shopping for new opportunities.

Tuesday, July 15, 2014

Planning for change

Today I had to implement a change across many applications that were all in different states of organization. I saw several different coding and architectural styles. Some applications required minimal changes; others required core changes that were deferred due to the timing of other projects. The main issue with the ones that required more work was that they were written under the assumption that the way it was is the way it will always be. What? Let's parse that out.

It was a certain way when the application was built.

The developers assumed at the time that this is the way it will always be.

This could not have been solely the developers' doing though. Often the business will make the assumptions for the developers and not even deliver a possibility that they will change in the future. But there may be some hope for developers in determining when something can change right before our very eyes!

If it's possible to change, then assume that it will. One thing that can and does change all the time is the name of things. If something can be called by an id or a name, it's likely that the name will change. This is one big smell we can be aware of that should call to mind "can this change?".

Once we have identified that it can change, the next step is to analyze the impact of the change when weighed against the development time involved in making our code handle this possibility up front.

Here is one very common scenario and an uncommon but potentially problematic possibility where changes could have an impact:


Names change all the time. Gender, not so often - but it is possible. How often do we plan for a gender change edge case? Often it is assumed that it will always remain the same, but just recently I was involved in an application where the planners assumed that it would never change. There will certainly be issues if it does.

I would also assume that a social security number can change, and I would never use one as an internal ID unless you want to end up allocating things to the wrong person someday.

How about some type names?


then these are used in the application like so:

Big Thing is allocated to: John Smith
Weird Thing is allocated to: Jennifer Blah
Foo Thing is allocated to: Phil Vuollet

Someone got it in their head that they want to display these distinct types of things on the page, and someone else ran with that and went ahead and wrote some code that did just that:

 add "Big Thing is allocated to: {person}" to page where person := some database function to look up the person with id = 1
 add "Weird Thing is allocated to: {person}" to page where person := some database function to look up the person with id = 2
 add "Foo Thing is allocated to: {person}" to page where person := some database function to look up the person with id = 3

So now we have some hard-coded stuff to deal with here. I got into the habit of creating data that represents this type of thing with a property called DisplayText. This makes it evident that the value is meant to be displayed and that this is its only purpose. It's also a safe bet to put in a property called LookupCode, ShortCode, or something similar. Usually the business will have some acronym they want to use in places and will refer to the entity that way in most cases. Whatever you do, don't put in something vague like Name or Description (that one especially is a big offender). DescriptiveText might be better. If the text is meant to be displayed, then use DescriptiveHelpText or DisplayDescription, or some other name that conveys meaning.
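As a sketch, those hard-coded thing types could live in a type table with exactly these kinds of properties (the table name and values here are hypothetical):

```sql
CREATE TABLE ThingType
(
    ThingTypeID int NOT NULL PRIMARY KEY,
    LookupCode varchar(20) NOT NULL UNIQUE, -- the short code/acronym the business uses
    DisplayText nvarchar(100) NOT NULL      -- meant for display, and only for display
);

INSERT INTO ThingType (ThingTypeID, LookupCode, DisplayText)
VALUES (1, 'BIG',   'Big Thing'),
       (2, 'WEIRD', 'Weird Thing'),
       (3, 'FOO',   'Foo Thing');
```

Now when a name changes, it's a one-row UPDATE to DisplayText instead of a code change.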

Even better, use the same naming convention throughout your application! If you do, then you can create a reusable bit of code that does the displaying for you:

GetListValues(entityName, filter)

Then you can map the properties to a generic view model that can be used to hook into a view on the page.


This can be extended if necessary through inheritance:

SensitiveListItemViewModel : ListItemViewModel

Notice that the SensitivityLevel is a type in and of itself, since this can change and often will. It should be maintained in the same fashion, in terms of data structure, and coded dynamically with change in mind.

We have seen that by keeping change in mind, and by weighing the initial cost against long-term update costs (sweeping changes, testing effort, and deployment effort included), we can recognize and plan for the future changes that will inevitably come.

Thursday, July 10, 2014

You will never get good requirements until you ask

It never fails...ever.

Screenshots with no urls, incomplete if not confusing information, unclear images, lack of details...

It can be frustrating at times, but the important thing to remember is that your job is better than being homeless. That is probably the hardest job of all.

Every work day on my walk to and from the office I pass the same half-dozen homeless people - both ways. They are in the same spot, every day, rain or shine, cold or hot rattling off the same line as hundreds of people ignore their pitch. Much like entrepreneurs, if they don't make a sale they starve. Are you going to starve if you don't get complete requirements?

What can you do? Have a conversation with the app owner, with others who may know about the application (of course that can be a wild goose chase at times), or perhaps with the customer. It drives me crazy when I get incomplete details and the first thing I'm asked is "how long will this take?" I typically reply with something like "I'll have to look into it and get back to you". But here's the real hitch - how do you know how long it will take until you do look into the details?

Here's a scenario, probably all too common: a developer (let's call him Adam) gets some hot change that cannonballs in out of left field. Adam is asked how long the change will take (especially because the managers want to evaluate whether the change is worthwhile). However, Adam is not very familiar with the legacy ball-of-wax application that he inherited from 50 other developers who left their junk in there, plus a whole bunch of cruft piled on top of that.

From the requestor's POV it's just a text change to a field, or adding a checkbox; from Adam's POV it's Pandora's box - or at least it should be. However, we frequently forget about the many hidden inputs to the time-to-live equation. These are often swept under the rug in the developer's mind, and those on the other side of the code have no idea unless you bring them into your world. If the requestor is on your team, you can do this effectively by outlining each hidden input in your world and tying times to each of them. Here is a start to the hidden-input equation:

1. Familiarity with the code - this is a biggie, especially if you have a ball of wax or some other poorly organized, debt-riddled solution.

2. Familiarity with the language/technology - if you don't know how to write jQuery that well, you will spend more time looking up functions and special selectors. This can add significant time. Also, there are environmental technologies and config/set-up concerns that might add significant time as well.

3. Deployment - is this automated? Will there be some special considerations? Do you know how to do this, or will you need to learn?

4. The existing state of the solution - does it build? will there be tons of dependencies to upgrade? Can you navigate it? Do you have access to retrieve it?

5. Communications and Planning - will there be 7 hours of meetings per week related to this project? How much planning and documentation is involved?

6. Interference - How much time do you spend supporting other applications? How about other people? How about random administration tasks and duties?

7. Requirements - How solid are the requirements? Do they consider what the user doesn't want? Any degenerative cases? Edge cases?

8. Your technical environment - Is your machine a dog? Or is it relatively quick and efficient? Are your servers dogs?

9. Your health - Are your fingers broken? Are you on the fringe of a meltdown? Are you going through some stuff at home? Guess what? Those will affect your productivity! Consider it.

10. Testing - Who's on the hook for testing? Are there automated tests? Unit Tests? Web Tests? Do you need to develop them to support the change?

That's a good start. I like to make a list or put in tasks for each action item and adjust the times for each according to other items that have an effect - or, if you have a project management system, adjust resource allocation percentages.

One task should always be - Research Existing Code.
Another would be - Update Documentation.
And another for each environment - Prepare for Deployment and Deploy to {env}

I like this approach because it gives meaning to the estimate and transparency to the reality of what you need to do to make the request happen. Otherwise, when you say "this will take 48 hours of work and two weeks to complete" you'll get a response like "WHAT!" and then a look like you are worthless and incompetent. If you break it down, the requestor will be able to see how much work is involved. Also, put in a disclaimer of sorts highlighting the risks (interference, "my mother is in poor health so I may be taking a few days off", "I can't type so fast with no fingers"). Look to the SEC filings of public companies and see how they disclose risk. (RNDY) You probably don't have to go that far, but it's a good model.

Keep in mind that the requestor is probably super busy and under pressure; they probably don't have time to be pedantic about creating requirements, so you have to dig for clarity sometimes. They will also want things done ASAP, but keep them informed!

Wednesday, July 9, 2014

SOLID DB principles

In my last post, I rattled off a little bit about creating a db model that conforms to SOLID development principles. I presented the idea to some colleagues and it seemed to go over well. Today I will attempt to expound upon the principle and how a good database design can conform to each piece of the SOLID acronym.

S - single responsibility - each table should have a single responsibility. This means no god tables with nearly infinite nullable columns that can capture more than one type (sub-type actually) of thing.

O - open-closed - open to extension, closed to modification. Create linking tables or type tables to extend a base table; don't change the base table. Add views to create reusable components. Always create list/type tables so that new types of data can be linked to metadata. Reuse data when possible (no is no). If you really, really need to change the attributes frequently, consider EAV - but with caution.

L - Liskov substitution - Comment is a base table, PostComment is a sub-table. They map to types in object-land. Learn TPT, TPH, etc.

I - interface segregation - there aren't really interfaces in the database. Just make sure, again, that your tables - and perhaps views and other objects such as stored procedures/functions (especially these) - aren't doing more than one thing.

D - dependency inversion (DI) - this applies more to the DAL than anything. Using DI allows for switching the database technology later, or even spreading your data model across technologies. Either way, all other code/modules/layers should be agnostic of the data store - abstract that baby away! One could go so far as to write adapters that adapt the db access technology to a custom interface, should one want to switch from something like Entity Framework to NHibernate. EmberJS does something like that in the ember-data library: consumers make all calls through ember-data objects, but can map the calls to whatever their storage technology requires. I truly can't think of any way to apply DI in SQL itself; perhaps noSQL implementations will have varying abilities to do so with map/reduce functions. I would say that the best way to practice good DI with SQL is to avoid any sort of business logic in stored procedures. I have seen some pretty tricky DI being simulated in stored procedure chains, with EXEC calls on dynamic SQL and things like that. I avoid those sorts of things like the plague these days, unless it's SQL generated via Entity Framework.
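To sketch the open-closed point above in SQL (names here are hypothetical): when a new requirement shows up, extend the base table with new tables instead of adding nullable columns to it.

```sql
-- Base table stays closed to modification
CREATE TABLE Post
(
    PostID int NOT NULL PRIMARY KEY,
    Title nvarchar(200) NOT NULL
);

-- New requirement: tagging. Extend with a type table and a linking table
-- instead of ALTERing Post.
CREATE TABLE Tag
(
    TagID int NOT NULL PRIMARY KEY,
    DisplayText nvarchar(50) NOT NULL
);

CREATE TABLE PostTag
(
    PostID int NOT NULL REFERENCES Post (PostID),
    TagID int NOT NULL REFERENCES Tag (TagID),
    CONSTRAINT PK_PostTag PRIMARY KEY (PostID, TagID)
);
```

Existing queries against Post keep working untouched; the new behavior lives entirely in the new tables.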

Anyway, that's my attempt to SOLIDify database design. I feel kind of floppy about the whole DI rambling; if anyone has good ideas about that one, feel free to chip in!

Happy Coding!

Monday, July 7, 2014

SOLID data relationships

One approach I like to take when designing a database schema is to focus on the relationships before adding columns that are not involved in the relationships (in a relational database that is).

The structure and integrity are the most important components of a good database design. One approach is to model the data structures first. If you are working in an object-oriented language, you could build your objects first with minimal properties. By doing this and thinking about SOLID development principles, we can keep our database design in alignment with our code development practices.

In C#:

public class Person
{
    public int PersonID { get; set; }
    public virtual ICollection<Task> Tasks { get; set; }
}
Or even using "class/table diagrams":

Person
-PersonID int PK
-Tasks Task[]

Task
-TaskID int PK
-Assignees Person[]

In this model we can already see that a person can have many tasks and a task can be assigned to many people. We can also see that the task class will not map directly to relational tables. There is only one task, and this task can be assigned to multiple people. Therefore we will need a linking table to represent the many-to-many relationship between people and tasks. Likely we would want to collect more data about the task assignment, so the intermediary table should probably have its own model.

TaskAssignment
-TaskAssignmentID int PK
-TaskID int FK
-PersonID int FK

Person
-PersonID int PK
-TaskAssignments TaskAssignment[]

Task
-TaskID int PK
-TaskAssignments TaskAssignment[]

Now we can track each assignment, and later add props like CompletedDate, or hook up another linking table to comments so you can save multiple comments, with metadata related to each one.
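In SQL Server the model above might come out roughly like this (a sketch; the column types and the extra columns are assumptions):

```sql
CREATE TABLE Person (PersonID int IDENTITY PRIMARY KEY);
CREATE TABLE Task   (TaskID int IDENTITY PRIMARY KEY);

-- The linking table gets its own identity, since an assignment is its own entity
CREATE TABLE TaskAssignment
(
    TaskAssignmentID int IDENTITY PRIMARY KEY,
    TaskID int NOT NULL REFERENCES Task (TaskID),
    PersonID int NOT NULL REFERENCES Person (PersonID),
    CompletedDate datetime NULL -- assignment-level data lives here
);
```

Because TaskAssignmentID is its own key, the same person can be assigned to the same long-running task more than once.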

While many-to-many relationships with a single comments table would work, there is a slight issue with data integrity. A single comment should relate to a single TaskAssignment, and there should be zero-to-many Comments for a TaskAssignment - or for anything else we'd like to tie comments to. Note that we might be tempted to create a combo PK on TaskAssignments using TaskID and PersonID and do away with TaskAssignmentID, but what if it's a long-running task and a person can be assigned to it again?

One way is to add a comments table each time we need comments to relate to another entity. But our query, should we want to view all comments related to some specific root, would be really complicated. Another approach is to have a single Comments entity with a CommentType attribute; whenever we need to add comments to some other table, we can add an entity that relates a specific comment to the target table.

Comment
-CommentID int PK
-CommentType CommentType FK

PersonComment
-CommentID int PK, FK
-PersonID int FK

TaskComment
-CommentID int PK, FK
-TaskID int FK


Now we don't have to violate the open-closed principle to add comments, our comments can be typed in code using TPT inheritance (Comment is the abstract base), and we maintain data integrity.

The CommentType is there so that we can view the Comments table in a query and know what type of entity each comment relates to without joining all the ___Comments tables; without it, we would have to join them all to find out. In .NET, EntityFramework helps significantly with this.
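A sketch of those tables in SQL, assuming the Person and Task tables from earlier exist (names and column types here are illustrative):

```sql
CREATE TABLE CommentType
(
    CommentTypeID int NOT NULL PRIMARY KEY,
    DisplayText nvarchar(50) NOT NULL
);

CREATE TABLE Comment
(
    CommentID int IDENTITY PRIMARY KEY,
    CommentTypeID int NOT NULL REFERENCES CommentType (CommentTypeID),
    CommentText nvarchar(max) NOT NULL
);

-- Sub-tables share the base PK, so each comment maps to exactly one target row
CREATE TABLE PersonComment
(
    CommentID int NOT NULL PRIMARY KEY REFERENCES Comment (CommentID),
    PersonID int NOT NULL REFERENCES Person (PersonID)
);

CREATE TABLE TaskComment
(
    CommentID int NOT NULL PRIMARY KEY REFERENCES Comment (CommentID),
    TaskID int NOT NULL REFERENCES Task (TaskID)
);
```

Adding comments to a new entity means adding one new sub-table; the base Comment table never changes.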

Hope anyone who gets this far will have some neurons firing with regards to structuring a database to support SOLID development practices.

Saturday, July 5, 2014

Primary Key: ID or Combo Key?

I was debating a coworker about a table schema in SQL Server. The debate is about a parent-child relationship: should the child have its own identity?

Here is the situation (in a generic way).

PeopleTable (a proxy of a table in another db)
-PersonID PK

PlanYearsTable
-PlanYear PK

PersonPlansTable
-PersonID FK PK
-PlanYear FK PK
-other details columns

I am of the mind that the third table should have its own ID, rather than the combination PK that is also an FK from two other tables - even though there will only be one instance per person, per year.

Here's my rationale: the specific plan is its own entity, even if it can only relate to the specific person and year. SQL Server has to work harder to order by two columns, and also to join on two columns (as do users writing queries). And if you ever want to create a REST API, what does the URL look like?

The other side of the argument is rooted in difficulties encountered with a database that created its own local ID for referencing entities from another db.

DB Main

Buckets
-bucketId PK

Droplets
-dropletId PK
-bucketId FK PK

DB Two

Buckets
-bucketId PK

Droplets
-LocalDropletId PK
-bucketId FK

This was done because the droplets in Main are partitioned per bucket:
Bucket 1 - Droplets 1, 2, 3.
Bucket 2 - Droplets 1, 2, 3.
So there is no way to refer to a droplet without referring to the bucket as well. This schema was born of another schema and was an attempt to simplify referring to a single droplet entity in the db.

There are other structures that can make our database more open to future change. EAV for the plans would be a great option! Plans change, and new ones are added. EAV would handle this beautifully.
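An EAV sketch for the plans, just to illustrate (names are assumed; use with the caution noted earlier):

```sql
CREATE TABLE PlanAttribute
(
    PlanAttributeID int IDENTITY PRIMARY KEY,
    AttributeName varchar(50) NOT NULL -- e.g. 'DeductibleAmount'
);

CREATE TABLE PlanAttributeValue
(
    PersonPlanID int NOT NULL,         -- which plan the value belongs to
    PlanAttributeID int NOT NULL REFERENCES PlanAttribute (PlanAttributeID),
    AttributeValue nvarchar(200) NOT NULL,
    CONSTRAINT PK_PlanAttributeValue PRIMARY KEY (PersonPlanID, PlanAttributeID)
);
```

New plan attributes become rows instead of schema changes - which is exactly the flexibility, and the complexity, that EAV trades in.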





Of course, in this case the plans will not change that often or by that much, so EAV is unnecessary, since it adds complexity.

In any case, my preference is to have an ID for each table. The PersonID and Year are still necessary and can be used for identifying the relationship, but this way the plan has its own identity, and other entities related to the plan can refer to it directly.
Guess we will see how it goes.

Thursday, July 3, 2014

Time is Fleeting

When you think something will take only 2 hours, and you didn't really look at what needs to be done, you may end up spinning your wheels and burning up a whole day before you know it.

I learned this lesson (once again) today, and on reflection I should have put a bit more thought into the whole amount of work that needed to be done, rather than just the development task in a bubble.

One thing I definitely should have done is consider the technology at hand and how foolish it is to expect it to behave as I would've wanted it to. K2 SmartForms (WebForms via a custom designer interface) doesn't always play nice with customizations. That's one input to the time equation.

The next is the custom control that plugs in an autocomplete. I didn't know its limitations either, until now.

Then there is deployment, set-up time, configuration, troubleshooting, testing, etc.

Don't forget to think of all the work and any technical debt or knowledge debt involved when giving an estimate. Put it on a list, or in your work tracking system along with the coding tasks so you and your managers can plan accordingly.

Wednesday, July 2, 2014

NuGet HintPath Fix/Workaround

So we're using NuGet and everything is awesome! We have a solution with projects - maybe custom extensions or helpers - that we will use in all our projects, and life will be great! Now we bring those projects into another solution as part of this big project. We keep our concerns separated in separate solutions, of course, and share some projects that will help keep things cool and aligned. Then we build the solution and WHAT HO! What is this crazy NuGet error?

So we spend a bunch of time trying to resolve this (how many dev hours worldwide?). We find that the issue lies in the way NuGet installs a reference to the packages folder in our projects. The path used is relative to the project. If the packages don't exist in the other solution's packages folder, the build fails. You can resolve this in three ways:

Option 1: Edit the proj files to use $(SolutionDir)/packages/... Well, this doesn't work when the proj is built from MSBuild directly without a solution, but it's pretty good.

Option 2: Build the dependent solution first.

Option 3: Add something like this to the proj file (PackagesDir here is just an illustrative property name):

<Choose>
  <When Condition="'$(SolutionDir)' == '' Or '$(SolutionDir)' == '*Undefined*'">
    <PropertyGroup>
      <PackagesDir>$(MSBuildProjectDirectory)\..\packages</PackagesDir>
    </PropertyGroup>
  </When>
  <When Condition="'$(SolutionDir)' != '' And '$(SolutionDir)' != '*Undefined*'">
    <PropertyGroup>
      <PackagesDir>$(SolutionDir)packages</PackagesDir>
    </PropertyGroup>
  </When>
</Choose>

Then replace '../' in the NuGet HintPaths with $(PackagesDir) so the references resolve whether or not a solution is loaded.

Visual Studio templates vstemplates part 2

Yesterday's lesson was a catch-up to today's lesson learned. Actually, there were several, which will end up in separate posts since the topics vary. This lesson is about creating new solutions from template collections. After a custom template is made available and you create a new solution from that template, all of the projects are created in their respective folders under a folder with the same name as the solution.

The good: the solution is well organized.

The bad: if you have NuGet packages installed as part of the template, the references will be mangled, since they point to ../packages but need to point to ../../packages now that the projects sit one folder deeper.

The ugly: there are several ways to resolve this issue until NuGet gets rid of this reference issue for good (another lesson learned today).

Option 1: close the solution, move the project folders up one folder, open the sln file in Notepad++ or some other text editor, and change the project refs to reflect the new location.


Option 2: edit the proj files in the template collection folders to ref nuget package one level up.

Option 3: edit the proj files in your new solution to point to the correct place.

Option 4: uninstall and reinstall all the NuGet packages in the solution. Better take notes of the versions and use the NuGet console commands, because the package manager won't give you the same version.

Option 5: edit the proj templates to reflect the fix in my next post.

There may be other options; so far I went with the first one. I found that you have to close the solution before you edit the sln file, or VS will resave it from its cache over your changes.

Another thing I learned about vstemplates is that they get cached by VS after you create a couple of projects. If you edit the templates, you need to clear them from the template cache, and once you do that, you also need to restart VS. The template cache is located in appdata/roaming/microsoft/visual studio/{{version}}/project templates or something like that.
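If clearing the cache folder by hand gets tedious, another approach sometimes suggested is asking VS to rebuild its template cache itself, from a VS developer command prompt (usually elevated):

```
devenv /installvstemplates
```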

Tuesday, July 1, 2014

Visual Studio 2013 project templates

.vstemplate is the extension.

While in the process of creating some boilerplate templates for solutions I learned quite a bit. The reason for creating these templates is that we have identified our best practices for using WCF services to wrap dbs and 3rd party APIs in services that we control. Other services related to business logic, workflow and logging have their own solution patterns that are pretty standard. I was searching for a way to automate new services via solution templates. I found that Microsoft surfaces the ability to create custom project templates for Visual Studio that can be used to do just that. There are a few caveats and I can't get everything I need to fully standardize the process without some manual steps or writing custom code, but it's a good start.

There are two basic styles of project templates to choose from - single project and multi project. There is no concept of a solution template. A multi-project template is what I was able to use to create solutions that contain a collection of projects. Here's how it works and some lessons learned that they don't tell you in the limited documentation.

A template collection is designed in a vstemplate file. This file references a vstemplate file for each project you want to include in the solution. The individual project templates point to the items to include in the project (proj file, classes, folders etc). These are single project files.
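A sketch of what the root (multi-project) vstemplate might look like; the names, descriptions, and paths are invented for illustration, but Type="ProjectGroup", ProjectCollection, and ProjectTemplateLink are the real schema elements:

```xml
<VSTemplate Version="3.0.0" Type="ProjectGroup"
            xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
  <TemplateData>
    <Name>WCF Service Solution</Name>
    <Description>Boilerplate service solution</Description>
    <ProjectType>CSharp</ProjectType>
    <DefaultName>MyService</DefaultName>
  </TemplateData>
  <TemplateContent>
    <ProjectCollection>
      <!-- each link points at a single-project vstemplate in its own folder -->
      <ProjectTemplateLink ProjectName="$safeprojectname$.Interfaces">
        Interfaces\MyTemplate.vstemplate
      </ProjectTemplateLink>
      <ProjectTemplateLink ProjectName="$safeprojectname$.DTOs">
        DTOs\MyTemplate.vstemplate
      </ProjectTemplateLink>
    </ProjectCollection>
  </TemplateContent>
</VSTemplate>
```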

The documentation says to create a project and export it via VS File menu. This does some work to the proj file and creates the vstemplate. The collection of files is zipped up and put in the My Templates folder of the VS/Projects folder in Documents. If you selected the import option (checked by default) it also puts a copy in the ../Projects/templates folder that makes it available to VS when you create a new project.

The way I created the "solution template" was to create a subfolder for that solution type in the Visual C# folder of ../templates. I moved each proj zip into it, unzipped each zip's contents into its own folder, then deleted the zip files. The multi-project vstemplate sits at the solution template root under ../templates/Visual C#, and it refs each project template.
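The resulting layout looks roughly like this (folder and file names invented):

```
Templates/ProjectTemplates/Visual C#/
  WcfServiceSolution/
    WcfServiceSolution.vstemplate    <- multi-project (root) template
    Interfaces/
      MyTemplate.vstemplate          <- single-project template
      Interfaces.csproj
    DTOs/
      MyTemplate.vstemplate
      DTOs.csproj
```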

The good: We can now create solutions with all the necessary projects included.

The bad: Haven't found a way to include solution level items in the resultant solution.

The Ugly: We use post-build scripts to set up the service locally in IIS when the configuration is set to Debug. The AppPool name is ProductServiceVx, where Product should be only the product name. I thought I could use $safeprojectname$, which should be the user-entered value from the new project dialog. This is not so in our situation.

So there are some manual steps for us. Here they are in a nutshell, along with how we ended up in this situation.

Create a new project in VS.

Pick a single name that represents the product; for EmployeeDataService, call it EmployeeData. All of the projects will be named using it as the namespace and assembly name! Even the existing classes. In fact, you can use $safeprojectname$ in any file, as long as you set ReplaceParameters to true on that item's element in the vstemplate.
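For example, a project item in the single-project vstemplate with parameter replacement turned on (the file name is illustrative; ReplaceParameters and TargetFileName are the documented attributes):

```xml
<ProjectItem ReplaceParameters="true" TargetFileName="Settings.cs">Settings.cs</ProjectItem>
```

Inside Settings.cs, a line like "namespace $safeprojectname$" then becomes "namespace EmployeeData" when the project is created from the template.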

But wait! I want the proj names to be PeopleData.Interfaces, PeopleData.DTOs etc... They are, but then $safeprojectname$ resolves to PeopleData.DTOs inside that project, so the namespaces follow suit and my post-build has PeopleData.DTOsVx as the AppPool name. Not what I want. Try as I might, I cannot find documentation on how to affect this declaratively. Hence the manual steps to crack open the projects and make adjustments, which opens up some risk of errors. It's palatable, though, since it takes so much work to set up a solution with 10 projects including dependencies.

Speaking of dependencies, you still have to enable NuGet package restore and bring in the external dlls and projects that these depend on. But again, this is minimal compared to the overall time it takes to set up these solutions by hand.

Speaking of time-consuming, it did take a while to create the initial solution, then export each project and define the master vstemplate, but I hope it proves to be worthwhile in the end.