Thursday, June 24, 2010

TFS Migration: Sharing common code

In this latest installment of recording our team's migration to Team Foundation Server 2010, I'll describe how we organize and share our common code libraries.

The scenario is quite common. We have an Application that provides specific line-of-business functionality. It leverages several Common libraries that provide underlying functionality used by all of the Applications built and supported by our team.

Previously, when using Visual SourceSafe, we would share these libraries by using the 'Add Project from Source Control' feature in Visual Studio to add the common projects to our Application's solution.

TFS guidance offered several approaches to sharing these libraries across Applications. The primary approaches are:
  1. Workspace Mapping - add a directory mapping to your workspace for the shared library so it is pulled down to your local machine alongside your application's directory structure. This is essentially a client-side solution, since Workspaces are managed on the local development system.
  2. Branching and Merging - use the improved Branching capability in TFS to branch the common project into the source control structure of the application project. This is essentially a server-side solution, since the branched code is maintained by Source Control and will be updated by anyone who does a 'Get Latest' on the project directory.
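As a rough sketch (the server paths and folder names here are illustrative, not our actual structure), the two approaches lay out the source tree differently:

```
Approach 1: Workspace Mapping (client side)
  $/Common/Main/Utility  ->  C:\src\MyApp\Utility  (extra mapping in each developer's workspace)
  $/MyApp/Main           ->  C:\src\MyApp

Approach 2: Branching and Merging (server side)
  $/Common/Main/Utility      (source branch)
  $/MyApp/Main/Utility       (branched copy; refreshed by merging from $/Common)
```

In the first layout, the shared code lives only under $/Common and each developer wires it in locally; in the second, the branched copy is part of the Application's own tree on the server.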

We have adopted the Branching and Merging approach, but not without a lot of experimentation with both methods. In the end, here were the key factors in our choice.

  1. Server side vs. client side. A big part of our goal in organizing our projects is the ability to set up a new development environment quickly. The server side Branch and Merge strategy supports this goal by eliminating one more step (that is, configuring the Workspace) in preparing a new dev environment.
  2. Isolation of potential changes. While it is more and more infrequent, changes do occur to the common libraries. By leveraging a Branch, the branched copy of the common project is isolated from all other consumers of that library until the changes are merged into the Main branch for the common library. If we were using the Workspace mapping approach, a check-in of changes would go straight to the Main branch of the shared project, which could cause a ripple effect through other Applications that are using the common code.

Still, there are other implications and adjustments for our development team, primarily around propagating changes. When a change to a common library is merged back to the Main branch, and it is appropriate to propagate those changes to all other consumers, a new effort is required to forward-merge the common code out to its consumers. This effort is compounded if the consuming Application itself has multiple branches that would all need the updates. For our team, however, this is an acceptable trade-off, since a) changes are infrequent, and b) the number of consuming applications is manageable. Furthermore, as we continue to expand our automated Build setup, propagation may be something that can be scripted as part of a nightly build of the common projects.

For more information about these approaches, you can check out the Patterns & Practices guidance for Team Development with TFS.

Thursday, June 17, 2010

TFS Migration: Branching and Merging Strategy

As we evaluate and plan our migration to Team Foundation Server 2010, adopting a Branching and Merging strategy for our source control projects has demanded a lot of our attention.

While using Visual SourceSafe, our team rarely, if ever, used Branching to isolate projects so they could be developed or supported in parallel with our main effort. VSS wasn't so great at executing branching and merging, and we, like many, chose to err on the side of caution and forgo branching altogether.

In TFS Source Control, branching and merging have been improved to the point where we as a team can have some confidence that executing a branch or merge operation won't lead us to spend more time cleaning up than doing the development work itself.

In the end, we have largely concluded to use branches to isolate our releases, but not to isolate our normal development during a sprint. As such, our Main (or Trunk) branch will be the primary (and consequently slightly unstable) code line. After each release, we will create a branch for that deployed version of the product for maintenance purposes.
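To illustrate (the branch names here are hypothetical), the resulting structure looks something like this:

```
$/MyProduct
  /Main              (primary development line; slightly unstable)
  /Releases
    /Release-1.0     (branched from Main at release; maintenance fixes only)
    /Release-1.1
```

Day-to-day sprint work happens in Main; a Release branch is only touched to patch the version deployed from it.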


Here are some of the key factors in our decision:

  1. Branches require a level of "care and feeding" that (for our relatively small team, at least) would not have yielded any great value for our normal development sprint. We rarely have divided teams working on the same product, or long-running feature development in parallel with working our product backlog.

  2. Branching complicates IIS-hosted applications. We chose to use IIS, rather than the Visual Studio Development Server, to host our applications during development. Consequently, every branch requires a reconfiguration of IIS to point to the site's directory in that branch. While this reconfiguration can be scripted (as demonstrated here), it is still another step that the developer has to remember to perform.

  3. Branching is further complicated when you have shared code libraries that must be replicated to every new branch. While there are some clever approaches to managing workspaces or using Build scripts to update branches, you can avoid the whole issue by minimizing the branches you create.

  4. We do not have a formal QA team that would require a stable Main branch for testing purposes. Our adoption of Agile is such that our peer reviews and testing are integrated into our sprint, and consequently we are not 'handing off' our product to another team to exercise. If we had a QA team, it certainly would be more important to keep a highly stable Main branch, and that would merit spawning a new branch for ongoing development. In our case, it's just not necessary.


It is nice to know that TFS Source Control makes branching and merging a more trustworthy operation, and that the tools are there when we need them. But for now, we'll take the simple road.

Friday, June 11, 2010

TFS Migration Resources

As I research and prepare for our migration to Team Foundation Server, a variety of resources have emerged to assist in our planning. It seemed appropriate to list these items and provide some commentary on their content and relevance to our efforts.

Codeplex based resources
patterns & practices Team Development with TFS Guide (Final Release)
This online guide provides a decent overview of how to leverage TFS for your development team. While it has not been updated (as far as I can tell) to reflect TFS 2010, many of the concepts and descriptions remain relevant and applicable to the latest version of the product.

Visual Studio TFS Branching Guide 2010
One of the major choices we are facing in the midst of our migration is whether to move into using branching and merging as part of our development process. The guides provided by the so-called 'Visual Studio ALM Rangers' give very clear overviews and explanations of the various strategies surrounding branching your code and organizing your project.

MSDN based resources
Team Foundation Server 2010 Resources page
MSDN, true to its mission, provides plenty of source material about TFS directly from Microsoft. You can start with the TFS Installation Guide, then move on to Getting Started with Visual Studio Application Lifecycle Management to get a good basic understanding of the parts and pieces that make up the TFS product.

Blogs

The Woodward Web
Brian Harry - one of the original developers of SourceSafe, now part of the Team Foundation Server product team at Microsoft.

As I find additional resources, I'll update this list.

Wednesday, June 9, 2010

Migrating to Team Foundation Server: Preamble

About a year ago, I led an effort to introduce the organization where I work to automated unit testing, continuous integration and automated deployment as part of the development process. We implemented CruiseControl.NET, wrote and refined dozens of NAnt scripts, and in the end, established a working CI environment for our project, complete with nightly builds, one-click deployment to our test environment, and execution of NUnit tests, code analysis and creation of API documentation.

This year, Microsoft released Visual Studio 2010, and with it a new release of Team Foundation Server, and our team has come together to work towards implementing TFS in our environment. With that implementation, the existing CruiseControl.NET build environment will give way to TFS Build, and I will use this blog to document and share the planning, process, and lessons learned as part of migrating a medium-sized application into TFS as Team Projects.

In the coming weeks, we'll be making decisions about things such as:
  1. How to structure our projects in TFS Source Control.
  2. How to organize our Solutions and Team Projects.
  3. How to map our current CI process to the TFS Build workflow.
  4. How to manage our Agile/Scrum process using TFS Work Items.

As a prelude, here's a brief overview of the application we're working with.

Ours is an intranet application for project management and workflow handling. The basic structure follows an n-Tier architecture, though we are using Dependency Inversion. The component parts include:

  • An ASP.NET Web site project
  • A Utility library for common functions.
  • A DataAccess library for database access.
  • A Workflow library for common workflow functions.
  • A Core library, defining our business entities, data access interfaces, and service methods.
  • A Data library, which implements our actual database calls.
  • A Controller library, which is a go-between for the Web Site pages and our Core layer.
  • Two libraries containing integrations with 3rd-party product APIs.
  • Several unit Test libraries for the Core, Controller and Workflow libraries.

All told, our Visual Studio solution contains 16 projects.

We also have a project containing our common NAnt build scripts and CruiseControl.NET configuration files, and each individual project has a NAnt build script governing the build for that assembly.

The Utility, DataAccess and Workflow assemblies are shared with other applications.

Finally, we use about a half-dozen 3rd-party assemblies, including the Microsoft Enterprise Library, Telerik UI controls, and Aspose.Words.

In planning our migration, we must continue to allow the shared assemblies to be accessible to and used by other projects, as well as maintain all the functionality that NAnt and CC.NET provide.

Wednesday, May 27, 2009

Spawning a new window when a form is posted using Html.BeginForm

While it is not immediately evident, the following usage will spawn a new window displaying the results of a form post.


<% Using Html.BeginForm("Print", "Secure", Nothing, FormMethod.Post, New With {.target = "_blank"}) %>
    <%-- form fields and submit button go here --%>
<% End Using %>



It turns out the form tag has always supported the target attribute; you just need to instruct the BeginForm helper to render it.

Tuesday, May 5, 2009

Determining the run time of a process.

I have had several cases through my professional career where I have developed background processes that handle recurring tasks, such as renewing subscriptions or retrieving history for a batch of customers. These are normally implemented as Windows Services, which means I generally log the status of the process, as well as any summary data for its last execution, to the Application Event Log.

One piece of data I like to include in my summary log entry is how long a processing cycle took to execute. This data point is helpful for routine benchmarking, as well as determining if a process has gone rogue and is taking unusually long to complete (possibly due to a remote service latency or something like that.)

Using the Date-related objects provided in .NET makes it easy to determine and report this execution time.

First, define some variables for the summary report:


Dim startTime As DateTime = DateTime.Now()
Dim recordsToProcess As Integer = 0
Dim recordsProcessed As Integer = 0
Dim recordsErrored As Integer = 0


I am typically pulling records to process from a database query, so setting up the processing loop is pretty straightforward.


recordsToProcess = dataSetWithRecords.Tables(0).Rows.Count

For Each drCurrentRecord As DataRow In dataSetWithRecords.Tables(0).Rows
    'do processing here
    Try
        'execute task
        '...
        'update success counter
        recordsProcessed += 1
    Catch ex As Exception
        'handle error state
        'update error counter
        recordsErrored += 1
    End Try
Next

Now I can capture and calculate the processing time. I use the DateDiff() method to determine the number of seconds between the start and end timestamps. Then, using the TimeSpan object, I have an easy-to-use representation of the elapsed time that I can put in my summary message.


Dim endTime As DateTime = DateTime.Now
Dim timeElapsedSeconds As Double = DateDiff(DateInterval.Second, startTime, endTime)
Dim timeElapsedSpan As TimeSpan = New TimeSpan(0, 0, CInt(timeElapsedSeconds))

Dim sbSummary As New System.Text.StringBuilder
sbSummary.AppendLine("Agent Results:")
sbSummary.AppendFormat("Total records to process: {0}", recordsToProcess.ToString)
sbSummary.AppendLine()
sbSummary.AppendFormat("Records processed successfully: {0}", recordsProcessed.ToString)
sbSummary.AppendLine()
sbSummary.AppendFormat("Records with errors: {0}", recordsErrored.ToString)
sbSummary.AppendLine()
sbSummary.AppendFormat("Processing Time: {0} Hours, {1} Minutes, {2} Seconds", timeElapsedSpan.Hours, timeElapsedSpan.Minutes, timeElapsedSpan.Seconds)
sbSummary.AppendLine()

LogMessage(sbSummary.ToString)

This approach is reliable, accurate, and keeps me from having to manually calculate the elapsed time.
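As an aside, there is an even simpler variation on the same idea: subtracting two DateTime values yields a TimeSpan directly, which avoids the DateDiff() call and the integer conversion altogether. A minimal sketch, reusing the LogMessage helper from above:

```vb
Dim startTime As DateTime = DateTime.Now

'... perform the processing loop here ...

'DateTime subtraction returns a TimeSpan directly
Dim timeElapsedSpan As TimeSpan = DateTime.Now - startTime
LogMessage(String.Format("Processing Time: {0} Hours, {1} Minutes, {2} Seconds", _
    timeElapsedSpan.Hours, timeElapsedSpan.Minutes, timeElapsedSpan.Seconds))
```

Either way the TimeSpan gives you the Hours, Minutes, and Seconds components for the summary message.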

Monday, May 4, 2009

Namespaces and Partial Classes

This post goes in the 'finally figured out why my code wasn't working like I expect it' category.

More and more, Visual Studio and various .NET project items use the .designer file, a partial class, to hold code generated by VS. Most of the time, you never need to worry about the .designer file.

I encountered a situation recently while creating a Windows Service in VS2008. In my Service class (which has a .designer file) I changed the Namespace. Everything compiled great, and the Windows Service installer would successfully deploy and register the service. But when I would try to start the service, nothing would happen. My code would never run.

In the end, I remembered that the Main() method of the Service is located in the .designer file. As soon as I added the matching Namespace to the .designer file, rebuilt and redeployed, the service started up without any problem.
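A minimal sketch of the situation (the namespace, class, and file names here are hypothetical): the halves of a Partial class combine only when they declare the same Namespace, so a mismatch leaves the designer-generated half, including Main(), pointing at a class that no longer contains your code.

```vb
' MyService.vb
Namespace MyCompany.Services
    Partial Public Class MyService
        Protected Overrides Sub OnStart(ByVal args() As String)
            'startup logic here
        End Sub
    End Class
End Namespace

' MyService.Designer.vb - must declare the SAME Namespace;
' otherwise the compiler sees two unrelated classes
Namespace MyCompany.Services
    Partial Public Class MyService
        'designer-generated code, including the service's Main() entry point
    End Class
End Namespace
```

If only one file gets the new Namespace, each half may still compile on its own, which is why the problem shows up at run time rather than at build time.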

I personally like to use Namespace entries on every class rather than using the Root Namespace entry on the Project properties. I now know that I need to make sure any Partial classes also have the matching Namespace designation. Lesson learned.