Friday, March 21, 2014

My TDD Happiness Graph

I'm helping my friend Mayur with TDD and he has his frustrating moments, as is evident from his tweet.

This happened to me when I was learning TDD. It happens to everyone. The key is to not get demoralised and to keep going. I did not know this in the beginning, so I gave up. Then I came back again. I wanted to share this experience in the best way possible, and I came up with the following - My TDD Happiness Graph.

Take a moment and look at the graph. There are a few interesting things in there:

  1. It takes time to master the art of TDD (I am nowhere close to mastering it yet). 
  2. In the beginning you will have a lot of "turn off" moments with TDD. The more experienced you are, the more of these moments you will have.
  3. The real kick start to my TDD learning was when I paired with someone who knew how to test drive my code.
  4. For a long time, I was only test driving my C# code, leaving my JavaScript to the mercy of time. It took me a while to realise that JavaScript can be tested as effectively as any code in a statically typed language. The tooling support for testing JavaScript code is not spectacular, but it is not disappointing either.
  5. I have indicated that my happiness took a dip with dependency injection (DI). That may be misleading, hence a clarification. TDD makes it easy to inject dependencies and mock them during tests. This made life so easy that I made the mistake of applying DI without giving consideration to SOLID and code maintenance. Over time I learned how to tame DI, but the mistakes I made are worth mentioning.
  6. In the end, I have shown myself as having reached "TDD heaven". I do not know exactly what this means; it is just something that came to my mind. It is a state where every time you code or think about code, you start with tests. Things like SOLID automatically follow, or at least TDD makes it easy to refactor your code (brutally if needed) in order to achieve SOLID. 
One last thing: this is my experience with TDD and yours may be different. But I am sure you have had your own happiness crests and troughs on the journey.

I tried to make Visio and Blogger work nicely with each other but failed. If the image looks blurred, a high-resolution version is available here.

TDD Happiness Graph


Monday, March 17, 2014

Improve command prompt experience on Windows

I always find it frustrating that the command prompt on Windows is not as great as a *nix shell, so I am always on the lookout for cool things that improve my command prompt experience on a Windows box. I have had a fair amount of success in keeping the Windows command prompt interesting by using a combination of tools, tricks and a package manager for Windows. I am going to talk about these today.


1. Tools


If you are like me then you have had those frustrating moments with the Windows command prompt when you have to jump through hoops of clicks to complete trivial tasks like copying and pasting commands. Limitations like these make the Windows command prompt one of the least interesting tools. But you are not alone, and some great people have built tools that add a new dimension to the productivity of the Windows console. Here are three that I have used so far, in order of my preference.

Do not download any of these just yet. In the last section I talk about a Windows package manager which makes installing them a breeze.

ConEmu (Console Emulator)

Here is a one-line description of ConEmu from their site:
ConEmu-Maximus5 is a Windows console emulator with tabs, which presents multiple consoles and simple GUI applications as one customizable GUI window with various features.

What I like most about ConEmu is that it comes loaded with a lot of simple features, like selecting text to copy it to the clipboard. It also comes with default colour schemes, so you do not have to fiddle with the colour settings of every odd console element as you do in the Windows command prompt. ConEmu is quite lightweight and starts in no time on my VM.

Scott Hanselman has written about how he is using ConEmu, which I recommend reading first if you plan on using this emulator.

Cmder

Again, here is the description of Cmder from their site
Cmder is a software package created out of pure frustration over the absence of nice console emulators on Windows. It is based on amazing software, and spiced up with the Monokai color scheme and a custom prompt layout. Looking sexy from the start.

Cmder, IMO, offers a better experience than ConEmu and feels closer to the Ubuntu shell (which I like). But Cmder is heavy and slows down at times.

Console2

Again, here is the description from the Console2 website:

Console is a Windows console window enhancement. Console features include: multiple tabs, text editor-like text selection, different background types, alpha and color-key transparency, configurable font, different window styles

I do not have a great deal of experience with Console2 so I cannot say much about it. Scott Hanselman has a review of Console2 which you should go through if you need to know more before installing.


2. Tricks


There are no tricks but only one trick, so the heading is a bit misleading. Never mind; this one trick is going to relieve you of a lot of frustration. I was introduced to it by my friend Peter Camfield. It is a combination of DOSKEY macros from Ben Burnett and the AutoRun setting for the Windows command prompt. The trick lets us create nice shortcuts and aliases for commonly used commands, e.g. you can use "ls" instead of "dir" if you come from a Linux background.

Copy the script from Ben's article and save it on your disk as alias.bat. Suppose you saved it at C:\alias.bat. Next, open the registry editor (type "regedit" at the command prompt if you do not know how) and go to the following location:


HKEY_CURRENT_USER\Software\Microsoft\Command Processor
If a string value "AutoRun" exists, append the path of your alias.bat to it. If not, create a new string value named "AutoRun" and set its value to "C:\alias.bat". In my case, I was already injecting ANSICON into my console, so I had to append my path as below:

(if %ANSICON_VER%==^%ANSICON_VER^% "C:\Program Files\ansi151\x64\ansicon" -p)&c:\alias.bat

Restart your command prompt and the aliases defined in alias.bat should be at your service. The true power comes from how smartly you define your aliases. My alias.bat file looks like the one below. Do not copy the file as is; I have replaced a couple of my paths with <<instructions>>. Replace those with your own paths.

        
;= @echo off
;= rem Call DOSKEY and use this file as the macrofile
;= %SystemRoot%\system32\doskey /listsize=1000 /macrofile=%0%
;= title= cmd with aliases
;= rem In batch mode, jump to the end of the file
;= goto end
;= rem ******************************************************************
;= rem *   Filename: aliases.bat
;= rem *    Version: 1.0
;= rem *     Author: Ben Burnett 
;= rem *    Purpose: Simple, but useful aliases; this can be done by
;= rem *             other means--of course--but this is dead simple and 
;= rem *             works on EVERY Windows machine on the planet.
;= rem *    History: 
;= rem * 22/01/2002: File Created (Syncrude Canada).
;= rem * 01/05/2007: Updated author's address, added new macros, a 
;= rem *             history and some new helpful comments.
;= rem * 19/06/2007: Added Notepad, Explorer and Emacs macros.
;= rem * 20/06/2007: Fixed doskey macrofile= path problem: it is now not 
;= rem *             a relative path, so it can be called from anywhere.
;= rem ******************************************************************

;= Doskey aliases
h=doskey /history

;= File listing enhancements
ls=dir $*
ll=dir /w $*

;= Directory navigation
up=d:\up.bat $*
down=popd
back=popd
pd=pushd

;= Copy and move macros
cp=copy
mv=move

;= Delete macros
rm=del /p $*
rmf=del /q $*
rmtmp=del /q *~ *# 2>nul

;= Fast access to Notepad 
np="C:\Program Files (x86)\Notepad++\notepad++.exe" $* 

;= Fast access to Explorer
x=explorer .

;= List aliases
alias=doskey /macros | sort

;= Which
which=where $*

;= Project specific stuff
showsvn=start <>
code=cd /d <>

;= Reference
ss64=start http://ss64.com/nt/

;= IIS specific stuff
wp=c:\Windows\System32\inetsrv\appcmd.exe list wp
recycle=c:\Windows\System32\inetsrv\appcmd.exe recycle apppool /apppool.name:$*

;= Edit host file
hosts=gvim c:\windows\system32\drivers\etc\hosts
;= :end
;= rem ******************************************************************
;= rem * EOF - Don't remove the following line.  It clears out the ';' 
;= rem * macro. Were using it because there is no support for comments
;= rem * in a DOSKEY macro file.
;= rem ******************************************************************
;=

The above file needs the following script, saved as up.bat at the path referenced by the "up" alias above.

@ECHO OFF
PUSHD .
SET NUM=%1
IF [%NUM%]==[] SET NUM=1
FOR /L %%G IN (1,1,%NUM%) DO CD ..

Here are some of the cool aliases defined above:

wp lists IIS worker processes and their PIDs
recycle <apppool name> recycles the apppool; wp gives you the apppool name
up <number> navigates "number" levels up from the current directory
x opens file explorer
hosts opens the hosts file in gvim
code changes directory to the source code path
np opens Notepad++

Then there are Linux-equivalent shortcuts for commonly used commands like dir, where, del etc.


3. Package Manager


If you have used Linux/Unix then you know how easy it is to install programs from repositories using a tool like apt-get. This simple utility makes working in a Linux shell a breeze and keeps you in the shell. For a long time, there was nothing similar in the Windows world. Fortunately the scene is changing, and Chocolatey is the first such package manager for Windows machines. Chocolatey is powered by NuGet and really takes NuGet to the next level, where it starts making sense. Just head over to their website for installation instructions, or if you are too lazy, type in the following (you will need PowerShell):
        
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%systemdrive%\chocolatey\bin

The above command downloads and installs Chocolatey for you, and it also sets all the PATHs needed. Just start a new command prompt and you are ready to get cracking with it.

You can run the following commands to install the console emulators I mentioned in the first section.

        
C:\Users\Suhas.Chatekar> cinst conemu
C:\Users\Suhas.Chatekar> cinst cmder
C:\Users\Suhas.Chatekar> cinst console2

Go give it a try and you will never feel like running away from the console again. These days I install almost everything from the command line using Chocolatey.

If you are using something to improve your experience on the Windows command prompt then please share. I would be delighted to learn from you.

Sunday, March 9, 2014

First step in test driving ASP.NET Web API application

This is a lengthy post. The sample code from this post is on GitHub here.

When I started learning TDD, I used to participate in a lot of kata and dojo sessions. In those sessions we would test drive simple algorithms, mostly ending up with a couple of simple, test-driven classes. That used to feel great and I would always come back to my desk with great enthusiasm. But then, when I looked at my next task, I was left clueless. Most of the time, the task would be about adding a new feature to an existing website or web service, which is more complicated than the couple of classes we would build in a dojo session. I would struggle to decide which test to write first.

Over time, and after learning from a lot of mistakes, I think I have now found a rhythm. These days I follow Red-Green-Refactor starting at a higher, acceptance test level, but I will talk about that in detail some other day. In this post I want to talk about how I start test driving new features in an ASP.NET Web API application. 

Web API is all about building REST services, and the first thing we should ideally design when building REST services is the resource. Once resources are finalised, we should give thought to the URLs where those resources will be available or created. Let's take an example. Suppose we are building a REST service that lets us manage customers. The first thing we do is define a resource named Customer and then define what operations we need on this resource, e.g. you would want to be able to POST to some URL in order to create a customer. The following expands this information:

Create Customer 

POST /api/Customer

Get Customer

GET /api/Customer?id={CUSTOMER_ID}

Update Customer

PUT /api/Customer?id={CUSTOMER_ID}

Delete Customer

Not supported - Deleting a customer is not allowed. 
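As a point of reference, the URLs above can be satisfied by the convention-based route that the Web API project template generates. A sketch of what WebApiConfig.Register might contain (this mirrors the standard template code; nothing here is specific to the Customer resource):

```csharp
using System.Web.Http;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Convention-based route: /api/{controller}, with an optional id segment.
        // A request like GET /api/Customer?id=1001 matches this route and
        // binds "id" from the query string to the action's parameter.
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
    }
}
```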

At this point I am ready to dive into the code; so far all the design has happened on a piece of paper. So which test should we write first? A route test looks very logical at this stage.

Route tests

Route tests verify that the routes we need are defined and are handled by the controller we expect. The test can also verify that the correct action on the controller is invoked. So let's write our first test:

        [Test]
        public void GetCustomerIsHandledByCustomerController()
        {
            var httpConfiguration = new HttpConfiguration(new HttpRouteCollection());
            WebApiConfig.Register(httpConfiguration);

            var request = new HttpRequestMessage(HttpMethod.Get, "http://dummylocalhost/api/Customer?id=1001");
            request.Properties[HttpPropertyKeys.HttpRouteDataKey] = httpConfiguration.Routes.GetRouteData(request);

            var controllerBuilder = new TestControllerBuilder(request, httpConfiguration);

            Assert.That(controllerBuilder.GetControllerName(), Is.EqualTo("CustomerController"));
            Assert.That(controllerBuilder.GetActionName(), Is.EqualTo("Get"));
        }

In the above test, we are calling the WebApiConfig.Register method. This method is added by default when you create your Web API project and defines all the routes on the HttpConfiguration instance passed into it. For the purpose of the test, we create an instance of HttpConfiguration and pass it in, so that we get back an HttpConfiguration with all routes configured as per our application logic. 

We then create an instance of HttpRequestMessage whose URL points at "api/Customer?id=1001" and which uses HTTP GET. This is in line with the resource and URL design we did in the first section. We then pass this into a TestControllerBuilder class. The code inside this class is interesting: it takes both the HTTP request and the HTTP configuration and tells us which controller, and which method on that controller, would be invoked to handle the request. We then assert that the controller and action names are correct. Here is the code for TestControllerBuilder:

public class TestControllerBuilder
    {
        private readonly ApiControllerActionSelector actionSelector;
        private readonly HttpControllerContext controllerContext;
        private readonly HttpControllerDescriptor controlleDescriptor;
        private readonly HttpRequestMessage requestMessage;
        private readonly HttpConfiguration httpConfiguration;

        public TestControllerBuilder(HttpRequestMessage request, HttpConfiguration httpConfiguration)
        {
            var routeData = request.Properties[HttpPropertyKeys.HttpRouteDataKey] as IHttpRouteData;
            controllerContext = new HttpControllerContext(httpConfiguration, routeData, request);
            IHttpControllerSelector controllerSelector = httpConfiguration.Services.GetHttpControllerSelector();
            controlleDescriptor = controllerSelector.SelectController(request);
            controllerContext.ControllerDescriptor = controlleDescriptor;
            actionSelector = new ApiControllerActionSelector();
            this.httpConfiguration = httpConfiguration;
            requestMessage = request;
        }

        public string GetActionName()
        {
            var actionDescriptor = actionSelector.SelectAction(controllerContext);
            return actionDescriptor.ActionName;
        }

        public string GetControllerName()
        {
            var controllerType = controlleDescriptor.ControllerType;
            return controllerType.Name;
        }

        public HttpControllerContext HttpControllerContext
        {
            get { return controllerContext; }
        }
}

There is not much happening here. We first build a ControllerContext using the HttpConfiguration and the request. We then retrieve the IHttpControllerSelector, which in most cases is DefaultHttpControllerSelector. Pass the request to it and it tells you which controller and action would be used to serve the request.

The test initially fails with an exception saying "404 NOT FOUND". This is down to the implementation of DefaultHttpControllerSelector: if no controller matching the route is found, an HTTP exception is thrown.

Go ahead and add a new API controller named CustomerController with a Get(string id) method. Run the test and it will pass. From this point onwards, you can either test drive the complete implementation of the "Get Customer" API or add route tests for the other customer operations defined in the first section.
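For completeness, a minimal sketch of such a controller is below; the body is a placeholder, and the real retrieval logic would be test driven from here onwards:

```csharp
using System.Web.Http;

public class CustomerController : ApiController
{
    // Handles GET /api/Customer?id={CUSTOMER_ID}; "id" is bound from the query string.
    public IHttpActionResult Get(string id)
    {
        // Placeholder response - just enough to make the route test pass.
        return Ok();
    }
}
```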

The sample code is available on GitHub here.

But there is a lot of boilerplate code here!!!

If you do not use NUnit then you may not be interested in this part.

If you have read my previous post, Fluent controller builder for unit testing Web API controllers, then you know that I am not a big fan of boilerplate code and I love fluent APIs. Let's see what we can do to hide the ugly side of this boilerplate and bring in a nice-reading fluent API.

NUnit has a nice system of custom constraints. Custom constraints convert tedious asserts into simple, nice-reading ones. For our example, I have built two constraints, ControllerEqualityConstraint and ActionEqualityConstraint, as below.


internal class ControllerEqualityConstraint : Constraint
    {
        private readonly string controller;

        public ControllerEqualityConstraint(string controller)
        {
            this.controller = controller;
        }

        public override bool Matches(object item)
        {
            var request = (HttpRequestMessage)item;

            if (request != null)
            {
                var httpConfiguration = new HttpConfiguration(new HttpRouteCollection());
                WebApiConfig.Register(httpConfiguration);
                request.Properties[HttpPropertyKeys.HttpRouteDataKey] = httpConfiguration.Routes.GetRouteData(request);

                var controllerBuilder = new TestControllerBuilder(request, httpConfiguration);
                return controller == controllerBuilder.GetControllerName();
            }

            return false;
        }

        public override void WriteDescriptionTo(MessageWriter writer)
        {
            writer.Write(controller);
        }
    }
public class ActionEqualityConstraint : Constraint
    {
        private readonly string action;

        public ActionEqualityConstraint(string action)
        {
            this.action = action;
        }

        public override bool Matches(object item)
        {
            var request = (HttpRequestMessage)item;

            if (request != null)
            {
                var httpConfiguration = new HttpConfiguration(new HttpRouteCollection());
                WebApiConfig.Register(httpConfiguration);
                request.Properties[HttpPropertyKeys.HttpRouteDataKey] = httpConfiguration.Routes.GetRouteData(request);

                var controllerBuilder = new TestControllerBuilder(request, httpConfiguration);
                return action == controllerBuilder.GetActionName();
            }

            return false;
        }

        public override void WriteDescriptionTo(MessageWriter writer)
        {
            writer.Write(action);
        }
    }

I then put them together using a third class like the one below:

public class IsHandledBy
    {
        public static IResolveConstraint Controller(string controller)
        {
            return new ControllerEqualityConstraint(controller);
        }

        public static IResolveConstraint Action(string action)
        {
            return new ActionEqualityConstraint(action);
        }
    }

With the above in place, my test now looks like this

[Test]
        public void GetCustomerIsHandledByCustomerController2()
        {
            var request = new HttpRequestMessage(HttpMethod.Get, "http://dummylocalhost/api/Customer?id=1001");

            Assert.That(request, IsHandledBy.Controller("CustomerController"));
            Assert.That(request, IsHandledBy.Action("Get"));
        }

Now does that not look nice? I would love to hear your feedback.

Friday, March 7, 2014

Is manual testing dead?

Personal disclaimer - I work in a team where every developer follows TDD and is capable of writing browser automation tests in BDD style using Cucumber/Ruby. We have a manual tester on the team and no automation tester. Other teams around me that I closely interact with have no manual testers, only automation testers.

TL;DR - The world of programming is moving very fast towards automating everything under the sun. But IMHO we are far from calling manual testing dead. 

The objective of this article is not to reason for or against the above argument. Automation is a big investment and a good one on most counts, but it may not pay off all the time. It is useful to understand in which situations that investment does not pay off. There is no rule of thumb for these situations; they are purely based on my experience with software I have built over the years. But before we get into that, let's try to understand what it costs to automate a test (or use case, user journey, scenario, whatever you name it). Formally this is called the "Total Cost of Ownership" of automating a test. It consists of the following:

  1. Cost of automating a test
  2. Cost of maintaining the automated test over time as your code changes
  3. Cost of manually testing edge cases not covered by the automated test

This inherently indicates that manual testing is still not out of order, but more on that later. The first point is interesting - the cost of automating a test. If you are automating simple tests through unit tests or integration tests, this cost is minimal. The investment here pays off very well, and you can afford to automate a large number of tests. The reduced cost is a direct outcome of the maturity of modern IDEs and testing tools. Today, you can hit the ground running with a unit or integration test in no time. It does not take long to build and run tests, and it is becoming easier and easier to repeatedly run them on more than one machine (thus making continuous integration cheaper). These tests are almost entirely written in the same language as the software under test (SUT), which makes things even easier. Maintaining these tests as the SUT changes is not hard. 

Things become a little more difficult when we move towards browser automation tests that let you automate a complete user journey. Agreed, this field is no longer new, and a lot of advancement has happened in this area in the last few years. We now have very stable tools like Watir and Selenium WebDriver that make programmatically interacting with browsers a breeze. But there are still some challenges:
  1. A different set of skills is required to build these tests. If your team is new to this kind of testing, they will take longer in the beginning to automate even simple user journeys.
  2. If the SUT changes, it takes longer to update these tests.
  3. In my experience, people prefer using Ruby for automation, so if you are a .NET or Java developer you need to invest in the cost of learning a new language.
  4. These tests are not as reliable as unit tests. They can be flaky, and a lot of the time their reliability depends on factors beyond the tests themselves, e.g. load on the machine, response times of the web pages etc.

By now, you should have some clarity on what I mean by TCO. A test might be cheap to automate in the first place but expensive to maintain over time.

Another important point of view when it comes to automated tests is how a test is faring in detecting defects - or how badly it is hiding them. As your code changes, your tests either do not change, change, or become completely irrelevant. If your tests are not changing, it is possible that they are failing to detect defects or are hiding them. If a test becomes completely irrelevant after a code change, the investment you made in automating it is lost. From this standpoint, it is worthwhile to ask yourself how frequently the code being tested is going to change. You might need to talk to BAs and product owners to get a clear answer, but if you are confident that the code is likely to change a lot in the near future, you need to be judicious about how far you go with automation (unit tests are exempt from this; I always write unit tests no matter what).

Having said all of the above, I feel there are situations where manual testing is the better option. Below is a list of such situations from my experience.

  1. Testing long running batch jobs - It is difficult to completely automate the testing of long running batch jobs. The core logic of a batch job can be unit tested, but automating a single end-to-end run of the job, though possible, is rarely worth the investment. Even when attempted, in my experience, it does not give enough confidence about the quality of the software and we resort to some level of manual testing in the end.
  2. Testing email content and delivery - Email delivery can be tested to some extent by writing emails to disk instead of delivering them, but then you are not really testing delivery. Also, testing the content of emails and how they render in browser-based email clients vs. desktop-based email clients is something that cannot be automated reliably enough. Moreover, there is no single standard when it comes to desktop and mobile phone clients. The combinations are just too many, and investment in automating this with the tools available today is not worth it.
  3. Testing of user interfaces - Browser automation tools can confirm the presence of a particular element on a page, but they cannot verify the look and feel of the element. There is some movement happening in this area and people are experimenting with different tools, but given the maturity of these tools, automating such testing is not reliable. It is best left for human eyes. (Here is a list of tools that let you automate testing of visual aspects of a web page, by the way.)
  4. Browser compatibility testing - Gone are the days when everyone on the planet used IE6. People now use tens of different browsers, and every product owner wants his product to work on any browser out there in the market. They do not want to lose revenue because we did not support a browser that a prospective client was using. So testing software on a vast range of browsers is inevitable. If you have some browser automation tests, you can go a step further and run your tests in multiple browsers; there are tools like BrowserStack that let you do exactly that. But this approach is not scalable given the time it would take to run all your tests on all possible browser combinations. And usually, the number of automated user journeys is not large enough to build confidence that the software works in all browsers. 
  5. Testing responsiveness of websites - "Responsive" is the buzzword today. Every website you build has to be responsive (well, not every site, but most sites). Testing the responsiveness of a website is very difficult to automate, because you would need to render the site on different devices in order to see how your pages scale. There are websites that render your site in different device resolutions to show you how your pages scale, but automating that is not very easy. This kind of testing is mostly done manually on real devices.
  6. Testing user journeys spanning multiple systems - Enterprise software usually has more than one component intricately communicating with each other. In order to reliably test such systems, the data flowing through one component into another needs to be verified. It is possible to break the problem down into logical units and test each unit with the support of unit tests. But again, you feel the need for one test that runs an end-to-end user journey and ensures that everything is where it belongs. Automating such tests would be time consuming, and they would also take a long time to run.
Each of the above situations is something I have experienced. For some, I attempted automating tests, got frustrated and gave up. For others, I had to rely on manual testing after getting issues from production users. All this makes me feel that manual testing is not dead. Rather, the role of manual testing has become more important. Automation has made it difficult for simple defects to creep in by mistake; what remain in the software are the defects that are not easy to find. So manual testers these days have the huge responsibility of finding the defects that automated tests have hidden. 

Tuesday, March 4, 2014

Why do we write tests?

We all write tests every day, of different types, from unit tests to browser automation tests. Occasionally I meet team members who are not sure whether they should write a particular test or not; if yes, why they should write it; or whether the test they are writing is a good quality test. An example of this situation is tests for FluentNHibernate mapping classes. The argument usually is that if you know how your table is structured, there is barely anything that needs to be designed or unit tested; if you know the rules of how to write mappings, things just work. Another example is the test below.

[Test]
public void RegistrationController_Index()
{
    var controller = new RegistrationController();

    var result = controller.Index() as ViewResult;

    Assert.That(result, Is.Not.Null);
}

This test has two obvious problems

  1. Just asserting that result is not null is not enough. We need a better assert. 
  2. The name of the test is misleading, or rather it is not leading us anywhere. 

My experience is that people who write such tests (or decide against writing important tests) fail to understand what value a test adds. To understand that, we need to answer this question - why do we write tests?

Now, I am no expert at TDD. The following paragraphs are my attempt at answering this question in a way that provides some pointers to help us understand what value a test adds.

Prevent induced defects - This is by far the most important reason why we write tests. We work in a world dominated by delivery pressure, where code is always changing. Any line of code we change can leave some other part of the application broken. Our tests should protect us from this. If you have tests that do not fail even when the functionality they are testing is completely changed, those tests are of no use. The test above is an example of such a situation: if RegistrationController is supposed to return the view Index.cshtml and I change it to return Register.cshtml, I might have broken some important feature, and this test does a bad job of telling me that.
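A minimal sketch of a stronger version of that test asserts on the view name, so that switching the returned view makes the test fail (this assumes Index.cshtml is the intended view; the test name is my own choice):

```csharp
[Test]
public void Index_ReturnsTheIndexView()
{
    var controller = new RegistrationController();

    var result = controller.Index() as ViewResult;

    Assert.That(result, Is.Not.Null);
    // An empty ViewName means MVC falls back to the view named after the
    // action ("Index"), so accept either the explicit name or the default.
    Assert.That(result.ViewName, Is.EqualTo("Index").Or.EqualTo(string.Empty));
}
```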

Safety net - This is the second most important reason behind writing tests. A well-written test acts as a safety net not only when you are refactoring your code but also when you are changing an existing feature. If, after changing the code, you have no failing tests, that means either your change has not altered any existing behaviour of the system or your tests are not good at detecting the alteration. Tests for NHibernate mappings are important from this standpoint: I would like to be warned when someone removes or modifies a property on my model that is mapped to an existing database column.
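For the mapping case, FluentNHibernate ships a PersistenceSpecification helper that saves an entity, reloads it and compares the checked properties. A sketch, assuming a Customer entity with a Name property and an ISession opened against a test database:

```csharp
using FluentNHibernate.Testing;
using NUnit.Framework;

[Test]
public void CustomerMappingRoundTrips()
{
    // "session" is assumed to be an NHibernate ISession bound to a test database.
    new PersistenceSpecification<Customer>(session)
        .CheckProperty(c => c.Name, "John Smith")
        .VerifyTheMappings(); // persists, reloads and compares each checked property
}
```

If a mapped column is removed or renamed without updating the model (or vice versa), this test fails, which is exactly the safety net described above.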

Drive design and implementation of features - This may not apply to unit tests, but it does apply to automated acceptance criteria and integration tests. A lot of times, I start my Red-Green-Refactor cycle with an automated acceptance criterion or an integration test. This leads nicely into the design of controllers, models and routes that efficiently satisfy the needs of the feature I am building.

Confidence in the quality of the software - A well written and complete automated test suite is an indicator of the quality of the software. I personally do not feel confident about the quality of the software if there are missing tests or tests that are not carefully written. For a lot of people, a high level of code coverage is a factor that leads to confidence in the quality of the software; for others it may be something else. But at the end of the day, the confidence does come from the quality of the automated tests.

Code documentation - By code documentation, I do not mean the technical documentation of your product. A lot of times, we deliver a feature and nothing happens around that feature for a few months. Then the product owner comes up with a great idea that needs changes to that feature. The people who worked on the feature originally are not around, and new people have to work on the change. They would first need to understand how the existing code works. They can look at the existing code if it is simple, but they would be better off going through the tests in order to understand the code. This works quite nicely.

Lower the frustration of testers - Testers on your team are going to be frustrated if they see a nasty defect that is beyond their means to reproduce. If you have testing developers on the team, they might find a way out on their own, but not always. It helps to think about what kind of uncontrolled environment your code would run in (e.g. multiple concurrent requests, load etc.) and try to come up with tests that can validate the code's behaviour in such situations. If you have such tests validating the behaviour of the software in a probable uncontrolled environment after every commit, then you have happy testers who will help with finding the more important issues and save your face in front of the client.
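As a rough illustration, here is a sketch of a test that exercises code under concurrent requests. RegistrationService and its members are hypothetical; the point is simply to fire the code from many threads at once and assert on the combined result:

```csharp
[Test]
public void Register_BehavesCorrectlyUnderConcurrentRequests()
{
    var service = new RegistrationService();

    // Simulate 100 concurrent registration requests.
    Parallel.For(0, 100, i => service.Register("user" + i));

    // If Register is not thread-safe, registrations will be lost or duplicated
    // and this assertion will catch it long before a tester does.
    Assert.That(service.RegisteredCount, Is.EqualTo(100));
}
```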

That was not a big list, huh? So every time I have a doubt about whether I should write that next test, or whether the test I have just written adds any value, I try to look for positive answers to one or more of the following questions.

  1. Will this test prevent defects induced by changes to the code being tested?
  2. Will this test provide the required level of safety net to other developers, or to me a few months later?
  3. Is this test helping me with design of the code I am going to write?
  4. Would this test increase my or my team's confidence in the quality of the software?
  5. Can this test act as good documentation of the feature being tested?
  6. Can this test help lower the frustration of other team members, especially testers?
Some questions are difficult to answer, and experience is the key to getting the best answers out. But I hope trying to answer these questions when you are in doubt will set you on the right path.

Monday, February 17, 2014

How to choose right technology for building scalable software

I am a member of a mailing list around start-ups and entrepreneurship in India. The group has some experienced entrepreneurs as members, and the mailing list is always flowing with good conversation and advice. Recently someone asked what technology they should use to build their next piece of software. They wanted scalability and performance built into the software from day one and wanted the right programming framework for the job. Some people responded suggesting PHP or Java. These suggestions mostly came from their experience of building software, which is understandable.

My thoughts in this area are quite different, so I responded to the thread. I feel I should share that response with my readers here.

I am a bit surprised at the answers being offered so far. I understand the tendency to suggest technology that you have used or that you love. But a programming language/technology/platform does not offer a ready-made formula for performance and scalability. I am a .NET developer and have worked on software that scaled well and on software that faltered. 

Think of this - Twitter uses Ruby (RoR to be specific), and at one point in time, they determined that Ruby did not scale. What was that point? It was when more than 12 million tweets went through their network in the span of one hour as the results of the US elections were being declared. That point came for Twitter after being in business for more than 8 years. Until then, they were quite happy with Ruby. 

Facebook started with plain PHP and did well while its user base was below a few million. Eventually they found problems with PHP and went on to translate their PHP code into C++. I have even heard that they have built their own PHP implementation.

The examples can go on. But one common theme across all these companies is that they do not use a single technology or platform to build their services. They wisely choose tools and technologies based on a lot of factors, like the rate at which they are growing, usage patterns, the architecture of their product, the latest research and a lot more. 

With the above background, I would summarize with the following:

1. Do not rely on a particular technology to offer scalability/performance. It is carefully designed software combined with carefully designed hardware infrastructure that delivers scalability and performance, not a programming language
2. Start with a programming language that you know, or one for which you think it is easy to hire the best people. For founders of start-ups, it is best to know some programming in the beginning
3. Build the product, launch it and see how it goes. If you are a hit, then you will have a lot of cash to look back at your initial implementation and make changes where needed
4. Be open to using more than one programming language/technology (this is called polyglot programming)
5. Keep an eye on new frameworks/libraries and assess how you can benefit from them
6. Follow practices like TDD, along with Agile, that put you in a position to make major changes to your code-base cheaply and in the least amount of time
Building software that scales is more of an art. It takes various iterations of fine tuning, swapping various pieces of code in and out to see what works and what does not. There are examples of high-performing websites built using almost every web technology that exists under the sun. The reason they scale is that they use the right technology at the right layer, along with aids like CSS/JavaScript minification, caching wherever possible, following correct HTTP semantics, building services in a way that minimizes chattiness etc. There is no "one programming language" solution to the problem, and the solution does not come from following a particular style of coding.

Saturday, November 16, 2013

IamA Microsoft ASP.NET and Web Tools Team (and Azure) on reddit

Yesterday, Scott Hanselman, Damian Edwards and Mads Kristensen from Microsoft were on Reddit answering people's questions about the future of a lot of things around web/cloud development at Microsoft. The transcript was quite informative and interesting to read. If you are a web developer using any of Microsoft's technologies or are passionate about the cloud, I highly recommend going through the transcript here - AMA

I am yet to go through the whole transcript but here are some of the exciting bits I got to know about from the chat


  1. They talked about how the OSS initiative started at Microsoft, how Phil Haack and ScottGu played an important role, and where the OSS initiative is headed. It was interesting to know that they have managed to open source a lot of things in the last couple of years. It was also nice to know that they promote external open source .NET based web frameworks like Nancy, ServiceStack and Oak.
  2. The Katana project is getting serious attention. Do not be surprised if one day ASP.NET MVC works completely off Katana without any dependency on System.Web. If you have no clue what I am talking about - System.Web is part of the BCL and is not open sourced, whereas ASP.NET MVC is open sourced. MVC takes a dependency on System.Web as of now, which is a problem because their release cycles are not synchronised and the MVC team cannot add new features to System.Web. If you are interested, the roadmap for the Katana project is here 
  3. Rewriting the project file structure - Microsoft is listening to people's complaints about the pains of the XML based project file. They are working on a new project file format that enables more collaboration with fewer conflicts when committing project files. It is not clear when this would be released.
  4. Offline NuGet - Again, they are listening. They are working on ideas to make NuGet available offline. Again, no news on when this would be available, but the effort is ongoing.
  5. Folks at Microsoft are interested in seeing the remote debugging in browsers initiative succeed. Visit their website to know more about the initiative. It would be a good boost to web developers' productivity if this initiative sees the light of day.
  6. BrowserLink and the Side-Waffle project - There are a lot of interesting things happening around BrowserLink and the Side-Waffle project. Take a look at Side-Waffle's GitHub page
  7. SASS and LESS support in Visual Studio - VS2013 has a SASS editor to make the lives of the graphic designers among us easy. And they are actively working on a LESS editor for an upcoming release.
  8. TypeScript - The TypeScript team is rebuilding the Visual Studio tooling for TypeScript from the ground up in order to offer a richer experience. They are hoping to release TypeScript 1.0 and the tooling support in Visual Studio together
  9. ASP.NET Web Pages - Mind you, this is not Web Forms. I did not know this existed. Take a look here - Web Pages
  10. What's happening in the world of Visual Studio
  11. Microsoft's Partnership with Xamarin and what's in future around building cross-platform mobile apps in C#
Besides the above, they talk a lot about the future of EF, MVC and SignalR in general, what helped them reach where they are, life at Microsoft and a lot of other things. The transcript is worth reading. Here is the link again - http://www.reddit.com/r/IAmA/comments/1qp91h/iama_we_are_microsoft_aspnet_and_web_tools_team/