
Setting up a continuous integration environment with TFS is quite easy and free. In this post I’ll go over the details of setting up a CI environment. The tools I will use are:

  • Visual Studio 2015 Community Edition
  • TFS 2015 Express
  • AWS EC2 instance (To install TFS)
  • AWS SES (To send notification mails)

Step 01: Download and install TFS Express 2015

I always prefer the offline installer, so I downloaded the whole ISO just in case (it’s 891MB, so it might be a good time for a coffee break!)

Installation is a standard next-next-finish affair, so nothing particular about it. Accepting all defaults yields a working TFS instance.

Step 02: Connect to TFS

From the Team menu select Manage Connections. This will open Team Explorer, in which we can choose the TFS instance.

It can connect to TFS Online or GitHub as well, but for this example I will use the TFS instance we have just installed. We first need to add it to the TFS server list.

If you are using EC2 like I did don’t forget to allow inbound traffic on port 8080 beforehand.

Note that we don’t have a Team Project yet; what we have connected to is the Project Collection. A collection is an additional abstraction layer used to group related projects. Using the Default Collection generally works out just fine for me.

Step 03: Configure workspace

Since a local copy of the source needs to be stored on the development machine, we have to map the project to a local folder.

Just pick an empty local folder. Leaving the path as “$/” means the entire collection will be mapped to this folder.

Click Map & Get and you will have a blank workspace.

Step 04: Create team project

Now we can create a team project on TFS. Click Home -> Projects and My Teams, then New Team Project.

Just following the wizard defaults selects the Scrum methodology and Team Foundation Version Control as the source control system. Starting with TFS 2013, Git is also supported.

After a few minutes, our team project is ready.

Step 05: Clone and start developing

I chose Git as the version control system. If you use TFSVC the terminology you’ll see is a bit different, but since the main focus of this post is establishing continuous integration it doesn’t matter much as long as you can check in the source code.

So now that we have a blank canvas let’s start painting! I added a class library with a single class like the one below:

public class ComplexMathClass
{
    public int Add(int x, int y)
    {
        return x + y;
    }

    public double Divide(int x, int y)
    {
        // Integer division: throws DivideByZeroException when y is 0
        return x / y;
    }
}

and 3 unit test projects (NUnit, XUnit and MSTest).

NUnit tests:

public class ComplexMathClassTests
{
    [TestCase(1, 2, ExpectedResult = 3)]
    [TestCase(0, 5, ExpectedResult = 5)]
    [TestCase(-1, 1, ExpectedResult = 0)]
    public int Add_TwoIntegers_ShouldCalculateCorrectly(int x, int y)
    {
        var cmc = new ComplexMathClass();
        return cmc.Add(x, y);
    }

    [Test]
    // [ExpectedException(typeof(DivideByZeroException))]
    public void Divide_DenominatorZero_ShouldThrowDivideByZeroException()
    {
        var cmc = new ComplexMathClass();
        double result = cmc.Divide(5, 0);
    }
}

XUnit tests:

public class ComplexMathClassTests
{
    [Theory]
    [InlineData(1, 2, 3)]
    [InlineData(0, 5, 5)]
    [InlineData(-1, 1, 0)]
    public void Add_TwoIntegers_ShouldCalculateCorrectly(int x, int y, int expectedResult)
    {
        var cmc = new ComplexMathClass();
        int actualResult = cmc.Add(x, y);
        Assert.Equal<int>(expectedResult, actualResult);
    }

    [Fact]
    public void Divide_DenominatorZero_ShouldThrowDivideByZeroException()
    {
        var cmc = new ComplexMathClass();
        cmc.Divide(5, 0);
    //  Assert.Throws<DivideByZeroException>(() => cmc.Divide(5, 0));
    }
}

MSTest tests:

[TestClass]
public class ComplexMathClassTests
{
    [TestMethod]
    // [ExpectedException(typeof(DivideByZeroException))]
    public void Divide_DenominatorZero_ShouldThrowDivideByZeroException()
    {
        var cmc = new ComplexMathClass();
        cmc.Divide(5, 0);
    }
}

Unfortunately MSTest still doesn’t support parameterized tests, which is a shame IMHO. That’s why I was never a big fan of it, but I added it to this project for the sake of completeness.

I commented out the lines that expect an exception in order to fail those tests. So now the setup is complete: we have a working project with a failing test. Our goal is to get alert notifications about the failing test whenever we check in our code. Let’s proceed to the next steps to accomplish this.

Step 06: Building on the server

As of TFS 2015 they renamed Build Configuration to XAML Build Configuration for some reason and moved it under the Additional Tools and Components node, but everything else seems to be the same.

The default configuration installs one build controller and one agent for the Default Collection, so for this project we don’t have to add or change anything.

Each build controller is dedicated to a team project collection. The controller performs lightweight tasks such as determining the name of the build and reporting its status. Agents are the heavy lifters and carry out the processor-intensive work of the build process. Each agent is controlled by a single controller.

In order to build our project on the build server we need a build definition first. We can create one by selecting Build -> New Build Definition.

  • One important setting is the trigger: by default the build is not triggered automatically, which means soon enough it will wither and die all forgotten! To automate the process we have to select the Continuous Integration option.

  • In order to trigger the build automatically, the branch that is to be built must be added to the monitored branches list in the Source Settings tab.

  • The last required setting is failing the build when a test fails. It’s a bit buried: you have to go to 3.1.1 Test Source and set “Fail build on test failure” to true.

Step 07: Notifications

At this point our build definition is triggered automatically, but we don’t get any notifications if the build fails (due to a compilation error, for example). TFS comes with an application called Build Notifications that pops up an alert, but it has to be installed on every developer machine, so I don’t like this solution.

A better approach is enabling e-mail notifications. To do that we first need to set up the Email Alert Settings for TFS: in the Team Foundation Server Express Administration Console select Application Tier, scroll down to the “Email Alert Settings” section and enter the SMTP credentials and server info there.

You can also send a test email to verify your settings.

The second and final stage is to enable project-based alerts. In the Visual Studio Team Explorer window select Home -> Settings -> Project Alerts. This will pop up a browser and redirect to the alert management page. There, select “Advanced Alert Management Page”. On the advanced settings page there are a few pre-set notification conditions, and build failure is at the top of the list!

I intentionally broke the build and checked in my code, and within a minute, as expected, I received the following email:

With that we have automated the notifications. We already set the build to fail upon test failure in the previous step. The final step is to run the tests on the build server to complete the circuit.

Step 08: Run tests on build server

Running tests on the server is very simple. For NUnit all we have to do is install the NUnitTestAdapter package:

Install-Package NUnitTestAdapter

After I committed and pushed my code with the failing test I got the notification in a minute:

Once I uncommented the ExpectedException line, the build succeeded.

For xUnit the solution is similar: just install the xUnit runner NuGet package and check in the code.

Install-Package xunit.runner.visualstudio

For MSTest it works out of the box so you don’t have to install anything.

In the previous version of TFS I had to install Visual Studio on the build server, as advised in this MSDN article. It seems that in TFS 2015 you don’t have to do that anymore. The only benefit of using MSTest (as far as I can see, at least) is that it’s baked in so you don’t have to install extra packages: you create a Unit Test project and all your tests are immediately visible in the Test Explorer and automatically run on the server.

Wrapping up

Continuous Integration is a crucial part of the development process. Team Foundation Server is a powerful tool with lots of features. In this post I tried to cover basics from installation to setting up a basic CI environment. I’ll try to cover more features in the future but my first priority will be new AWS services CodeDeploy, CodeCommit and CodePipeline. As I’m trying to migrate everything to AWS having all projects hosted and managed on AWS would make more sense in my case.



If you don’t have a static IP and you need to access your home network for some reason, you may find Dynamic DNS services useful. I was using No-IP until its limitations became off-putting. Since I love DIY stuff and AWS, I thought I could build the same service on my own using Route 53.

Enter DynDns53

It’s a simple application. What it does is basically:

  • Get current external IP
  • For each subdomain in the configuration, call the AWS API and get the domain info
  • Check if the recorded IP is different from the current one
  • If so, call the AWS API to update it with the new one; if not, do nothing.
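The loop above can be sketched in a few lines of Python (the `get_external_ip` and `upsert` callables and the in-memory `records` dictionary are hypothetical stand-ins for the real IP checker and the Route 53 client):

```python
def update_records(records, get_external_ip, upsert):
    """Update every configured subdomain whose recorded IP is stale.

    records:         dict mapping subdomain -> currently recorded IP
    get_external_ip: callable returning the machine's external IP
    upsert:          callable(subdomain, ip) performing the API update
    """
    current_ip = get_external_ip()
    for subdomain, recorded_ip in records.items():
        if recorded_ip != current_ip:  # only touch stale records
            upsert(subdomain, current_ip)
            records[subdomain] = current_ip
    return records
```

The point of comparing first is to avoid issuing an UPSERT (and an API call) when nothing has changed, which matters for a loop that runs every few minutes.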

In a different post on setting up Calibre on a Raspberry Pi I developed a script that did the same thing:

import boto.route53
import urllib2

# Get the current external IP (strip the trailing newline)
currentIP = urllib2.urlopen("http://checkip.amazonaws.com/").read().strip()

conn = boto.connect_route53()
zone = conn.get_zone("volki.info.")
change_set = boto.route53.record.ResourceRecordSets(conn, '{HOSTED_ZONE_ID}')

for rrset in conn.get_all_rrsets(zone.id):
    if rrset.name == 'calibre.volki.info.':
        # UPSERT the record with the current IP and a short TTL
        u = change_set.add_change("UPSERT", rrset.name, rrset.type, ttl=60)
        u.add_value(currentIP)
        results = change_set.commit()

What sets DynDns53 apart is that it can run as a background service, and it has a cute logo! The C# equivalent of the code above looks like this:

public void Update()
{
    var config = _configHandler.GetConfig();
    string currentExternalIp = _ipChecker.GetExternalIp();

    foreach (var domain in config.DomainList)
    {
        string subdomain = domain.DomainName;
        string zoneId = domain.ZoneId;

        var listResourceRecordSetsResponse = _amazonClient.ListResourceRecordSets(new ListResourceRecordSetsRequest() { HostedZoneId = zoneId });
        var resourceRecordSet = listResourceRecordSetsResponse.ResourceRecordSets.First(recordset => recordset.Name == subdomain);
        var resourceRecord = resourceRecordSet.ResourceRecords.First();

        if (resourceRecord.Value != currentExternalIp)
        {
            resourceRecord.Value = currentExternalIp;

            _amazonClient.ChangeResourceRecordSets(new ChangeResourceRecordSetsRequest()
            {
                HostedZoneId = zoneId,
                ChangeBatch = new ChangeBatch()
                {
                    Changes = new List<Change>() 
                    { 
                        new Change(ChangeAction.UPSERT, resourceRecordSet)
                    }
                }
            });
        }
    }
}

That’s it. The rest is just the user interface, a Windows service and some basic unit tests.

Usage

The project has a WPF user interface and a Windows service. Both are using the same library (DynDns53.Core) so the functionality is the same. Using the user interface is pretty straightforward.

Just add your AWS keys that have access to your domains. You can set the application to run at system start so you don’t have to start it manually.

I built the Windows service using Topshelf. To install it, build the application and run the following in an elevated command prompt:

DynDns53.Service.exe install

Similarly, to uninstall:

DynDns53.Service.exe uninstall

What’s missing

  • Landing page: I’ll update the project details on the site generated by GitHub Pages.
  • Mac/Linux support: As the worlds are colliding I’d like to discover what I can do with the new .NET Core runtime. Since this is a simple project I think it’s a good place to start.



Part 3: Consuming the APIs

So far in the series:

In this installment it’s time to implement the actual application that consumes the TLD API as well as the AWS Route 53 API. I decided to turn this into a little project with a name, logo and everything. I’ll keep updating the project. Currently it just has the core library and a console application. I know it’s not enough, but you have to start somewhere!

Core functionality

Basically what it does is:

  1. Get the supported TLD list

  2. For each TLD send a CheckDomainAvailability request to AWS

  3. Return the resultset

When I first started out I meant to make these requests run in parallel, but they got throttled and I ended up using all my allowance. So I changed the code to make one call per second.
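The throttling fix boils down to spacing the requests out. A minimal sketch in Python (the `check_availability` callable is a hypothetical stand-in for the AWS availability call):

```python
import time

def check_all(tlds, check_availability, delay=1.0):
    """Check one TLD per `delay` seconds to stay under the API throttle."""
    results = {}
    for i, tld in enumerate(tlds):
        results[tld] = check_availability(tld)
        if i < len(tlds) - 1:  # no need to sleep after the last call
            time.sleep(delay)
    return results
```

A fixed sleep is the simplest approach; a token-bucket limiter would allow short bursts, but for a few hundred TLDs checked once per run this is good enough.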

The TldProvider I developed can be used in two different ways:

  1. As a NuGet Package

I uploaded my package to NuGet.org so you can install it by running

Install-Package TldProvider.Core.dll

and the usage would be:

private List<Tld> GetTldListFromLibrary()
{
    string url = ConfigurationManager.AppSettings["Aws.Route53.DocPageUrl"];
    var tldListProvider = new TldListProvider();
    var supportedTLDlist = tldListProvider.GetSupportedTldList(url);
    return supportedTLDlist;
}
  2. As an API

The usage is a bit more complex because of the JSON-to-list conversion:
private List<Tld> GetTldListFromApi()
{
    string apiUrl = ConfigurationManager.AppSettings["TldProvider.Api.EndPoint"];
    var httpClient = new HttpClient();
    var request = new HttpRequestMessage() { Method = new HttpMethod("GET"), RequestUri = new Uri(apiUrl) };
    var httpResponse = httpClient.SendAsync(request).Result;
    var apiResponse = httpResponse.Content.ReadAsStringAsync().Result;
    // Parse the "tldList" array out of the JSON response
    var resultAsStringList = JToken.Parse(apiResponse)["tldList"].ToObject<List<string>>();
    var resultAsTldList = resultAsStringList.Select(t => new Tld() { Name = t }).ToList();
    return resultAsTldList;
}

Actually this method should not return List&lt;Tld&gt;, as that requires a reference to the library. The idea of using the API is to get rid of that dependency in the first place, but I just wanted to add both and use them interchangeably.

So finally the main course: the Check method that loops through the TLD list and gets the availability from the AWS Route 53 API:

public List<DomainCheckResult> Check(string domainName)
{
    string ACCESS_KEY = ConfigurationManager.AppSettings["Aws.Route53.AccessKey"];
    string SECRET_KEY = ConfigurationManager.AppSettings["Aws.Route53.SecretKey"];

    var results = new List<DomainCheckResult>();

    var credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_KEY);
    var config = new AmazonRoute53DomainsConfig()
    {
        RegionEndpoint = RegionEndpoint.USEast1
    };

    var amazonRoute53DomainsClient = new AmazonRoute53DomainsClient(credentials, config);

    // var supportedTLDlist = GetTldListFromLibrary();
    var supportedTLDlist = GetTldListFromApi();

    foreach (var tld in supportedTLDlist)
    {
        var request = new CheckDomainAvailabilityRequest() { DomainName =  $"{domainName}.{tld.Name}" };

        var response = amazonRoute53DomainsClient.CheckDomainAvailabilityAsync(request);
        var result = response.Result;
        Console.WriteLine($"{request.DomainName} --> {result.Availability}");
        results.Add(new DomainCheckResult()
        {
            Domain = domainName,
            Tld = tld.Name,
            Availability = result.Availability,
            CheckDate = DateTime.UtcNow
        });

        System.Threading.Thread.Sleep(1000);
    }

    return results;
}

Troubleshooting

I had a few problems while implementing this and learned a few things while fixing them, so here they are:

  • I created an IAM account that has access to Route53, but that wasn’t enough. There is a separate action called route53domains, and I had to grant access to this as well in order to use the domains API.

  • I use the EUWest1 region since it’s the closest one. Normally Route53 doesn’t require any region setting, but weirdly I had to set the region to USEast1 in order to access the Route53Domains API.

The problems above make me think the domains API doesn’t fit very well with the current Route53. It feels like it has not been integrated seamlessly yet.

First client: Console application

This used to be the test application, but since I don’t have any other user interface at the moment it will be my first client. It just accepts the domain name from the user, checks the availability and saves the results in JSON format. This is all the code of the console application:

static void Main(string[] args)
{
    if (args.Length == 0)
    {
        Console.WriteLine("Usage: DomChk53.UI.Console {domain name}");
        return;
    }

    string domain = args[0];
    var checker = new DomainChecker();
    var results = checker.Check(domain);

    string outputFilename = $"results-{domain}-{DateTime.UtcNow.ToString("yyyyMMdd-HHmmss")}.json";
    string json = Newtonsoft.Json.JsonConvert.SerializeObject(results);
    File.WriteAllText(outputFilename, json);
    Console.WriteLine($"Saved the results to {outputFilename}");
}

It also displays the result for each domain on the console so you can keep track of what it’s doing.

It saves the results with a timestamp so a history can be maintained:

  {
    "Domain": "volkan",
    "Tld": "cool",
    "Availability": "AVAILABLE",
    "CheckDate": "2015-08-05T12:31:58.40081Z"
  },
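The save step is easy to reproduce in any language; here is a rough Python equivalent of the console app’s timestamped-filename scheme (the filename pattern mirrors the C# code above):

```python
import json
from datetime import datetime, timezone

def save_results(domain, results, now=None):
    """Write results to a results-{domain}-{yyyyMMdd-HHmmss}.json file."""
    now = now or datetime.now(timezone.utc)
    filename = "results-%s-%s.json" % (domain, now.strftime("%Y%m%d-%H%M%S"))
    with open(filename, "w") as f:
        json.dump(results, f, indent=2)
    return filename
```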

What’s missing

It’s not much, but at least it delivers what it promises: you give it a domain name and it checks the availability for all TLDs supported by AWS Route 53 and returns the results. But there’s a lot to do to improve this project. Some items on my list:

  • A web-based user interface that allows viewing these results online.
  • Get the price list for each TLD as well and display it along with the name
  • Registering selected domains: It would be nice if you could buy the available domains using the same interface
  • A landing page at domchk53.com so that it can have links to the source code, online checker etc.

Conclusion

The reason I started this project was to overcome the limitations of the AWS Route 53 domains UI and be able to search all supported TLDs at once. To consider the project complete I will prepare a single-page website about the project and develop a nice user interface. But in terms of basic functionality it gives me what I set out for, so I can think of it as a small success, I guess.
