synology, music, streaming comments edit

I’ve been a Spotify customer for quite a long time but recently realized that I wasn’t using it enough to justify 10 quid per month. Amazon made a great offer of a 4-month subscription for only £0.99 and I’m trying that out now, but the quality of the service hasn’t impressed me so far. Then it dawned on me: I already have lots of MP3s from my old archives, I have a fast internet connection and I have a Synology. Why not just build my own streaming?

One device to rule them all: Synology

Every day I’m growing more fond of my Synology and regretting all that time I didn’t utilize it fully.

For streaming audio, we need server and client software. The server side comes with Synology: Audio Station.

The Server

Using Synology Audio Station is a breeze. You simply connect to the Synology over the network and copy your albums into the music folder. Try to have the cover art named “cover.jpg” so that your albums show up nicely in the user interface.

The Client

Synology has a suite of iOS applications which are available in the Apple App Store. The one I’m using for audio streaming is called DS Audio.

Using Synology’s Control Panel you can create a specific user for listening to music only. This way, even if that account is compromised, the attacker will only have read-only access to your music library.

Connecting to the server

There are two ways of connecting to your server:

  1. Dynamic DNS
  2. Quick Connect (QC)

Dynamic DNS is built-in functionality but you’d need a Synology account. Basically your Synology pings their server so that it can detect IP changes.

QC is the way I chose to go. It’s a proprietary technology by Synology. The nice thing about QC is that when you are connected to your local network it uses the internal IP, so it doesn’t use mobile data. When you’re outside it uses the external IP and connects over the Internet.


  • You can download all the music you want from your own library without any limitations. There’s no limit set for manual downloads. For automatic downloads you can choose anything from no caching to caching everything, or a fixed size from 250MB to 20GB.
  • When you’re offline you don’t need to log in. On the login form there’s a link to Downloaded Songs so you can skip logging in and go straight to your local cache.
  • You can pin your favourite albums to home screen.
  • Creating a playlist or adding songs to playlists is cumbersome (on iPhone at least):
    • Select a song and tap on … next to the song
    • Tap Add. This will add your song to the play queue.
    • Tap on Play button on top right corner.
    • Tap playlist icon on top right corner.
    • Tap the same icon again, which is now on the top left corner, to go into edit mode.
    • Now tap on the radio buttons on the left of the songs to select.
    • When done, tap on the icon on the bottom left corner. This will open the Add to Playlist screen (finally!)
    • Here you can choose an existing playlist or create a new one by tapping the + icon.

Considering how easily this can be done in the Spotify client, this flow really needs to be improved.

  • In the library or Downloaded Songs sections, you can organise your music by Album, Artist, Composer, Genre and Folder. Of course in order for Artist/Composer/Genre classification to work you have to have your music properly tagged.
  • The client has a Radio feature with built-in support for SHOUTcast.


  • You can rate songs and there’s a built-in Top Rated playlist. By rating songs you can play your favourites without having to add them to playlists, which is a neat feature.


I think having full control over my own music is great, and even though the DS Audio client has some drawbacks it’s worth it as it’s completely free. You can also set it up as a secondary streaming service in addition to your favourite paid one, so that you have a backup solution.


dev comments edit

I have been a long-time Windows user. About 2 years ago I bought a MacBook but it never became my primary machine. Until now! I finally decided to steer away from Windows and use the MacBook for development and day-to-day tasks.

Tipping Point

One morning I woke up and found out that Windows had restarted itself again, without asking me. At the time I had a ton of open windows and a VMware virtual machine running, but none of that stopped Windows. It just abruptly shut down the VM, which was very annoying, and this wasn’t even the first time it had happened. So I decided to migrate completely to Mac. To give myself a better understanding of what it took and what was missing, I decided to compile this post.


I thought it would be a painful process but it turns out it was quite straightforward. Here’s a comparison of some key applications I use:

Email: Mailbird vs. Mail

On Windows I used Mailbird as my email client. It allows managing multiple accounts, has a nice GUI and works fine. I was wondering if there would be an equivalent on Mac and how much it would cost me (I paid about £25 for a lifetime Mailbird license, but apparently it’s now free). I didn’t have to look far: the built-in Mail application does the job very well. Adding a new Google account is a breeze.

MarkdownPad 2 vs. MacDown

I like MarkdownPad 2 on Windows but it has its flaws: the live preview constantly crashes and the free version only allows 4 open files. On Mac, I’m now using MacDown, which has a beautiful interface and is completely free.

Git Extensions vs. SourceTree

I do like Git Extensions and it’s one of the programs I wish I had on Mac, but SourceTree by Atlassian seems to do the job.

Storage: Google Drive and Synology

Both have web interfaces and Google Drive has desktop clients for both Mac and Windows so no issues in migrating there.


Sumatra PDF vs. Preview

On Windows, I like Sumatra PDF, which is very clean and bloatware-free. On Mac, there is no need to install anything. The default PDF viewer, Preview, is excellent. It even handles PDF merge and editing operations.

Virtual Desktops

I love using virtual desktops on Mac. Switching desktops is easy and intuitive with a three-finger swipe. Windows 10 now supports virtual desktops too, but switching is not as fluid, so using them never became a habit.

Visual Studio

Now this is the only application I cannot run on Mac. Microsoft has recently released Visual Studio for Mac, and they also have Visual Studio Code, which is a nice code editor, but they are both stripped-down versions. I don’t know if .NET Core will take off, but currently I use the full-blown .NET Framework, which only runs on Windows, so for development purposes I need to keep the Windows machine alive.

After the migration

I have absolutely no regrets about switching over. I love the MacBook: the keyboard is much better than my Asus’s and the OS is great. The Mac has 16GB of RAM but outperforms the Asus with 24GB (both have Core i7 processors and SSD drives).

Here are some more annoying things that used to bug me in the past about Windows:

  • Quite often I couldn’t delete a folder that used to have a video in it because the Thumbs.db file was in use.
  • I couldn’t change settings to disable Thumbs.db completely because Windows 10 Home edition didn’t allow me to do that.
  • I couldn’t upgrade to Windows 10 Pro even though I had a license for Windows 8.1 Pro. Trying to resolve the licensing issue I found myself going in circles and nothing worked.

Mac cons

There are a few things that I don’t like about Mac or miss from Windows:

  • On Windows, quite often I need to create a blank text file, then double-click and edit it. In Finder, you can only create a new folder. Apparently some scripting is required to overcome this as shown in the resources section below.
  • iCloud seems to be forced on me. I don’t want to use it, I don’t want to see it, but I cannot get rid of it. Trying to disable it is just confusing. I’ve now moved everything to a different folder that it’s not watching by default and I’m trying to ignore it completely.
  • Moving windows from display to display is hard, especially in my case as I have a 15.4” laptop screen and two external monitors at 27” and 40”. Since the size difference between these is huge, dragging a large window from the 40” monitor to the 15.4” screen gets messy because it doesn’t auto-resize and I can’t even reach the top of the window to resize it. But now that I’m using virtual desktops more frequently and using the 40” for multiple applications side by side, this is not as big of a problem these days.
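For the blank-file complaint above, a workaround until you script Finder is to create the file from Terminal (the path is just an example):

```shell
# Create an empty text file; Finder's "New" menu can't do this
mkdir -p ~/Desktop
touch ~/Desktop/untitled.txt
```

Tools like Automator can wrap a command like this into a Finder context-menu action.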

Going back?

There’s a lot to learn on Mac but I don’t think I’ll be going back anytime soon. I’m looking into virtualizing the Windows machine now so that I can decommission the laptop. I already converted my old Windows desktop into a Linux server, so I’d have no problem finding other uses for the laptop.

Microsoft made flop after flop starting with Windows 8, and now they have lost another user, but they don’t seem to care. If they did, they wouldn’t disrespectfully keep restarting my machine, killing all my applications and VMs!


dev comments edit

Nowadays many people use their phones as their primary web browsing device. As mobile usage is ubiquitous and still increasing, testing web applications on mobile platforms is becoming more important.

Chrome has a great emulator for mobile devices but sometimes it’s best to test your application on an actual phone.

If your application is the default site that you can access via IP address you’re fine, but the problem arises when you have multiple domains to test: at some point you’d need to enter the domain name in your phone’s browser.

Today I bumped into such an issue, and my solution involved one of my favourite devices in my household: the Synology DS214Play.

Local DNS Server on Synology

Step 01: First, I installed the DNS Server package by simply searching for DNS in Package Center and clicking Install.

Step 02: Then, I opened the DNS Server settings and created a new Master Zone. I simply entered the domain name of my site which is hosted on IIS on my development machine and the local network IP address of the Synology as the Master DNS Server.

Step 03: Next, I needed to point to the actual web server. In order to do that I created an A record with the IP address of the local server a.k.a. my development machine.

Step 04: For all the domains that my DNS server didn’t know about (which is basically everything else!) I needed to forward the requests to “actual” DNS servers. In my case I use Google’s DNS servers so I entered those IPs as forwarders.

Step 05: At this point the Synology DNS server is pointing to the web server and the web server is hosting the website. All that’s left is pointing the client’s (phone or laptop) DNS setting to the local DNS server.

Step 06: Now that it’s all set up, I could access my development machine using a locally-defined domain name from my phone:
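Conceptually, the zone created in Steps 02 and 03 boils down to a handful of records like these (the domain and IP addresses are placeholders, and Synology’s DNS Server generates the actual BIND zone file for you):

```
mysite.dev.       IN  NS  ns.mysite.dev.
ns.mysite.dev.    IN  A   192.168.1.10    ; the Synology (master DNS server)
mysite.dev.       IN  A   192.168.1.20    ; the development machine running IIS
```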


Another simple alternative on Windows laptops is to edit the hosts file under the C:\Windows\System32\drivers\etc folder, but when you have multiple clients in the network (e.g. MacBooks and phones), it’s simpler to point them all to the DNS server rather than editing every single device. And it’s more fun this way!
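For comparison, the hosts-file approach is a single line per domain on each device (again with placeholder values):

```
192.168.1.20    mysite.dev
```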


dev comments edit

I like playing around with PDFs, especially when planning my week. I have my daily plans and often need to merge them into a single PDF to print easily. Now that I’ve migrated to Mac for daily use, I can merge them very easily from the command line, as I described on my TIL site here.

Mac’s PDF viewer is great as it also allows you to simply drag and drop one PDF into another to merge them. Windows doesn’t have this kind of nicety, so I had to develop my own application to achieve it. I was planning to add more PDF operations, but since I’m not using it anymore I don’t think that will happen anytime soon, so I decided to open-source it.

It’s a very simple application anyway, but I hope it helps someone save some time.


It uses the iTextSharp NuGet package to handle the merge operation:

public class PdfMerger
{
    public string MergePdfs(List<string> sourceFileList, string outputFilePath)
    {
        using (var stream = new FileStream(outputFilePath, FileMode.Create))
        using (var pdfDoc = new Document())
        {
            var pdf = new PdfCopy(pdfDoc, stream);
            pdfDoc.Open();

            foreach (string file in sourceFileList)
            {
                pdf.AddDocument(new PdfReader(file));
            }
        }

        return outputFilePath;
    }
}
It also uses Fluent Command Line Parser, another of my favourite NuGet packages, to take care of the input parameters:

var parser = new FluentCommandLineParser<Settings>();
parser.Setup(arg => arg.RootFolder).As('d', "directory");
parser.Setup(arg => arg.FileList).As('f', "files");
parser.Setup(arg => arg.OutputPath).As('o', "output").Required();
parser.Setup(arg => arg.AllInFolder).As('a', "all");

var result = parser.Parse(args);
if (result.HasErrors)
{
    Console.WriteLine(result.ErrorText);
    return;
}

var p = new Program();

The full source code can be found in the GitHub repository (link down below).
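Given the switches defined above, a typical invocation would look something like this (the executable name and paths are hypothetical):

```
PdfMerge.exe -d "C:\plans" -a -o "C:\plans\merged.pdf"
PdfMerge.exe -f "monday.pdf,tuesday.pdf" -o "merged.pdf"
```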


dev comments edit

Slack is a great messaging platform and it can integrate very easily with C# applications.

Step 01: Enable incoming webhooks

First, go to the Incoming Webhooks page and turn on webhooks if they’re not already on.

Step 02. Create a new configuration

You can select an existing channel or user to post messages to, or you can create a new channel. (You may need a refresh for the new one to appear in the list.)

Step 03. Install Slack.Webhooks Nuget package

In the package manager console, run

Install-Package Slack.Webhooks

Step 04. Write some code!

var url = "{Webhook URL created in Step 2}";

var slackClient = new SlackClient(url);

var slackMessage = new SlackMessage
{
    Channel = "#general",
    Text = "New message coming in!",
    IconEmoji = Emoji.CreditCard,
    Username = "any-name-would-do"
};

slackClient.Post(slackMessage);


That’s it! Very easy and painless integration to get real-time desktop notifications.

Some notes

  • Even though you choose a channel while creating the webhook, in my experience you can use the same one to post to different channels. You don’t need to create a new webhook for each channel.
  • Username can be any text basically. It doesn’t need to correspond to a Slack account.
  • The first time you send a message with a username, it uses the emoji you specify in the message. You can leave it null, in which case it uses the default. On subsequent posts it keeps using the same emoji for that user even if you set a different one.
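Under the hood an incoming webhook is just an HTTP POST of a JSON document, which explains the first note above: the target channel travels inside the message, not the webhook. Here’s a sketch of the payload the library builds (field names per Slack’s webhook API; the values are examples):

```javascript
// Build the JSON payload an incoming webhook expects
function buildSlackPayload(channel, text, username, iconEmoji) {
  return JSON.stringify({
    channel: channel,      // "#general" today, "#dev" tomorrow - same webhook
    text: text,
    username: username,    // any display name; no Slack account required
    icon_emoji: iconEmoji  // e.g. ":credit_card:"
  });
}

const payload = buildSlackPayload(
  "#general", "New message coming in!", "any-name-would-do", ":credit_card:");
// POSTing `payload` to the webhook URL with Content-Type: application/json
// is essentially all the NuGet package does for you.
```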


dev comments edit

HTTP/2 is a major update to the HTTP/1.x protocol and I decided to spare some time to get a general idea of what it’s all about.

Here are my findings:

  • It’s based on SPDY (a protocol developed by Google, currently deprecated)
  • It uses the same methods, status codes etc., so it is backwards-compatible; the main focus is on performance.
  • The problem it is addressing is HTTP requiring a TCP connection per request.
  • Key differences:
    • It is binary rather than text.
    • It can use one connection for multiple requests
    • It allows servers to push responses to browser caches. This way the server can start sending assets before the browser parses the HTML and requests each of them (images, JavaScript, CSS etc.).
  • The protocol doesn’t have built-in encryption but currently Firefox, Internet Explorer, Safari, and Chrome agree that HTTPS is required.
  • There will be a negotiation process between the client and server to select which version to use
  • Wireshark has support for it but Fiddler doesn’t.
  • As speed is the main focus, it’s especially important for CDNs to support it. In September 2016, AWS announced HTTP/2 support in CloudFront. For existing distributions it needs to be enabled explicitly by updating the settings.

    AWS CloudFront HTTP/2 Support

  • On the client side it looks like it’s been widely adopted and supported. The browser support table below also confirms that it’s only allowed over HTTPS on all browsers that support it.

    HTTP/2 Browser Support

What Does It Look Like on the Wire?

As it’s binary I was curious, as a developer, to see what the actual bits looked like. Normally it’s easy to inspect HTTP requests/responses because it’s just text.

Apparently the easiest way to do it is with Wireshark. First, I had to enable session key logging by creating a user environment variable in Windows:

Windows environment variable to capture TLS session keys

and pointing Wireshark to that log file (Edit -> Preferences -> Protocols -> SSL).
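The variable in question is SSLKEYLOGFILE, which both Chrome and Firefox honour; the path is wherever you want the session keys written (the username below is a placeholder):

```
SSLKEYLOGFILE = C:\Users\<username>\sslkeys.log
```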

This is a very neat trick and it can be used to analyse all encrypted traffic, so it serves a broader purpose. After restarting the browser and Wireshark I was able to see the captured session keys, and by starting a new capture in Wireshark I could see the decrypted HTTP/2 traffic.

WireShark HTTP/2 capture

It’s hard to make sense of everything in the packets but I guess it’s a good start to be able to inspect the wire format of the new protocol.


ios, swift comments edit

I have a drawer full of gadgets that I bought at one point in time with hopes and dreams of magnificent projects and never even touched!

Some time ago I started a simple spreadsheet to help myself with impulse buys. The idea was that before I bought something, I had to put it on that spreadsheet and it had to wait at least 7 days before I allowed myself to buy it.

After 7 days, strange things started to happen: in most cases I realised I had lost my appetite for that shiny new thing I once thought was a definite must-have!

I kept listing all the stuff, but it quickly became hard to manage with just a spreadsheet.

Sleep On It

The idea behind the app is to automate and “beautify” that process a little. It has one Shopping Cart in which items have waiting periods.

It seemed wasteful to do nothing during the waiting period. After all, it’s not just about dissuading myself from buying new items; I should use that time to make informed decisions about the things I’m planning to buy. That’s why I added the product comparison feature.

The shopping cart has a limited size; otherwise you could add anything the moment you think of it, just to game the system and start its waiting period (well, at least that’s how my mind works!). If your cart is full you can still add items to the wish list and start reviewing products. It’s basically a backlog of items, so at least you won’t forget about that thing you saw in your favourite online marketplace. Once you clear up some space in your cart, either by waiting out an item or deleting it permanently, you can transfer items from the wish list to the cart and officially kick off the waiting period.

I have a lot of ideas to improve it, but you’ve got to release at some point and I think it has enough to get me started. I hope someone else finds it useful too.

If you’re interested in the app please contact me. I might be able to hook you up with a promo code.


ios, swift comments edit

When I started learning Swift for iOS development I also started to compile some notes along the way. This post is the first instalment of my notes. First some basic concepts (in no particular order):


  • Swift supports a REPL (Read-Evaluate-Print Loop), so you can write code and get feedback very quickly using an Xcode Playground or the command line.

As seen in the screenshot, there is no need to explicitly print the values; they are automatically displayed on the right-hand side of the screen.

A script can also be executed without specifying the interpreter on the command line by adding #!/usr/bin/swift at the top of the file.


Swift supports C-style comments like // for single-line comments and /* */ for multi-line comments.

The great thing about multi-line comments is that you can nest them. For example, the following is a valid comment:

/* This is a 
    /* valid multi-line */
    comment that is not available in C#
*/
Considering how many times Visual Studio punished me while trying to comment out a block of code that had multi-line comments in it, this feature looks fantastic!

It also supports doc comments (///) with Markdown. It even supports emojis (Ctrl + Cmd + Space brings up the emoji keyboard).


Standard libraries are imported automatically, but the main frameworks such as Foundation and UIKit need to be imported explicitly.

Swift 2 supports a new type of import which is preceded by @testable keyword.

@testable import CustomFramework

It allows access to non-public members of a class so that you can use them from a unit test target. Before this, members needed to be public in order to be testable.


The built-in string type is String. There is also NSString in the Foundation framework. They can sometimes be used interchangeably; for example you can assign a String to an NSString, but the opposite is not valid. You have to cast it explicitly to String first:

import Foundation

var string : String = "swiftString"
var nsString : NSString = "foundationString"

nsString = string // Works fine
string = nsString as String // Wouldn't work without the cast
  • startIndex is not an Int but an index object. To get the character after the first one:

      s[s.startIndex.successor()]

    To get the last character:

      s[s.endIndex.predecessor()]

    For a specific position:

      s[advance(s.startIndex, 1)]

let vs. var

Values created with the let keyword are immutable, so let is used to create constants. Variables are created with the var keyword. If you try to reassign a value created with let, the code won’t compile:

let x1 = 7
x1 = 8 // won't compile

var x2 = 10
x2 = 11 // this works

The same principle applies to arrays:

let x3 = [1, 2, 3]
x3.append(4) // no go!

Type conversion

Types are inferred and there is no need to declare them when declaring a variable. However, there are no implicit conversions between numeric types; you have to convert explicitly:

let someInt = 10
let someDouble = 10.0
let x = someDouble + Double(someInt)

Structs and Classes

  • Structs are value types and a copy of the value is passed around. Classes are reference types.

  • Constructors are called initializers and they are special methods named init. You must specify an init method or provide default values for properties when declaring the class.

class Person {
  var name: String = ""
  var age: Int = 0

  init(name: String, age: Int) {
    self.name = name
    self.age = age
  }
}

  • There is no new operator. So declaring a new object looks simply like this:

let p = Person(name: "John", age: 30)
  • The equivalent of destructor is deinit method. Only classes can have deinitializers.


Array: An ordered list of items

  • An empty array can be declared in a verbose way such as

      var n = Array<Int>()

    or with the shorthand notation

      var n = [Int]()
  • An array with items can be initialized with

      var n = [1, 2, 3]
  • Arrays can be concatenated with +=

      n += [4, 5, 6]
  • Items can be added with the append method

      n.append(7)
      print(n) // -> "[1, 2, 3, 4, 5, 6, 7]"
  • Items can be inserted at a specific index

      n.insert(8, atIndex: 3)
      print(n) // -> "[1, 2, 3, 8, 4, 5, 6, 7]"
  • Items can be deleted with removeAtIndex

      n.removeAtIndex(6)
      print(n) // -> "[1, 2, 3, 8, 4, 5, 7]"
  • Items can be accessed by their index

      let aNumber = n[2]
  • A range of items can be replaced at once

      var n = [1 ,2, 3, 4]
      n[1...2] = [5, 6, 7]
      print(n) // prints [1, 5, 6, 7, 4]"
  • 2-dimensional arrays can be declared as arrays of arrays, and multiple subscripts can be used to access sub-items

      var n = [ [1, 2, 3], [4, 5, 6] ]
      n[0][1] // value 2

Dictionary: A collection of key-value pairs

  • Can be initialized without items

      var dict = [String:Int]()

    or with items

      var dict = ["key1": 5, "key2": 3, "key3": 4]
  • To add items, assign a value to a key using subscript syntax

      dict["key4"] = 666
  • To remove an item, assign nil

      dict["key2"] = nil
      print(dict) // prints ["key1": 5, "key4": 666, "key3": 4]"
  • To update a value, the subscript can be used as when adding an item, or the updateValue method can be called.

    updateValue returns an optional. If it didn’t update anything, the optional contains nil, so it can be used to check whether the value was actually updated.

      var result = dict.updateValue(45, forKey: "key2")
      if let r = result {
          print (dict["key2"])
      } else {
          print ("could not update") // --> This line would be printed
      }
    The interesting behaviour is that if it can’t update it, it will add the new value.

      var dict = ["key1":5, "key2":3, "key3":4]
      var result = dict.updateValue(45, forKey: "key4")
      if let r = result {
          print (dict["key4"])
      } else {
          print ("could not update")
      }
      print(dict) // prints "["key1": 5, "key4": 45, "key2": 3, "key3": 4]"
                  // key4 has been added after calling updateValue

    After a successful update it would return the old value

      result = dict.updateValue(45, forKey: "key1")
      if let r = result {
          print (r) // --> This would run and print "5"
      } else {
          print ("could not update")
      }

    This is consistent with the unsuccessful update returning nil. It always returns the former value.

  • To get a value subscript syntax is used

      var i = dict["key1"] // 45

Set: An unordered list of distinct values

  • Initialization notation is similar to the others

      var emo : Set<Character> = [ "😡", "😎", "😬" ]
  • If duplicate items are added it doesn’t throw an error but prunes the list automatically

      var emo : Set<Character> = [ "😡", "😎", "😬", "😬" ]
      emo.count // prints 3
  • New items can be added with the insert method

      var emo : Set<Character> = [ "😡", "😎", "😬", "😬" ]
      emo.insert("😱")
      emo.insert("🤔")
      print(emo) // prints "["😱", "😎", "🤔", "😬", "😡"]"

    There is no atIndex parameter like arrays have, and the order is unpredictable as shown above.

Among the three, only arrays are ordered and can contain repeated values.


  • Semi-colons are not required at the end of each line

  • Supports string interpolation

  • Swift uses reference counting (ARC); there is no tracing garbage collector.

  • Curly braces are required even if there is only one statement in the body. For instance the following block wouldn’t compile:

      let x = 10
      if x == 10
          print("x is 10") // error: expected '{' after 'if' condition
  • The println function has been renamed to print. print adds a new line at the end automatically. This behaviour can be overridden by explicitly specifying the appendNewline parameter

      print ("Hello, world without a new line", appendNewline: false)
  • #available can be used to check compatibility

      if #available(iOS 9, *) {
          // use NSDataAsset
      } else {
          // Panic!
      }
  • Ranges can be checked with the … and ~= operators. For example:

      let x = 10
      if 1...100 ~= x {
          print("in range")
      }
    The variable is on the right in this expression. It wouldn’t compile the other way around.

  • There is a Range object that can be used to define, well, ranges!

      var ageRange = 18...45
      print(ageRange) // prints "18..<46"
      print(ageRange.count) // prints "28"

    The other range operator is ..< which doesn’t include the end value

      var ageRange = 18..<45
      ageRange.contains(45) // prints "false"


personal, leisure, travel comments edit

When April 6th, 2016 marked my 5th anniversary in the UK I thought I should do something special.

I don’t know if you have seen the movie The World’s End but I liked it a lot. Inspired by the Golden Mile concept I saw in that movie, my decision was to have 5 pints in 5 pubs in the neighbourhood: 1 pint for each year. It may sound unsustainable in the long run but I’ll cross that bridge when I come to it. Without further ado, here are the pubs I picked for my first annual celebration:

…and the winners are

The Dacre Arms

This one is definitely my favourite pub in the area. It has a very cosy and warm environment. It’s not on the main road so almost felt like discovering a hidden gem when I first noticed it. In the past it served as a nice harbour to get away from my loud and obnoxious neighbour and collect my thoughts and maintain my sanity.

Duke of Edinburgh

I don’t watch football games in pubs a lot but when I do this is my go-to pub. Nice big screen TVs all around. The last game I saw didn’t bring much joy though.

I remember Arsenal smashing Fenerbahce 3-0 in 2013 without even breaking a sweat. Part of the reason why I don’t watch football in pubs: no need for public humiliation!

The Old Tiger’s Head

The way I remembered this one, it was quite spacious with a pool table and a lot of seating. I used to come here a lot to work on my blog. Although I can’t tell the difference, I’ve been informed that this is an Irish pub. I stopped by on St. Paddy’s Day a few years ago and there was quite a colourful celebration going on. I guess that’s one way of identifying whether a pub is Irish or English!

This visit was a bit disappointing though, as the whole layout had changed. The pool table was gone and my favourite booth was removed. Oddly enough there were a few bookshelves, and there was even a family with two toddlers inside. It felt more like a cafe than a pub.

The Swan / The Rambles

My 4th stop was The Rambles. It used to be a pub called The Swan, which was the first pub I ever visited in the UK. It was run by two kind and friendly ladies. When you move to a different country and try to settle in, there are a lot of challenges to tackle and it can be overwhelming at first. The Swan was a nice refuge for me in those times to wind down and relax.

Unfortunately it closed down a few years ago and now there is a bar / comedy club named The Rambles. The club doesn’t mean much to me, except that I’d been there once before to see a comedy show, but in remembrance of The Swan I went there anyway. As it’s a bar now, it opens and closes much later than a pub, so I was the first customer there! It might be a good place to work in quiet and have a cold one if I’m looking to change venues.

Princess of Wales

I love Blackheath! It’s on a hill and a bit windy but it has a nice little lake and a great view.

I remember the Bonfire Night a few years ago which had a great fireworks show. Princess of Wales is a nice pub by the lake with a great view and I enjoyed my pint there quite a bit.

Same time, next year!

I don’t know if I should follow the same path and add a 6th pub to the end of the chain, start with a brand new set, or whether I’ll even be in the mood for a pub crawl, but I’ll decide that later. After all, it’s one of those nice problems to have!

development, aws, route53, angularjs comments edit

Previously in this series:

So far I was using the console client, but I thought I could use a prettier web-based UI and came up with this:

DomChk53 AngularJS Client

It’s built with AngularJS and Bootstrap, which significantly sped up the development process.

The backend is AWS API Gateway on a custom domain, using Lambda functions to do the actual work. One great thing about API Gateway is that it’s very easy to set request rates:

Currently I set it to a max of 5 requests per second. I chose this value because of the limitation on the AWS API, as stated here:

All requests – Five requests per second per AWS account. If you submit more than five requests per second, Amazon Route 53 returns an HTTP 400 error (Bad request). The response header also includes a Code element with a value of Throttling and a Message element with a value of Rate exceeded.

Of course, limiting this on the client side assumes a single client, so you may still get “Rate exceeded” errors even when running a single query at a time. I’m planning to implement a Node server using SQS to move the queue to the server side, but that’s not one of my priorities right now.
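The client-side limit amounts to something like the sliding-window check below (a simplified sketch, not the actual DomChk53 code):

```javascript
// Allow at most maxPerSecond requests in any one-second window
function createRateLimiter(maxPerSecond) {
  let stamps = [];
  return function tryAcquire(now = Date.now()) {
    stamps = stamps.filter(t => now - t < 1000); // keep only the last second
    if (stamps.length >= maxPerSecond) {
      return false; // over the limit - wait and retry
    }
    stamps.push(now);
    return true;
  };
}

const canSend = createRateLimiter(5);
// before each availability check: if (canSend()) { /* fire the request */ }
```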

The Lambda function is straightforward enough. It just calls the checkDomainAvailability API method with the supplied parameters:

exports.handler = function(event, context) {
    var AWS = require('aws-sdk');
    var options = {
        region: "us-east-1"
    };
    var route53domains = new AWS.Route53Domains(options);
    var params = {
        DomainName: event.domain + '.' + event.tld
    };

    route53domains.checkDomainAvailability(params, function (err, data) {
        if (err) {
            context.fail(err);
        } else {
            var result = {
                Domain: event.domain,
                Tld: event.tld,
                CheckDate: new Date().toISOString(),
                RequestResult: "OK",
                Availability: data.Availability
            };
            context.succeed(result);
        }
    });
};

I wanted this tool as an improvement over what AWS already provides. What you can do with the Management Console is search for a single domain, which it checks against the 13 popular TLDs. If you need anything outside these 13 you have to pick them manually.

In DomChk53 you can search multiple domain names at once against all supported TLDs (293 as of this writing).

Also you can group TLDs into lists so you can, for example, search the most common ones (com, net, etc.) and, say, finance-related ones (money, cash, finance etc.). Depending on the domain name, one group may be more relevant than another.

You can cancel a query at any time to avoid wasting precious requests if you change your mind about the domain.

What’s missing

For a while I’m planning to leave it as-is, but when I have it in me to revisit the project I will implement:

  • Server-side queueing of requests
  • The option to export/email the results in PDF format

I’m also open to other suggestions…


aws, route53, angularjs, development comments edit

I’ve been using my own dynamic DNS application (which I named DynDns53 and blogged about here). So far it had a WPF client and I was happy with it, but I thought if I could develop a web-based application I wouldn’t have to install anything (which is what I’m shooting for these days) and could achieve the same results.

So I built a JavaScript client with AngularJS framework. The idea is exactly the same, the only difference is it’s all happening inside the browser.

DynDns53 web client


To have a dynamic DNS client you need the following:

  1. A way to get your external IP address
  2. A way to update your DNS record
  3. An application that performs Step 1 & 2 perpetually

Step 1: Getting the IP Address

I have done and blogged about this several times now. (Feels like I’m repeating myself a bit, I guess I have to find something original to work with. But first I have to finish this project and have closure!)

Since it’s a simple GET request it sounds easy but I quickly hit the CORS wall when I tried the following bit:

app.factory('ExternalIP', function ($http) {
    return $http.get('', { cache: false });
});

In my WPF client I can call whatever service I want whenever I want but when running inside the browser things are a bit different. So I decided to take a detour and create my own service that allowed cross-origin resource sharing.

AWS Lambda & API Gateway

First I thought I could do it even without a Lambda function by using the HTTP proxy integration, simply returning what the external site returns:

Unfortunately this didn’t work because it was returning the IP of the AWS machine that’s actually running the API gateway. So I had to get the client’s IP from the request and send it back in my own Lambda function.

Turns out in order to get HTTP headers you need to fiddle with a mapping template and assign the client’s IP address to a variable:
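The mapping template itself (shown as a screenshot in the original post) boils down to something like this, using API Gateway’s built-in `$context` variables — a sketch, not necessarily the exact template used:

```json
{
    "ip": "$context.identity.sourceIp"
}
```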

This can later be referred to in the Lambda function through the event parameter:

exports.handler = function(event, context) {
    // echo the caller's IP back as the response
    context.succeed({
        "ip": event.ip
    });
};

And now that we have our own service we can allow CORS and call it from our client inside the browser:

Step 2: Updating DNS

This bit is very similar to the WPF version. Instead of the AWS .NET SDK I just used the JavaScript SDK. AWS has a great SDK builder which lets you select only the pieces you need:

It also shows if the service supports CORS. It’s a relief that Route53 does so we can keep going.

The whole source code is on GitHub but here’s the gist of it: loop through all the subdomains, get all the resource records in the zone, find the matching record and update it with the new IP:

  $scope.updateAllDomains = function() {
      // ($scope.domains and the property names below are assumed here;
      // see the GitHub repo for the actual model)
      angular.forEach($scope.domains, function(value, key) {
        $scope.updateDomainInfo(, value.zoneId);
      });
  };

  $scope.updateDomainInfo = function(domainName, zoneId) {
    var options = {
      'accessKeyId': $rootScope.accessKey,
      'secretAccessKey': $rootScope.secretKey
    };
    var route53 = new AWS.Route53(options);
    var params = {
      HostedZoneId: zoneId
    };

    route53.listResourceRecordSets(params, function(err, data) {
        if (err) {
          $rootScope.$emit('rootScope:log', err.message);
        } else {
          angular.forEach(data.ResourceRecordSets, function(value, key) {
              if (value.Name.slice(0, -1) == domainName) {
                // get the current external IP from the service in Step 1
                // (response shape assumed)
                ExternalIP.then(function (response) {
                  var externalIPAddress = response.data.ip;
                  $scope.changeIP(domainName, zoneId, externalIPAddress);
                });
              }
          });
        }
    });
  };

  $scope.changeIP = function(domainName, zoneId, newIPAddress) {
    var options = {
      'accessKeyId': $rootScope.accessKey,
      'secretAccessKey': $rootScope.secretKey
    };
    var route53 = new AWS.Route53(options);
    var params = {
      ChangeBatch: {
        Changes: [{
            Action: 'UPSERT',
            ResourceRecordSet: {
              Name: domainName,
              Type: 'A',
              TTL: 300,
              ResourceRecords: [{
                  Value: newIPAddress
              }]
            }
        }]
      },
      HostedZoneId: zoneId
    };

    route53.changeResourceRecordSets(params, function(err, data) {
      if (err) {
        $rootScope.$emit('rootScope:log', err.message);
      } else {
        var logMessage = "Updated domain: " + domainName + " ZoneID: " + zoneId +
            " with IP Address: " + newIPAddress;
        $rootScope.$emit('rootScope:log', logMessage);
      }
    });
  };

The only part that tripped me up was that I wasn’t setting the TTL in the changeResourceRecordSets parameters and kept getting an error, but a StackOverflow question helped me get past the issue.

Step 3: A tool to bind them

Now the fun part: an AngularJS client to call these services. The UI is straightforward; basically it just requires the user to enter their AWS IAM keys and the domains to update.

I didn’t want to deal with the hassle of sending the keys to a remote server and hosting them securely. Instead I thought it would be simpler to use the browser’s HTML5 local storage. This way the keys never leave the browser.

It also only updates the IP address if it has changed, which saves unnecessary API calls.

Also it’s possible to view what’s going on in the event log area.

I guess I can have my closure now and move on!


ios, swift comments edit

I’ve been working on iOS development with Swift for some time and finally I managed to publish my first app on the Apple iOS app store: NoteMap.

NoteMap on iTunes

It’s a simple app that allows you to take notes on the map. You can take photos and attach them to the note as well as text. I thought this might be helpful if you take ad hoc pictures and then forget when and why you took them.

The main challenge was working with the location manager, as you need permission to access the user’s location. You have to take into account all combinations of permissions, as they may be changed later in the phone’s settings.

I have a long list of features to add but wanted to keep this one simple enough just to do the bare essentials. As I hadn’t submitted an app before I wasn’t even sure it would make it to the store. But after waiting 6 days for the app to be reviewed, now it’s out there, which is a huge relief and motivation. I’ll make sure to keep at it and add more features to NoteMap and submit more!


aws, ssl, aws certificate manager, acm comments edit

Paying a ton of money for a digital certificate, which costs nothing to generate, has always bugged me. Fortunately it isn’t just me, and recently I heard about Let’s Encrypt in this blog post.

I was just planning to give it a go but I noticed a new service on AWS Management Console:

Apparently AWS is now issuing free SSL certificates, which was too tempting to pass up, so I decided to dive in.

Enter AWS Certificate Manager

Requesting a certificate just takes seconds as it’s a 3-step process:

First, enter the list of domains you want the certificates for:

Wildcard SSL certificates don’t cover the zone apex so I had to enter both. (Hey it’s free so no complaints here!)

Then review, confirm, and the request is made:

A verification email is sent to the email addresses listed in the confirmation step.

At this point I could define MX records and use Google Apps to create a new user and receive the verification email. The problem is I don’t want all this hassle and certainly don’t need another email account to monitor.

SES to the rescue

I’d always considered SES a simple SMTP service for sending emails, but while dabbling with alternatives I realized that it can now receive emails too!

To receive emails you need to verify your domain first, and an MX record pointing to AWS’s SMTP server must be added. Fortunately, since everything here is AWS, it can be done automatically using Route53:
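For reference, the record Route53 creates points at SES’s regional inbound endpoint. Assuming us-east-1 (the host is region-specific, per the SES docs; `` is a placeholder), it looks roughly like this:

```     MX    10
```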

After this we can move on; we’ll receive a confirmation email once the domain has been verified:

In the next step we decide what to do with the incoming mail. We can bounce it, call a Lambda function, create an SNS notification, etc. These all sound fun to experiment with, but in this case I’ll opt for simplicity and just drop the messages into an S3 bucket.

The great thing is I can even assign a prefix, so I can use a single bucket to collect emails from a bunch of different addresses, all separated into their own folders.

In step 3, we can specify more options. Another pleasant surprise was to see spam and virus protection:

After reviewing everything and confirming, we are ready to receive emails into our bucket. In fact the nice folks at AWS are so considerate that they have already sent us a test email:

Back to certificates

OK, after a short detour we are back to getting our SSL certificate. As I didn’t have my mailbox set up during the validation step I had to go to the actions menu and select Resend validation email.

And after requesting it I immediately received the email containing a link to verify ownership of the domain.

After the approval process we get ourselves a nice free wildcard SSL certificate:

Test drive

To leverage the new certificate we need to use CloudFront to create a distribution. Here again we benefit from the integrated services: the certificate we were issued can be selected from the dropdown list:

So after entering simple basics like the domain name and default page I created the distribution and pointed the Route53 records to this distribution instead of the S3 bucket.

And finally, after waiting (quite a bit) for the CloudFront distribution to be deployed, we can see that little green padlock we’ve been looking forward to!

UPDATE 1 [03/03/2016]

Yesterday I was so excited about discovering this that I didn’t look any further into things like downloading the certificate and using it on your own servers.

Today, unfortunately, I realized that its usability is quite limited: it only works with AWS Elastic Load Balancer and CloudFront. I was hoping to use it with API Gateway, but even though that’s another AWS service it’s not integrated with ACM yet.

I do hope they make the certificate bits available so we can have full control over them and deploy them wherever we want. So I guess Let’s Encrypt is a better option for now, considering this limitation.


c#, development, gadget, fitbit, aria comments edit
"What gets measured, gets managed." - Peter Drucker

It’s important to have goals, especially SMART goals. The “M” in S.M.A.R.T. stands for Measurable. Having enough data about a process helps tremendously in improving that process. To that end, I started collecting exercise data from my Microsoft Band, which I blogged about here.

Weight tracking is also crucial for me. I used to record my weight manually on a piece of paper but, for obvious reasons, I abandoned that quickly and decided to give the Fitbit Aria a shot.

Fitbit Aria Wi-Fi Smart Scale

Aria is basically a scale that connects to your Wi-Fi network and sends your weight readings to Fitbit automatically, where they can be viewed via the Fitbit web application.


Since it doesn’t have a keyboard or any other way to interact with it directly, setup is carried out by running a program on your computer.

It’s mostly just following the steps in the setup tool. You basically let it connect to your Wi-Fi network so that it can synchronize with the Fitbit servers.

Putting the scale into setup mode proved to be tricky in the past though. It was also not easy to change the Wi-Fi network, so I had to reset the scale back to factory settings and run the setup tool again.

Getting the data via API

Here comes the fun part! Similar to my MS Band workout demo, I developed a WPF program to get my data from Fitbit’s API. Ultimately the goal is to combine all these data in one application and make sense of it.

Like MS Health API, FitBit uses OAuth 2.0 authorization and requires a registered application.

The endpoint that returns weight data accepts a few different formats depending on your needs. As I wanted a range instead of a single day I used the following format:{user ID}/body/log/weight/date/{startDate}/{endDate}.json
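As a sketch, building that range path for a given window could look like this (the helper is hypothetical; the path segments follow Fitbit’s Web API docs, with the host omitted as in the post):

```javascript
// Build the weight-log resource path for a date range.
// Dates must be yyyy-MM-dd strings, matching the format above.
function weightLogPath(userId, startDate, endDate) {
  return '/1/user/' + userId + '/body/log/weight/date/' +
      startDate + '/' + endDate + '.json';
}

// weightLogPath('-', '2016-01-01', '2016-01-31')
// → '/1/user/-/body/log/weight/date/2016-01-01/2016-01-31.json'
```

('-' is Fitbit’s shorthand for the currently authorized user.)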

This call returns an array of the following JSON objects:

{
	"bmi": xx.xx,
	"date": "yyyy-mm-dd",
	"fat": xx.xxxxxxxxxxxxxxx,
	"logId": xxxxxxxxxxxxx,
	"source": "Aria",
	"time": "hh:mm:ss",
	"weight": xx.xx
}

Sample application

The bulk of the application is very similar to the MS Band sample: it first opens an authorization window and, once the user consents to the app being granted the requested privileges, it uses the access token to retrieve the actual data.

There are a few minor differences though:

  • Unlike the MS Health API, it requires an Authorization header in the authorization code request, which is basically the Base64-encoded client ID and client secret:
string base64String = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{Settings.Default.ClientID}:{Settings.Default.ClientSecret}"));
request.AddHeader("Authorization", $"Basic {base64String}");
  • It requires a POST request to the redeem URL. Apparently RestSharp has a weird behaviour here. You’d think a method called AddBody could be used to send the request body, right? Not quite! It doesn’t transmit the header so I kept getting a missing field error. So instead I used AddParameter:
string requestBody = $"client_id={Settings.Default.ClientID}&grant_type=authorization_code&redirect_uri={_redirectUri}&code={code}";
request.AddParameter("application/x-www-form-urlencoded", requestBody, ParameterType.RequestBody);

I found a lot of SO questions and a hilarious blog post addressing the issue. It’s good to know I wasn’t alone in this!

The rest is very straightforward: make the request, parse the JSON and assign the list to the chart:

public void GetWeightData()
{
    var endDate = DateTime.Today.ToString("yyyy-MM-dd");
    var startDate = DateTime.Today.AddDays(-30).ToString("yyyy-MM-dd");
    var url = $"{startDate}/{endDate}.json";
    var response = SendRequest(url);
    ParseWeightData(response.Content);
}

public void ParseWeightData(string rawContent)
{
    var weightJsonArray = JObject.Parse(rawContent)["weight"].ToArray();
    foreach (var weightJson in weightJsonArray)
    {
        var weight = new FitbitWeightResult();
        weight.Weight = weightJson["weight"]?.Value<decimal>() ?? 0;
        weight.Date = weightJson["date"].Value<DateTime>();
        WeightResults.Add(weight); // (collection bound to the chart; name assumed)
    }
}

And the output is:


So far I have managed to collect walking data from my MS Band and weight data from my Fitbit Aria. In this demo I limited the scope to weight data only, but the Fitbit API can also be used to track sleep, exercise and nutrition.

I currently use My Fitness Pal to log what I eat. They too have an API, but even though I’ve requested a key twice they haven’t given me one yet! The good news is Fitbit has one, and I can get my MFP logs through the Fitbit API. I also log my sleep on Fitbit manually, so the next step is to combine all of this in one application to get a nice overview.


c#, development, gadget, band comments edit

I bought this about 6 months ago and in this post I’ll talk about my experiences so far. They released version 2 of it last November, so I thought I should write about it before it gets terribly outdated!

Choosing the correct size

It comes in 3 sizes: Small, Medium and Large, and finding the correct size is the first challenge. They seem to have improved the sizing guide for version 2; in the original one they didn’t mention the appropriate size for your wrist’s circumference.

To have the same effect I followed someone’s advice on a forum about circumferences and downloaded a printable ruler to measure mine. It was right on the border between medium and large and I decided to go with medium, but even at the largest setting it’s not comfortable and irritates my skin. Most of the time I have to wear it on top of a large plaster.

Wearing notes

I hope they fixed it in v2, but the first-generation Band is quite bulky and uncomfortable. To be honest, most of the time I kept wearing it only because I had spent £170 and hadn’t come to terms with making a terrible investment. I wear it when I’m walking, but as soon as I arrive at home or work I take it off because it’s almost impossible to type with it on.

Band in action

If all you want is the fitness data you can use it without pairing it with your phone, but pairing is helpful as you can read your texts on it, see emails and answer calls.

I also installed the Microsoft Health app and started using Microsoft Health dashboard:


As soon as I started using it I noticed a discrepancy with the step count on the Microsoft Health dashboard. Turns out by default it was using the phone’s motion tracker as well, so it was doubling my steps. After I turned that off I started getting the exact same results as on the Band.

Turn off motion tracking to get accurate results

Developing with Band and Cloud API

Recording data about something helps tremendously in making it manageable. That’s why I like using these health & fitness gadgets. But of course the data doesn’t mean much if you don’t make sense of it.

In my sample application I used the Microsoft Health Cloud API to get the Band’s data. In order for this to work, the Band needs to sync with the Microsoft Health app on my phone, and the app syncs with my MS account.

The API has a great guide here that can be downloaded as a PDF. It outlines all the necessary steps very clearly and in detail.

Long story short, first you need to go to the Microsoft Account Developer Center and register an application. This gives you a client ID and client secret that are used for OAuth 2.0 authentication.

After the token has been acquired, using the actual API is quite simple; in my example app I used the /Summaries endpoint to get the daily step counts.


The sample application is a simple WPF desktop application. Upon launch it checks if the user has an access token stored; if not, it shows the OAuth window and the user needs to log in to their account.

To let the user login to their Microsoft account I added a web browser control to a window and navigated to authorization page:

string authUri = $"{baseUrl}/oauth20_authorize.srf?client_id={Settings.Default.ClientID}&scope={_scope}&response_type=code&redirect_uri={_redirectUri}";

Once the authorization is complete, the web browser is redirected to the redirect URI with a query parameter named code. This is not the actual token we need. Now we need to go to another URL (oauth20_token.srf) with this code and the client secret as parameters and redeem the actual access token:

private void webBrowser_Navigated(object sender, System.Windows.Navigation.NavigationEventArgs e)
{
    if (e.Uri.Query.Contains("code=") && e.Uri.Query.Contains("lc="))
    {
        string code = e.Uri.Query.Substring(1).Split('&')[0].Split('=')[1];

        string authUriRedeem = $"/oauth20_token.srf?client_id={Settings.Default.ClientID}&redirect_uri={_redirectUri}&client_secret={Settings.Default.ClientSecret}&code={code}&grant_type=authorization_code";

        var client = new RestClient(baseUrl);
        var request = new RestRequest(authUriRedeem, Method.GET);
        var response = (RestResponse)client.Execute(request);
        var content = response.Content;

        // Parse content and get the access token
        Settings.Default.AccessToken = JObject.Parse(content)["access_token"].Value<string>();
    }
}

After we get the authorization out of the way we can actually call the API and get some results. It’s a simple GET call and the response JSON is pretty straightforward. The only thing to keep in mind is to add the access token to the Authorization header:

request.AddHeader("Authorization", $"bearer {Settings.Default.AccessToken}");

Here’s a sample output for a daily summary:

{
	"userId": "67491ecc-c408-47b6-a3ad-041edb410524",
	"startTime": "2016-01-18T00:00:00.000+00:00",
	"endTime": "2016-01-19T00:00:00.000+00:00",
	"parentDay": "2016-01-18T00:00:00.000+00:00",
	"isTransitDay": false,
	"period": "Daily",
	"duration": "P1D",
	"stepsTaken": 2784,
	"caloriesBurnedSummary": {
		"period": "Daily",
		"totalCalories": 1119
	},
	"heartRateSummary": {
		"period": "Daily",
		"averageHeartRate": 77,
		"peakHeartRate": 88,
		"lowestHeartRate": 68
	},
	"distanceSummary": {
		"period": "Daily",
		"totalDistance": 232468,
		"totalDistanceOnFoot": 232468
	}
}

Since we now have the data, we can visualize it:

If you want to play with the sample code, don’t forget to register an app and update the settings with your own client ID and secret.


I guess the most fun would be to develop something that actually runs on the device. My next goal with my Band is to develop a custom tile using its SDK. I hope I can finish it while a first-gen device is still fairly relevant.


aws, s3, ec2, eip comments edit

I’ve been using AWS for a few years now, and over the years I’ve noticed there are some questions that keep popping up. I was confused by these issues at first, and as they seem to trip everybody up at some point I decided to compile a small list of common gotchas. I’ll update this post or write another one if I come across more of these.

1. The S3 folder delusion

When you use the AWS console you can create folders to group objects, but this is just a delusion deliberately created by AWS to simplify usage. In reality S3 has a flat structure and all objects are on the same level. Here’s the excerpt from the AWS documentation that states this fact:

In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.

So essentially AWS is just smart enough to recognize the standard folder notation we’ve been using for ages, to make things easier for us.
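An illustration of the idea (the helper is hypothetical): grouping a flat key list by everything up to the first delimiter reproduces what the console draws as top-level “folders”, much like the CommonPrefixes that S3’s list-objects call returns when you pass a Delimiter:

```javascript
// S3 keys are flat strings; a "folder" is just a shared key prefix.
function topLevelPrefixes(keys, delimiter) {
  var seen = {};
  keys.forEach(function (key) {
    var idx = key.indexOf(delimiter);
    if (idx !== -1) {
      seen[key.slice(0, idx + 1)] = true;
    }
  });
  return Object.keys(seen);
}

// topLevelPrefixes(['photos/2016/a.jpg', 'photos/b.jpg', 'notes.txt'], '/')
// → ['photos/']
```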

2. Reserved instance confusion

Reserved instances cost less but require some resource planning and paying some money up-front. Although there is now an option to buy reserved instances with no upfront payment, they generally shine with long-term commitments and heavy usage (always-on machines). The confusing bit is that you don’t reserve actual instances. Unfortunately the management console doesn’t do a great job of bridging that gap, and when you buy a reserved instance you don’t even know which running instance it covers.

Basically you just buy a subscription for 1 or 3 years and you pay less for any machine that matches that criteria. For instance, say you reserved 1 Linux t1.small instance for 12 months and you are running 2 t1.small Linux instances at the moment. You will pay the reserved instance price for one of them and the on-demand price for the other. From a financial point of view it doesn’t matter which one is which. If you shut down one of those instances, again regardless of which one, you will pay the reserved instance price for the remaining one as it matches your reservation’s criteria.

So that’s all there is to it really. A reserved instance is just about billing and has nothing to do with the actual running instances.
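The example above can be sketched as a toy billing model (made-up rates in cents, not real AWS prices): reserved capacity is applied to the bill, not to specific instances.

```javascript
// Reserved hours cover as many matching instance-hours as you have
// reservations; the rest are billed on-demand.
function hourlyCost(running, reserved, reservedRate, onDemandRate) {
  var reservedHours = Math.min(running, reserved);
  var onDemandHours = running - reservedHours;
  return reservedHours * reservedRate + onDemandHours * onDemandRate;
}

// 2 running, 1 reservation: hourlyCost(2, 1, 1, 2) → 3
// shut one down:           hourlyCost(1, 1, 1, 2) → 1
```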

3. Public/Elastic IP uncertainty

There are 3 types of IP addresses in AWS:

Private IPs are internal IPs that every instance is assigned. They remain the same throughout the lifespan of the instance and, as the name implies, they are not addressable from the Internet.

Public IPs are optional. They remain the same as long as the instance is running, but they are likely to change after a stop and start. So they are not reliable for web-accessible applications.

Elastic IPs are basically static IPs that never change. By default AWS gives you up to 5 EIPs; if you need more you have to contact support. They come free of charge as long as they are associated with a running instance, but they cost a small amount if you just want to keep them around without using them.


csharp, wpf, syncfusion, pdf comments edit

Every few months I have to clean up my desktop computer as dust gets stuck in the CPU fan and it gets hot and slow and loud and annoying! A few days ago I snapped and decided to phase out the desktop and make my laptop my main machine. Even though I love making a fresh start on a new computer, it comes with re-installing a bunch of stuff.

One missing thing that made itself obvious at the very start was a PDF reader. So far I’ve always been disappointed with PDF viewers. They are too bloated with unnecessary features and they always try to install a browser toolbar or an anti-virus trial.

My DIY PDF Reader

I started looking into my options for building my own PDF viewer and fortunately didn’t have to look far. SyncFusion is offering a free license for their products to indie developers and small startups. I used their great wizard control in a past project (Image2PDF), so I first checked if they had something for me. Turns out they have exactly what I needed, wrapped in an easy-to-use control: their WPF suite comes with a PdfViewerControl. It supports the standard navigation and zooming functions, which is pretty much all I need from a PDF viewer. So all I had to do was start a new WPF project, drag & drop a PdfViewerControl and run!

The whole XAML code looks like this:

<Window x:Class="PdfViewer.MainWindow"
        xmlns=""
        xmlns:x=""
        xmlns:PdfViewer="clr-namespace:Syncfusion.Windows.PdfViewer;assembly=Syncfusion.PdfViewer.WPF"
        Title="Easy PDF Viewer">
    <PdfViewer:PdfViewerControl HorizontalAlignment="Stretch" VerticalAlignment="Stretch" />
</Window>

And for my 5 minutes of work, this is the application I got:


If I need more features in the future I think I’ll just build on this. I still have the open-source PDF library iTextSharp, which I like quite a lot, and now I have SyncFusion’s PDF components and libraries in my arsenal, so I have no intention of dealing with adware-ridden, bloated applications with lots of security flaws.


csharp, ios, swift, wpf, xamarin, tfl api comments edit

Recently I discovered that Transport for London (TFL) has some great APIs that let me play around with some familiar data. It’s very easy to use, as an API key is not even mandatory. My main goal here is to discover what I can do with this data and build a few user interfaces consuming it. All source code is available on GitHub.

Tube status

The API endpoint I will use returns the current status of the tube lines as an array of the following JSON objects:

{
    "$type": "Tfl.Api.Presentation.Entities.Line, Tfl.Api.Presentation.Entities",
    "id": "central",
    "name": "Central",
    "modeName": "tube",
    "created": "2015-10-14T10:31:00.39",
    "modified": "2015-10-14T10:31:00.39",
    "lineStatuses": [
        {
            "$type": "Tfl.Api.Presentation.Entities.LineStatus, Tfl.Api.Presentation.Entities",
            "id": 0,
            "statusSeverity": 10,
            "statusSeverityDescription": "Good Service",
            "created": "0001-01-01T00:00:00",
            "validityPeriods": []
        }
    ],
    "routeSections": [],
    "serviceTypes": [
        {
            "$type": "Tfl.Api.Presentation.Entities.LineServiceTypeInfo, Tfl.Api.Presentation.Entities",
            "name": "Regular",
            "uri": "/Line/Route?ids=Central&serviceTypes=Regular"
        }
    ]
}

Visualizing the data - line colours

TFL has standard colours for tube lines, which are documented here. So I created a small lookup JSON using that reference:

[
    { "id": "bakerloo", "CMYK": { "M": 58, "Y": 100, "K": 33 }, "RGB":  { "R": 137, "G": 78, "B": 36 } },
    { "id": "central", "CMYK": { "M": 95, "Y": 100 }, "RGB":  { "R": 220, "G": 36, "B": 31 } },
    { "id": "circle",  "CMYK": { "M": 16, "Y": 100 }, "RGB":  { "R": 255, "G": 206, "B": 0 } },
    { "id": "district", "CMYK": { "C":  95, "Y": 100, "K": 27 }, "RGB":  { "R": 0, "G": 114, "B": 41 } },
    { "id": "hammersmith-city", "CMYK": { "M": 45, "Y": 10 }, "RGB":  { "R": 215, "G": 153, "B": 175 } },
    { "id": "jubilee", "CMYK": { "C": 5, "K": 45 }, "RGB":  { "R": 134, "G": 143, "B": 152 } },
    { "id": "metropolitan", "CMYK": { "C": 5, "M": 100, "K": 40 }, "RGB":  { "R": 117, "G": 16, "B": 86 } },
    { "id": "northern", "CMYK": { "K": 100 }, "RGB":  { "R": 0, "G": 0, "B": 0 } },
    { "id": "piccadilly", "CMYK": { "C": 100, "M": 88, "K": 5 }, "RGB":  { "R": 0, "G": 25, "B": 168 } },
    { "id": "victoria", "CMYK": { "C": 85, "M": 19 }, "RGB":  { "R": 0, "G": 160, "B": 226 } },
    { "id": "waterloo-city", "CMYK": { "C": 47, "Y": 32 }, "RGB":  { "R": 118, "G": 208, "B": 189 } }
]

I was hoping to map status values to colours as well (i.e. “Severe delays” to red) but there is no official guide for that. The status codes and values can be retrieved from another endpoint, which returns a collection of objects like this:

{
    "$type": "Tfl.Api.Presentation.Entities.StatusSeverity, Tfl.Api.Presentation.Entities",
    "modeName": "tube",
    "severityLevel": 2,
    "description": "Suspended"
}

I simplified it for my purposes (just the values for tube):

[
    { "severityLevel": 0, "description": "Special Service" },
    { "severityLevel": 1, "description": "Closed" },
    { "severityLevel": 2, "description": "Suspended" },
    { "severityLevel": 3, "description": "Part Suspended" },
    { "severityLevel": 4, "description": "Planned Closure" },
    { "severityLevel": 5, "description": "Part Closure" },
    { "severityLevel": 6, "description": "Severe Delays" },
    { "severityLevel": 7, "description": "Reduced Service" },
    { "severityLevel": 8, "description": "Bus Service" },
    { "severityLevel": 9, "description": "Minor Delays" },
    { "severityLevel": 10, "description": "Good Service" },
    { "severityLevel": 11, "description": "Part Closed" },
    { "severityLevel": 12, "description": "Exist Only" },
    { "severityLevel": 13, "description": "No Step Free Access" },
    { "severityLevel": 14, "description": "Change of frequency" },
    { "severityLevel": 15, "description": "Diverted" },
    { "severityLevel": 16, "description": "Not Running" },
    { "severityLevel": 17, "description": "Issues Reported" },
    { "severityLevel": 18, "description": "No Issues" },
    { "severityLevel": 19, "description": "Information" },
    { "severityLevel": 20, "description": "Service Closed" }
]

I will keep it around, but in this initial version I won’t use it, as the description is returned with the status query anyway. It was still a useful exercise, though, to figure out there is no “official” colour for status values. After all, what’s the colour of “No Step Free Access” or “Exist Only”? There is also a reason field that explains the effects of any delays etc., which should be displayed along with the severity, especially when there are disruptions in the service.

‘Nuff said about the data! Let’s start building something with it!

Core library

As I will build several clients, the API call to retrieve the tube status is encapsulated in the core library, which basically sends the HTTP request, parses the JSON and returns a LineInfo list:

public class Fetcher
{
    private readonly string _apiEndPoint = "";

    public List<LineInfo> GetTubeInfo()
    {
        var client = new RestClient(_apiEndPoint);
        var request = new RestRequest("/", Method.GET);
        request.AddHeader("Content-Type", "application/json");
        var response = (RestResponse)client.Execute(request);
        var content = response.Content;
        var tflResponse = JsonConvert.DeserializeObject<List<TflLineInfo>>(content);

        var lineInfoList = tflResponse.Select(t =>
            new LineInfo()
            {
                Id =,
                Name =,
                Reason = t.lineStatuses[0].reason,
                StatusSeverityDescription = t.lineStatuses[0].statusSeverityDescription,
                StatusSeverity = t.lineStatuses[0].statusSeverity
            }).ToList();

        return lineInfoList;
    }
}

LineInfo class contains the current status with the description. It also contains the colour defined by TFL for that tube line:

public class LineInfo
{
    public string Id { get; set; }
    public string Name { get; set; }
    public int StatusSeverity { get; set; }
    public string StatusSeverityDescription { get; set; }
    public string Reason { get; set; }
    public RGB LineColour
    {
        get { return TubeColourHelper.GetRGBColour(this.Id); }
    }
}

As the line colours aren’t returned by the service I have to populate them via a helper class:

public class TubeColourHelper
{
    private static Dictionary<string, RGB> _tubeColorRGBDictionary = new Dictionary<string, RGB>();

    static TubeColourHelper()
    {
        _tubeColorRGBDictionary = new Dictionary<string, RGB>();

        string json = File.ReadAllText("./data/colours.json");
        var tubeColors = JArray.Parse(json);
        foreach (var tubeColor in tubeColors)
        {
            _tubeColorRGBDictionary.Add(tubeColor["id"].Value<string>(), new RGB(
                tubeColor["RGB"]["R"]?.Value<int>() ?? 0,
                tubeColor["RGB"]["G"]?.Value<int>() ?? 0,
                tubeColor["RGB"]["B"]?.Value<int>() ?? 0));
        }
    }

    public static RGB GetRGBColour(string lineId)
    {
        if (!_tubeColorRGBDictionary.ContainsKey(lineId))
        {
            throw new ArgumentException($"Colour for line [{lineId}] could not be found in RGB colour map");
        }

        return _tubeColorRGBDictionary[lineId];
    }
}

The static constructor runs only the first time it is accessed, reads the colours.json and populates the dictionary. From then on it’s just a lookup in memory.
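For reference, the parsing code above implies a colours.json shaped roughly like this (the ids and RGB values here are illustrative examples, not an authoritative TFL palette):

```json
[
  { "id": "bakerloo", "RGB": { "R": 137, "G": 78, "B": 36 } },
  { "id": "central", "RGB": { "R": 220, "G": 36, "B": 31 } }
]
```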

First client: C# Console Application on Windows

Time to develop our first client and see some actual results. As is generally the case with console applications, this one is pretty simple and hassle-free. I decided to start with it just to verify that the core library works as expected.

class Program
{
    static void Main(string[] args)
    {
        var fetcher = new Fetcher();
        var viewer = new ConsoleViewer();

        bool exit = false;
        do
        {
            viewer.Display(fetcher.GetTubeInfo());

            ConsoleKeyInfo key = System.Console.ReadKey();
            switch (key.Key)
            {
                case ConsoleKey.F5:
                    break;
                case ConsoleKey.Q:
                    exit = true;
                    break;
                default:
                    System.Console.WriteLine("Unknown command");
                    break;
            }
        } while (!exit);
    }
}

The app displays the results when it’s first run. You can refresh by pressing F5 or quit by pressing Q. The output looks like this:

The problem with the console application is that I wasn’t able to use the RGB values directly, as the console only supports an enumeration called ConsoleColor.
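A rough workaround (my own sketch, not something in the project) is to map each RGB value to the nearest of the console colours by Euclidean distance:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: picks the ConsoleColor whose (approximate) RGB value is closest
// to the requested colour. The palette values are assumptions based on the
// classic Windows console colours, not an official mapping.
public static class ConsoleColourMapper
{
    private static readonly Dictionary<ConsoleColor, int[]> Palette =
        new Dictionary<ConsoleColor, int[]>
        {
            { ConsoleColor.Black,     new[] { 0, 0, 0 } },
            { ConsoleColor.DarkBlue,  new[] { 0, 0, 128 } },
            { ConsoleColor.DarkGreen, new[] { 0, 128, 0 } },
            { ConsoleColor.DarkRed,   new[] { 128, 0, 0 } },
            { ConsoleColor.Gray,      new[] { 192, 192, 192 } },
            { ConsoleColor.Blue,      new[] { 0, 0, 255 } },
            { ConsoleColor.Green,     new[] { 0, 255, 0 } },
            { ConsoleColor.Red,       new[] { 255, 0, 0 } },
            { ConsoleColor.Yellow,    new[] { 255, 255, 0 } },
            { ConsoleColor.White,     new[] { 255, 255, 255 } }
        };

    public static ConsoleColor Nearest(int r, int g, int b)
    {
        // Smallest squared Euclidean distance in RGB space wins.
        return Palette
            .OrderBy(p => Sq(p.Value[0] - r) + Sq(p.Value[1] - g) + Sq(p.Value[2] - b))
            .First().Key;
    }

    private static int Sq(int x) { return x * x; }
}
```

This obviously loses fidelity, which is why the graphical clients can show the real line colours while the console version can’t.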

Second client: WPF Application on Windows

Now let’s look at a more graphical UI, a WPF client:

Same idea, display the results upon first run then call the service again on Refresh button’s click event.

Third client: iOS App with Xamarin

I’ve recently subscribed to Xamarin and one of the main reasons for starting this project was to see it in action. What I was mostly curious about was whether I could use my C# libraries as NuGet packages in an iOS application developed with Xamarin. This would allow me to build apps significantly faster.

It didn’t work out of the box because I used C# 6.0 and .NET Framework 4.5.2 on the Windows side, which weren’t available on the Mac.

But it wasn’t too hard to change the framework and make some small modifications to get it working. The good news is that Xamarin supports NuGet, and most common libraries have Mono support, including RestSharp and Newtonsoft.Json which I used in this project.

I had to remove and re-add the packages, but in the end they worked fine so I didn’t have to change anything in the code.

I won’t go into implementation details as there’s not much change. The app has one table view controller and it calls the core library to get the results and assigns them to the table’s data source. It’s a relief that I could have the same functionality as Windows with just minor changes.

public override void ViewDidLoad()
{
    base.ViewDidLoad();

    var fetcher = new Fetcher();
    var lineInfoList = fetcher.GetTubeInfo();
    TableView.Source = new TubeStatusTableViewControllerSource(lineInfoList.ToArray());
}

Anyway, more on Xamarin later after I cover the Swift version.

Fourth client: iOS App with Swift

Last but not least comes the Swift client, built with Xcode. Naturally this one cannot use the core library that the first three clients shared (which is good, because I was looking for a chance to practice handling HTTP requests and parsing JSON with Swift anyway).

I didn’t use any external libraries so the implementation is a bit long, but it mainly sends the request using NSURLSession and NSURLSessionDataTask.

func getTubeStatus(completionHandler: (result: [LineInfo]?, error: NSError?) -> Void) {
    let parameters = ["detail" : "true"]
    let mutableMethod : String = Methods.TubeStatus
    taskForGETMethod(mutableMethod, parameters: parameters) { JSONResult, error in
        if let error = error {
            completionHandler(result: nil, error: error)
        } else {
            if let results = JSONResult as? [AnyObject] {
                let lineStatus = LineInfo.lineStatusFromResults(results)
                completionHandler(result: lineStatus, error: nil)
            }
        }
    }
}

Then constructs the LineInfo objects by calling the static lineStatusFromResults method:

static func lineStatusFromResults(results: [AnyObject]) -> [LineInfo] {
    var lineStatus = [LineInfo]()
    for result in results {
        lineStatus.append(LineInfo(status: result))
    }
    return lineStatus
}

which creates a new LineInfo and adds to resultset:

init(status: AnyObject) {
    Id = status["id"] as! String
    Name = status["name"] as! String
    StatusSeverity = status["lineStatuses"]!![0]!["statusSeverity"] as! Int
    StatusSeverityDescription = status["lineStatuses"]!![0]!["statusSeverityDescription"] as! String
    LineColour = RGB(R: 0, G: 0, B: 0)
}

JSON parsing is a bit nasty because of unwrapping the optionals. I’ll look into SwiftyJSON later on, which is a popular JSON library for Swift.

Finally the controller displays the results:

override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)

    TFLClient.sharedInstance().getTubeStatus { lineStatus, error in
        if let lineStatus = lineStatus {
            self.lineInfoList = lineStatus
            dispatch_async(dispatch_get_main_queue()) {
                // UI updates must happen on the main queue
                self.tableView.reloadData()
            }
        } else {
            print(error)
        }
    }
}

And the custom cells are created when data is loaded and the text and colours are set:

override func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
    let cell = tableView.dequeueReusableCellWithIdentifier("TubeInfoCell", forIndexPath: indexPath) as! TubeInfoTableViewCell
    let lineStatus = lineInfoList[indexPath.row]
    cell.backgroundColor = colourHelper.getTubeColor(lineStatus.Id)
    cell.lineName?.text = lineStatus.Name
    cell.lineName?.textColor = UIColor.whiteColor()
    cell.severityDescription?.text = lineStatus.StatusSeverityDescription
    cell.severityDescription?.textColor = UIColor.whiteColor()

    return cell
}

And here’s the output:

Xamarin vs Swift

Here’s a quick overview and comparison of both platforms based on my (limited) experiences with this toy project:

  • Xcode is much faster when building and deploying
  • Xamarin Studio doesn’t seem to be very intuitive at times. For example, the code snippets use Java notation
  • The more I use Swift the more I like it, and it doesn’t slow me down terribly. Once you get used to it, the difference is more or less just syntax. For example, the following two methods do the same thing:


      public override nint RowsInSection (UITableView tableview, nint section)
      {
          return lineInfoList.Length;
      }


      override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
          return lineInfoList.count
      }

    I could even argue that I’m more comfortable with the Swift version here, as I had no idea what an “nint” (the input parameter type in the Xamarin version) was.

  • The idea behind the Xamarin subscription was to develop iOS apps quickly, as I’m a seasoned C# developer and feel comfortable with it. But as it turns out, I can’t move as fast as I expected. With the Indie subscription you can only use Xamarin Studio. Enabling Visual Studio is only allowed with the Business version, which costs $1000/year. And Xamarin Studio is a brand new IDE for me, so it definitely has a learning curve. Also, I’m getting used to Xcode now (despite the fact that it crashes a hundred times a day on average!)


This was just a reconnaissance mission to explore the TFL API and iOS development with Xamarin and Swift. It was a fun exercise for me; I hope anyone who reads this can benefit from it too.


csharp, nancy comments edit

Recently I needed to simulate HTTP responses from a 3rd party. I decided to use Nancy to quickly build a local web server that would handle my test requests and return the responses I wanted.

Here’s the definition of Nancy from their official website:

Nancy is a lightweight, low-ceremony, framework for building HTTP based services on .Net and Mono.

It can handle DELETE, GET, HEAD, OPTIONS, POST, PUT and PATCH requests. It’s very easy to customize and extend as it’s module-based. In order to build our tiny web server we are going to need the self-hosting package:

Install-Package Nancy.Hosting.Self

This also installs the core Nancy package automatically, as the self-hosting package depends on it.

Self-hosting in action

The container application can be anything as long as it keeps running one way or another. A background service would be ideal for this task. Since all I need is testing, I just created a console application and added a Console.ReadKey() statement to keep it “alive”:

class Program
{
    private string _url = "http://localhost";
    private int _port = 12345;
    private NancyHost _nancy;

    public Program()
    {
        var uri = new Uri($"{_url}:{_port}/");
        _nancy = new NancyHost(uri);
    }

    private void Start()
    {
        _nancy.Start();
        Console.WriteLine($"Started listening on port {_port}");
        Console.ReadKey();
        _nancy.Stop();
    }

    static void Main(string[] args)
    {
        var p = new Program();
        p.Start();
    }
}

If you try this code, it’s likely that you’ll get an error (AutomaticUrlReservationCreationFailureException):


The Nancy self host was unable to start, as no namespace reservation existed for the provided url(s).

Please either enable UrlReservations.CreateAutomatically on the HostConfiguration provided to 
the NancyHost, or create the reservations manually with the (elevated) command(s):

netsh http add urlacl url="http://+:12345/" user="Everyone"

There are three ways to resolve this issue, two of which are already suggested in the error message:

  1. In an elevated command prompt (fancy way of saying run as administrator!), run

     netsh http add urlacl url="http://+:12345/" user="Everyone"

    What add urlacl does is

    Reserves the specified URL for non-administrator users and accounts

    If you want to delete it later on you can use the following command

     netsh http delete urlacl url=http://+:12345/
  2. Specify a host configuration to NancyHost like this:

     var configuration = new HostConfiguration()
     {
         UrlReservations = new UrlReservations() { CreateAutomatically = true }
     };
     _nancy = new NancyHost(configuration, uri);

    This essentially does the same thing, and a UAC prompt pops up, so it’s not all that automatic!

  3. Run Visual Studio (and the standalone application when deployed) as administrator

After applying any one of the three solutions, let’s run the application and try the address http://localhost:12345 in a browser and we get…

Excellent! We are actually getting a response from the server even though it’s just a 404 error.

Now let’s add some functionality, otherwise it isn’t terribly useful.

Handling requests

Requests are handled by modules. Creating a module is as simple as creating a class deriving from NancyModule. Let’s create two handlers for the root, one for GET verbs and one for POST:

public class SimpleModule : Nancy.NancyModule
{
    public SimpleModule()
    {
        Get["/"] = _ => "Received GET request";

        Post["/"] = _ => "Received POST request";
    }
}

Nancy automatically discovers all modules so we don’t have to register them. If there are conflicting handlers, the last one discovered overrides the previous ones. For example, the following would work fine and the second GET handler would be executed:

public class SimpleModule : Nancy.NancyModule
{
    public SimpleModule()
    {
        Get["/"] = _ => "Received GET request";

        Post["/"] = _ => "Received POST request";

        Get["/"] = _ => "Let me have the request!";
    }
}

Working with input data: Request parameters

In the simple examples above we used an underscore to represent the input as we didn’t care about it, but most of the time we do. In that case we can get the request parameters as a DynamicDictionary (a type that comes with Nancy). For example, let’s create a route for /user:

public SimpleModule()
{
    Get["/user/{id}"] = parameters =>
    {
        if ((int)parameters.id == 666)
        {
            return $"All hail user #{parameters.id}! \\m/";
        }

        return "Just a regular user!";
    };
}

And send the GET request:

GET http://localhost:12345/user/666 HTTP/1.1
User-Agent: Fiddler
Host: localhost:12345
Content-Length: 2

which would return the response:

HTTP/1.1 200 OK
Content-Type: text/html
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 10 Nov 2015 11:40:08 GMT
Content-Length: 23

All hail user #666! \m/

Working with input data: Request body

Now let’s try to handle the data posted in the request body. Data posted in the body can be accessed through the this.Request.Body property. Take the following request:

POST http://localhost:12345/ HTTP/1.1
User-Agent: Fiddler
Host: localhost:12345
Content-Length: 55
Content-Type: application/json

{
    "username": "volkan",
    "isAdmin": "sure!"
}

This code first converts the request stream to a string and then deserializes it into a POCO:

Post["/"] = _ =>
{
    var bodyStream = this.Request.Body;
    var length = this.Request.Body.Length;
    var data = new byte[length];
    bodyStream.Read(data, 0, (int)length);
    var body = System.Text.Encoding.Default.GetString(data);

    var request = JsonConvert.DeserializeObject<SimpleRequest>(body);
    return 200;
};

If the data was posted from a form, for example, and sent in the following format in the body

username=volkan&isAdmin=sure!
then we could simply convert it to a dictionary with a little bit of LINQ:

Post["/"] = parameters =>
{
    var bodyStream = this.Request.Body;
    long length = this.Request.Body.Length;
    byte[] data = new byte[length];
    bodyStream.Read(data, 0, (int)length);
    string body = System.Text.Encoding.Default.GetString(data);
    var p = body.Split('&')
        .Select(s => s.Split('='))
        .ToDictionary(k => k.ElementAt(0), v => v.ElementAt(1));

    if (p["username"] == "volkan")
    {
        return "awesome!";
    }

    return "meh!";
};

This is nice, but it’s a lot of work to read the whole body and manually deserialize it! Fortunately Nancy supports model binding. First we need to add the using statement, as the Bind extension method lives in Nancy.ModelBinding:

using Nancy.ModelBinding;

Now we can simplify the code by the help of model binding:

Post["/"] = _ =>
{
    var request = this.Bind<SimpleRequest>();
    return request.username;
};

The important thing to note is to send the data with the appropriate content type. For the form data example the request should be like this:

POST http://localhost:12345/ HTTP/1.1
User-Agent: Fiddler
Host: localhost:12345
Content-Length: 29
Content-Type: application/x-www-form-urlencoded

username=volkan&isAdmin=sure!
It also works for binding JSON to the same POCO.

Preparing responses

Nancy is very flexible in terms of responses. As shown in the above examples you can return a string

Post["/"] = _ =>
{
    return "This is a valid response";
};

which would yield this HTTP message on the wire:

HTTP/1.1 200 OK
Content-Type: text/html
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 10 Nov 2015 15:48:12 GMT
Content-Length: 24

This is a valid response

Response code is set to 200 - OK automatically and the text is sent in the response body.

We can just set the code and return a response with a simple one-liner:

Post["/"] = _ => 405;

which would produce:

HTTP/1.1 405 Method Not Allowed
Content-Type: text/html
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 10 Nov 2015 15:51:36 GMT
Content-Length: 0

To prepare more complex responses with headers and everything we can construct a new Response object like this:

Post["/"] = _ =>
{
    string jsonString = "{ username: \"admin\", password: \"just kidding\" }";
    byte[] jsonBytes = Encoding.UTF8.GetBytes(jsonString);

    return new Response()
    {
        StatusCode = HttpStatusCode.OK,
        ContentType = "application/json",
        ReasonPhrase = "Because why not!",
        Headers = new Dictionary<string, string>()
        {
            { "Content-Type", "application/json" },
            { "X-Custom-Header", "Sup?" }
        },
        Contents = c => c.Write(jsonBytes, 0, jsonBytes.Length)
    };
};

and we would get this at the other end of the line:

HTTP/1.1 200 Because why not!
Content-Type: application/json
Server: Microsoft-HTTPAPI/2.0
X-Custom-Header: Sup?
Date: Tue, 10 Nov 2015 16:09:19 GMT
Content-Length: 47

{ username: "admin", password: "just kidding" }

Response also comes with a lot of useful methods like AsJson, AsXml and AsRedirect. For example we could simplify returning a JSON response like this:

Post["/"] = _ =>
{
    return Response.AsJson<SimpleResponse>(
        new SimpleResponse()
        {
            Status = "A-OK!", ErrorCode = 1, Description = "All systems are go!"
        });
};

and the result would contain the appropriate header and status code:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 10 Nov 2015 16:19:18 GMT
Content-Length: 68

{"status":"A-OK!","errorCode":1,"description":"All systems are go!"}

One extension I like is the AsRedirect method. The following example would return Google search results for a given parameter:

Get["/search"] = parameters =>
{
    string s = this.Request.Query["q"];
    return Response.AsRedirect($"{s}");
};


What if we needed to support HTTPS for our tests for some reason? Fear not, Nancy covers that too. By default, if we just try to use HTTPS by changing the protocol, we get this exception:

The connection to ‘localhost’ failed. System.Security.SecurityException Failed to negotiate HTTPS connection with HTTPS handshake to localhost (for #2) failed. System.IO.IOException Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

The solution is to create a self-signed certificate and add it using the netsh http add command. Here’s the step-by-step process:

  1. Create a self-signed certificate: Open a Visual Studio command prompt and enter the following command:

     makecert nancy.cer

     You can provide more properties so that the certificate shows up with a name that makes sense; the available makecert options are documented on MSDN.

  2. Run mmc and add the Certificates snap-in. Make sure to select Computer Account.

I selected My User Account at first and it gave the following error:

SSL Certificate add failed, Error: 1312 A specified logon session does not exist. It may already have been terminated.

In that case the solution is just to drag and drop the certificate to the computer account as shown below:

  3. Right-click on Certificates (Local Computer) -> Personal -> Certificates and select All Tasks -> Import, then browse to the nancy.cer file created in Step 1

  4. Double-click on the certificate, switch to the Details tab, scroll to the bottom and copy the Thumbprint value (and remove the spaces after copying it)

  5. Now enter the following commands. The first one is the same as before, just with HTTPS as the protocol. The second command adds the certificate we’ve just created.

netsh http add urlacl url=https://+:12345/ user="Everyone"

netsh http add sslcert ipport= certhash=653a1c60d4daaae00b2a103f242eac965ca21bec appid={A0DEC7A4-CF28-42FD-9B85-AFFDDD4FDD0F} clientcertnegotiation=enable

Here appid can be any GUID.
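If you need a fresh GUID for the appid parameter, any generator will do; for example, this throwaway snippet of mine prints one in the braced format netsh expects:

```csharp
using System;

class GuidGen
{
    static void Main()
    {
        // The "B" format wraps the GUID in braces, matching the netsh appid syntax.
        Console.WriteLine(Guid.NewGuid().ToString("B"));
    }
}
```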

Let’s take it out for a test drive:

Get["/"] = parameters =>
{
    return "Response over HTTPS! Weeee!";
};

This request

GET https://localhost:12345 HTTP/1.1
Host: localhost:12345

returns this response

HTTP/1.1 200 OK
Content-Type: text/html
Server: Microsoft-HTTPAPI/2.0
Date: Wed, 11 Nov 2015 10:24:58 GMT
Content-Length: 27

Response over HTTPS! Weeee!


There are a few alternatives when you need a small web server to test something locally, and Nancy is one of them. It’s easy to configure and use, and it’s lightweight. Apparently you can even host it on a Raspberry Pi!


csharp, design, development, mef, managed extensibility framework comments edit

In this post I will try to cover some of the basic concepts and features of MEF over a working example. In future posts I’ll demonstrate MEF usage with more complex applications.


Many successful and popular applications, such as Visual Studio, Eclipse, Sublime Text, support a plug-in model. Adopting a plugin-based model, whenever possible, has quite a few advantages:

  • Helps to keep the core lightweight instead of cramming all features into the same code-base.
  • Helps to make the application more robust: new functionality can be added without changing any existing code.
  • Helps to make development easier, as different modules can be developed by different people simultaneously.
  • Allows plugin development without distributing the main application’s source code.

Extensibility is based on composition, and it is very helpful for building SOLID-compliant applications as it embodies the Open/Closed and Dependency Inversion principles.

MEF is part of the .NET Framework as of version 4.0 and it lives inside the System.ComponentModel.Composition namespace. It is also the standard extension model used in Visual Studio. It is not meant to replace Inversion of Control (IoC) frameworks; rather, it is meant to simplify building extensible applications using dependency injection based on component composition.

Some terminology

Before diving into the sample, let’s look at some MEF terminology and core terms:

  • Part: Basic elements in MEF are called parts. Parts can provide services to other parts (exporting) and can consume other parts’ services (importing).

  • Container: This is the part that performs the composition. Most common one is CompositionContainer class.

  • Catalog: In order to discover the parts, containers use catalogs. There are various catalogs supplied by MEF such as

    • AssemblyCatalog: Discovers attributed parts in a managed code assembly.
    • DirectoryCatalog: Discovers attributed parts in the assemblies in a specified directory.
    • AggregateCatalog: A catalog that combines the elements of ComposablePartCatalog objects.
    • ApplicationCatalog: Discovers attributed parts in the dynamic link library (DLL) and EXE files in an application’s directory and path
  • Export / import: The way the plugins make themselves discoverable is by exporting their implementation of a contract. A contract is simply a common interface that the application and the plugins understand so they can speak the same language so to speak.

Sample Project

As I learn best by playing around, I decided to start with a simple project. I’ve recently published a sample project for Strategy design pattern which I blogged here. In this post I will use the same project and convert it into a plugin-based version.

IP Checker with MEF v1: Bare Essentials

At this point we have everything we need for the first version of the plugin-based IP checker. Firstly, I divided my project into 5 parts:

  • IPCheckerWithMEF.Lab: The consumer application
  • IPCheckerWithMEF.Contract: Project containing the common interface
  • Plugins: Extensions for the main application
    • IPCheckerWithMEF.Plugins.AwsIPChecker
    • IPCheckerWithMEF.Plugins.CustomIPChecker
    • IPCheckerWithMEF.Plugins.DynDnsIPChecker

I set the output folder of the plugins to a directory called Plugins at the project level.

Let’s see some code!

For this basic version we need 3 things:

  • A container to handle the composition.
  • A catalog that the container can use to discover the plugins.
  • A way to tell which classes should be discovered and imported.

In this sample I used a DirectoryCatalog that points to the output folder of the plugin projects. So after adding the required parts above the main application shaped up to be something like this:

public class MainApplication
{
    private CompositionContainer _container;

    [ImportMany]
    public List<IIpChecker> IpCheckerList { get; set; }

    public MainApplication(string pluginFolder)
    {
        var catalog = new DirectoryCatalog(pluginFolder);
        _container = new CompositionContainer(catalog);
        LoadPlugins();
    }

    public void LoadPlugins()
    {
        try
        {
            _container.ComposeParts(this);
        }
        catch (CompositionException compositionException)
        {
            Console.WriteLine(compositionException.ToString());
        }
    }
}

In the constructor, it instantiates a DirectoryCatalog with the given path and passes it to the container. The container imports IIpChecker type objects found in the assemblies inside that folder. Note that we didn’t do anything to populate IpCheckerList. By decorating it with the ImportMany attribute we declared that it’s to be filled by the composition engine. In this example we could only use ImportMany, as opposed to Import which would look for a single part to compose. If we used Import we would get the following exception:

Now, to complete the circle, we need to export our plugins with the Export attribute, such as:

[Export(typeof(IIpChecker))]
public class AwsIPChecker : IIpChecker
{
    public string GetExternalIp()
    {
        // ...
    }
}

Alternatively, we can use the InheritedExport attribute on the interface to export any class that implements the IIpChecker interface:

[InheritedExport(typeof(IIpChecker))]
public interface IIpChecker
{
    string GetExternalIp();
}

This way the plugins would be discovered even if they weren’t decorated with the Export attribute, because of this inheritance model.

Putting it together

Now that we’ve seen the plugins that export the implementation and the part that discovers and imports them, let’s see it all in action:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Starting the main application");

        string pluginFolder = @"..\..\..\Plugins\";
        var app = new MainApplication(pluginFolder);

        Console.WriteLine($"{app.IpCheckerList.Count} plugin(s) loaded..");
        Console.WriteLine("Executing all plugins...");

        foreach (var ipChecker in app.IpCheckerList)
        {
            Console.WriteLine(ObfuscateIP(ipChecker.GetExternalIp()));
        }
    }

    private static string ObfuscateIP(string actualIp)
    {
        return Regex.Replace(actualIp, "[0-9]", "*");
    }
}

We create the consumer application that loads all the plugins in the directory we specify. Then we can loop over and execute all of them:

So far so good. Now, let’s try to export some metadata about our plugins so that we can display the loaded plugins to the user.

IP Checker with MEF v2: Metadata comes into play

In almost all applications plugins come with some sort of information so that the user can identify which ones have been installed and what they do. To export the extra data let’s add a new interface:

public interface IPluginInfo
{
    string DisplayName { get; }
    string Description { get; }
    string Version { get; }
}

And on the plugins we fill that data and export it using the ExportMetadata attribute:

[ExportMetadata("DisplayName", "Custom IP Checker")]
[ExportMetadata("Description", "Uses homebrew service developed with Node.js and hosted on Heroku")]
[ExportMetadata("Version", "2.1")]
public class CustomIpChecker : IIpChecker
{
    // ...
}

In v1, we only imported a list of objects implementing IIpChecker. So how do we accommodate this new piece of information? In order to do that we have to change the way we import the plugins and use the Lazy construct:

public List<Lazy<IIpChecker, IPluginInfo>> Plugins { get; set; }

According to MSDN this is mandatory to get metadata out of plugins:

The importing part can use this data to decide which exports to use, or to gather information about an export without having to construct it. For this reason, an import must be lazy to use metadata

So let’s load and display this new plugin information:

private static void PrintPluginInfo()
{
    Console.WriteLine($"{_app.Plugins.Count} plugin(s) loaded..");
    Console.WriteLine("Displaying plugin info...");

    foreach (var ipChecker in _app.Plugins)
    {
        Console.WriteLine($"Name: {ipChecker.Metadata.DisplayName}");
        Console.WriteLine($"Description: {ipChecker.Metadata.Description}");
        Console.WriteLine($"Version: {ipChecker.Metadata.Version}");
    }
}

Notice that we access the metadata through [PluginName].Metadata.[PropertyName] properties. To access the actual plugin and call the exported methods we have to use [PluginName].Value such as:

foreach (var ipChecker in _app.Plugins)
{
    Console.WriteLine(ObfuscateIP(ipChecker.Value.GetExternalIp()));
}

Managing the plugins

What if we want to add or remove plugins at runtime? We can do it without restarting the application by refreshing the catalog and calling the container’s ComposeParts method again.

In this sample application I added a FileSystemWatcher that listens to the Created and Deleted events on the Plugins folder and calls the LoadPlugins method of the application when an event fires. LoadPlugins first refreshes the catalog and composes the parts:

public void LoadPlugins()
{
    try
    {
        _catalog.Refresh();
        _container.ComposeParts(this);
    }
    catch (CompositionException compositionException)
    {
        Console.WriteLine(compositionException.ToString());
    }
}

But making this change alone isn’t sufficient and we would end up getting a CompositionException:

By default recomposition is disabled, so we have to enable it explicitly while importing parts:

[ImportMany(AllowRecomposition = true)]
public List<Lazy<IIpChecker, IPluginInfo>> Plugins { get; set; }

After these changes the final version of composing class looks like this:

public class MainApplication
{
    private CompositionContainer _container;
    private DirectoryCatalog _catalog;

    [ImportMany(AllowRecomposition = true)]
    public List<Lazy<IIpChecker, IPluginInfo>> Plugins { get; set; }

    public MainApplication(string pluginFolder)
    {
        _catalog = new DirectoryCatalog(pluginFolder);
        _container = new CompositionContainer(_catalog);
        LoadPlugins();
    }

    public void LoadPlugins()
    {
        try
        {
            _catalog.Refresh();
            _container.ComposeParts(this);
        }
        catch (CompositionException compositionException)
        {
            Console.WriteLine(compositionException.ToString());
        }
    }
}

and the client app:

class Program
{
    private static readonly string _pluginFolder = @"..\..\..\Plugins\";
    private static FileSystemWatcher _pluginWatcher;
    private static MainApplication _app;

    static void Main(string[] args)
    {
        Console.WriteLine("Starting the main application");

        _pluginWatcher = new FileSystemWatcher(_pluginFolder);
        _pluginWatcher.Created += PluginWatcher_FolderUpdated;
        _pluginWatcher.Deleted += PluginWatcher_FolderUpdated;
        _pluginWatcher.EnableRaisingEvents = true;

        _app = new MainApplication(_pluginFolder);

        PrintPluginInfo();

        Console.ReadKey();
    }

    private static void PrintPluginInfo()
    {
        Console.WriteLine($"{_app.Plugins.Count} plugin(s) loaded..");
        Console.WriteLine("Displaying plugin info...");

        foreach (var ipChecker in _app.Plugins)
        {
            Console.WriteLine($"Name: {ipChecker.Metadata.DisplayName}");
            Console.WriteLine($"Description: {ipChecker.Metadata.Description}");
            Console.WriteLine($"Version: {ipChecker.Metadata.Version}");
        }
    }

    private static void PluginWatcher_FolderUpdated(object sender, FileSystemEventArgs e)
    {
        Console.WriteLine("Folder changed. Reloading plugins...");

        _app.LoadPlugins();
        PrintPluginInfo();
    }
}


After these changes I started the application with 2 plugins in the target folder, added a 3rd one while it was running, and got this output:

It also works the same way for deleted plugins, but not for updates, because the assemblies are locked by .NET. Adding new plugins at runtime is painless, but removing and updating require more attention as the plugin might be running at the time.