dev csharp, nancy

Recently I needed to simulate HTTP responses from a 3rd party. I decided to use Nancy to quickly build a local web server that would handle my test requests and return the responses I wanted.

Here’s the definition of Nancy from their official website:

Nancy is a lightweight, low-ceremony, framework for building HTTP based services on .Net and Mono.

It can handle DELETE, GET, HEAD, OPTIONS, POST, PUT and PATCH requests. It’s very easy to customize and extend as it’s module-based. In order to build our tiny web server we are going to need the self-hosting package:

Install-Package Nancy.Hosting.Self

This will also install Nancy itself automatically, since the self-hosting package depends on it.

Self-hosting in action

The container application can be anything as long as it keeps running one way or another. A background service would be ideal for this task. Since all I need is testing, I just created a console application and added a Console.ReadKey() call to keep it “alive”:

class Program
{
    private string _url = "http://localhost";
    private int _port = 12345;
    private NancyHost _nancy;

    public Program()
    {
        var uri = new Uri($"{_url}:{_port}/");
        _nancy = new NancyHost(uri);
    }

    private void Start()
    {
        _nancy.Start();
        Console.WriteLine($"Started listennig port {_port}");
        Console.ReadKey();
        _nancy.Stop();
    }

    static void Main(string[] args)
    {
        var p = new Program();
        p.Start();
    }
}

If you try this code, it’s likely that you’ll get an AutomaticUrlReservationCreationFailureException saying:

The Nancy self host was unable to start, as no namespace reservation existed for the provided url(s).

Please either enable UrlReservations.CreateAutomatically on the HostConfiguration provided to 
the NancyHost, or create the reservations manually with the (elevated) command(s):

netsh http add urlacl url="http://+:12345/" user="Everyone"

There are three ways to resolve this issue, two of which are already suggested in the error message:

  1. In an elevated command prompt (fancy way of saying run as administrator!), run

     netsh http add urlacl url="http://+:12345/" user="Everyone"
    

    What add urlacl does is

    Reserves the specified URL for non-administrator users and accounts

    If you want to delete it later on you can use the following command

     netsh http delete urlacl url=http://+:12345/
    
  2. Specify a host configuration to NancyHost like this:

     var configuration = new HostConfiguration()
     {
         UrlReservations = new UrlReservations() { CreateAutomatically = true }
     };
    	
     _nancy = new NancyHost(configuration, uri);
    

    This essentially does the same thing, and a UAC prompt pops up, so it’s not all that automatic!

  3. Run Visual Studio (and the standalone application when deployed) as administrator

After applying any one of these three solutions, let’s run the application and try the address http://localhost:12345 in a browser, and we get …

Excellent! We are actually getting a response from the server even though it’s just a 404 error.

Now let’s add some functionality, otherwise it isn’t terribly useful.

Handling requests

Requests are handled by modules. Creating a module is as simple as creating a class deriving from NancyModule. Let’s create two handlers for the root, one for GET verbs and one for POST:

public class SimpleModule : Nancy.NancyModule
{
    public SimpleModule()
    {
        Get["/"] = _ => "Received GET request";

        Post["/"] = _ => "Received POST request";
    }
}

Nancy automatically discovers all modules, so we don’t have to register them. If there are conflicting handlers, the last one discovered overrides the previous ones. For example, the following module would work fine and the second GET handler would be the one executed:

public class SimpleModule : Nancy.NancyModule
{
    public SimpleModule()
    {
        Get["/"] = _ => "Received GET request";

        Post["/"] = _ => "Received POST request";

        Get["/"] = _ => "Let me have the request!";
    }
}

Working with input data: Request parameters

In the simple examples above we used an underscore to represent the input because we didn’t care about it, but most of the time we will. In that case we can get the request parameters as a DynamicDictionary (a type that comes with Nancy). For example, let’s create a route for /user:

public SimpleModule()
{
    Get["/user/{id}"] = parameters =>
    {
        if (((int)parameters.id) == 666)
        {
            return $"All hail user #{parameters.id}! \\m/";
        }
        else
        {
            return "Just a regular user!";
        }
    };
}

And send the GET request:

GET http://localhost:12345/user/666 HTTP/1.1
User-Agent: Fiddler
Host: localhost:12345
Content-Length: 2

which would return the response:

HTTP/1.1 200 OK
Content-Type: text/html
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 10 Nov 2015 11:40:08 GMT
Content-Length: 23

All hail user #666! \m/

Working with input data: Request body

Now let’s try to handle the data posted in the request body. Data posted in the body can be accessed through the this.Request.Body property. For example, take the following request:

POST http://localhost:12345/ HTTP/1.1
User-Agent: Fiddler
Host: localhost:12345
Content-Length: 55
Content-Type: application/json

{
    "username": "volkan",
    "isAdmin": "sure!"
}

the following code first reads the request stream into a string and then deserializes it into a POCO (JsonConvert here comes from Json.NET, i.e. Newtonsoft.Json):

Post["/"] = _ =>
{
    var id = this.Request.Body;
    var length = this.Request.Body.Length;
    var data = new byte[length];
    id.Read(data, 0, (int)length);
    var body = System.Text.Encoding.Default.GetString(data);

    var request = JsonConvert.DeserializeObject<SimpleRequest>(body);
    return 200;
};
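
SimpleRequest itself isn’t shown above; it is assumed to be a plain POCO whose properties mirror the JSON fields (Json.NET matches them case-insensitively), roughly like this:

// Hypothetical POCO for the JSON body above; the property names are assumptions.
public class SimpleRequest
{
    public string username { get; set; }
    public string isAdmin { get; set; }
}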

If the data was posted from a form, for example, and sent in the following format in the body

username=volkan&isAdmin=sure!

then we could simply convert it to a dictionary with a little bit of LINQ:

Post["/"] = parameters =>
{
    var id = this.Request.Body;
    long length = this.Request.Body.Length;
    byte[] data = new byte[length];
    id.Read(data, 0, (int)length);
    string body = System.Text.Encoding.Default.GetString(data);
    var p = body.Split('&')
        .Select(s => s.Split('='))
        .ToDictionary(k => k.ElementAt(0), v => v.ElementAt(1));

    if (p["username"] == "volkan")
        return "awesome!";
    else
        return "meh!";
};

This is nice, but it’s a lot of work to read the whole body and deserialize it manually! Fortunately Nancy supports model binding. First we need to add a using statement, as the Bind extension method lives in Nancy.ModelBinding:

using Nancy.ModelBinding;

Now we can simplify the code by the help of model binding:

Post["/"] = _ =>
{
    var request = this.Bind<SimpleRequest>();
    return request.username;
};

The important thing to note is to send the data with the appropriate content type. For the form data example the request should be like this:

POST http://localhost:12345/ HTTP/1.1
User-Agent: Fiddler
Host: localhost:12345
Content-Length: 29
Content-Type: application/x-www-form-urlencoded

username=volkan&isAdmin=sure!

It also works for binding JSON to the same POCO.

Preparing responses

Nancy is very flexible in terms of responses. As shown in the above examples you can return a string

Post["/"] = _ =>
{
    return "This is a valid response";
};

which would yield this HTTP message on the wire:

HTTP/1.1 200 OK
Content-Type: text/html
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 10 Nov 2015 15:48:12 GMT
Content-Length: 24

This is a valid response

Response code is set to 200 - OK automatically and the text is sent in the response body.

We can just set the code and return a response with a simple one-liner:

Post["/"] = _ => 405;

which would produce:

HTTP/1.1 405 Method Not Allowed
Content-Type: text/html
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 10 Nov 2015 15:51:36 GMT
Content-Length: 0

To prepare more complex responses with headers and everything we can construct a new Response object like this:

Post["/"] = _ =>
{
    string jsonString = "{ username: \"admin\", password: \"just kidding\" }";
    byte[] jsonBytes = Encoding.UTF8.GetBytes(jsonString);

    return new Response()
    {
        StatusCode = HttpStatusCode.OK,
        ContentType = "application/json",
        ReasonPhrase = "Because why not!",
        Headers = new Dictionary<string, string>()
        {
            { "Content-Type", "application/json" },
            { "X-Custom-Header", "Sup?" }
        },
        Contents = c => c.Write(jsonBytes, 0, jsonBytes.Length)
    };
};

and we would get this at the other end of the line:

HTTP/1.1 200 Because why not!
Content-Type: application/json
Server: Microsoft-HTTPAPI/2.0
X-Custom-Header: Sup?
Date: Tue, 10 Nov 2015 16:09:19 GMT
Content-Length: 47

{ username: "admin", password: "just kidding" }

Response also comes with a lot of useful methods like AsJson, AsXml and AsRedirect. For example we could simplify returning a JSON response like this:

Post["/"] = _ =>
{
    return Response.AsJson<SimpleResponse>(
        new SimpleResponse()
        {
            Status = "A-OK!", ErrorCode = 1, Description = "All systems are go!"
        });
};

and the result would contain the appropriate header and status code:

HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 10 Nov 2015 16:19:18 GMT
Content-Length: 68

{"status":"A-OK!","errorCode":1,"description":"All systems are go!"}

One extension I like is the AsRedirect method. The following example would return Google search results for a given parameter:

Get["/search"] = parameters =>
{
    string s = this.Request.Query["q"];
    return Response.AsRedirect($"http://www.google.com/search?q={s}");
};

HTTPS

What if we needed to support HTTPS for our tests for some reason? Fear not, Nancy covers that too. By default, if we just try to use HTTPS by changing the protocol we would get this exception:

The connection to ‘localhost’ failed. System.Security.SecurityException Failed to negotiate HTTPS connection with server.fiddler.network.https HTTPS handshake to localhost (for #2) failed. System.IO.IOException Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.

The solution is to create a self-signed certificate and register it with the netsh http add sslcert command. Here’s the step-by-step process:

  1. Create a self-signed certificate: Open a Visual Studio command prompt and enter the following command:

     makecert nancy.cer

     You can provide more properties so that the certificate gets a name that makes sense; see the makecert documentation on MSDN for the full list of options.

  2. Run mmc and add the Certificates snap-in. Make sure to select Computer Account.

     I selected My User Account at first and it gave the following error:

     SSL Certificate add failed, Error: 1312 A specified logon session does not exist. It may already have been terminated.

     In that case the solution is simply to drag and drop the certificate to the computer account.

  3. Right-click on Certificates (Local Computer) -> Personal -> Certificates, select All Tasks -> Import and browse to the nancy.cer file created in step 1.

  4. Double-click on the certificate, switch to the Details tab, scroll to the bottom and copy the Thumbprint value (and remove the spaces after copying it).

  5. Now enter the following commands. The first one is the same as before, just with HTTPS as the protocol. The second command adds the certificate we’ve just created:

     netsh http add urlacl url=https://+:12345/ user="Everyone"

     netsh http add sslcert ipport=0.0.0.0:12345 certhash=653a1c60d4daaae00b2a103f242eac965ca21bec appid={A0DEC7A4-CF28-42FD-9B85-AFFDDD4FDD0F} clientcertnegotiation=enable

Here appid can be any GUID.
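
On the Nancy side, the only change needed is to hand the host an https URI. A minimal sketch, assuming the same port and the HostConfiguration with automatic URL reservations from earlier:

// Sketch only: the certificate binding itself is done by the netsh commands above;
// the port must match the one used there.
var configuration = new HostConfiguration()
{
    UrlReservations = new UrlReservations() { CreateAutomatically = true }
};

var uri = new Uri("https://localhost:12345/");
_nancy = new NancyHost(configuration, uri);
_nancy.Start();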

Let’s take it out for a test drive:

Get["/"] = parameters =>
{
    return "Response over HTTPS! Weeee!";
};

This request

GET https://localhost:12345 HTTP/1.1
Host: localhost:12345


returns this response

HTTP/1.1 200 OK
Content-Type: text/html
Server: Microsoft-HTTPAPI/2.0
Date: Wed, 11 Nov 2015 10:24:58 GMT
Content-Length: 27

Response over HTTPS! Weeee!

Conclusion

There are a few alternatives when you need a small web server to test something locally, and Nancy is one of them. It’s easy to configure and use, and it’s lightweight. Apparently you can even host it on a Raspberry Pi!

devdesign csharp, mef, managed_extensibility_framework, plugin

In this post I will try to cover some of the basic concepts and features of MEF through a working example. In the future I’ll post more articles demonstrating MEF usage with more complex applications.

Background

Many successful and popular applications, such as Visual Studio, Eclipse and Sublime Text, support a plug-in model. Adopting a plugin-based model, whenever possible, has quite a few advantages:

  • Helps to keep the core lightweight instead of cramming all features into the same code-base.
  • Helps to make the application more robust: New functionality can be added without changing any existing code.
  • Helps to make development easier as different modules can be developed by different people simultaneously.
  • Allows plugin development without distributing the main application’s source code.

Extensibility is based on composition, and it is very helpful for building SOLID-compliant applications as it embraces the Open/Closed and Dependency Inversion principles.

MEF has been part of the .NET Framework since version 4.0 and lives inside the System.ComponentModel.Composition namespace. It is also the standard extension model used in Visual Studio. It is not meant to replace Inversion of Control (IoC) frameworks; rather, it is meant to simplify building extensible applications using dependency injection based on component composition.

Some terminology

Before diving into the sample, let’s look at some MEF terminology and core terms:

  • Part: Basic elements in MEF are called parts. Parts can provide services to other parts (exporting) and can consume other parts’ services (importing).

  • Container: This is the part that performs the composition. The most common one is the CompositionContainer class.

  • Catalog: In order to discover the parts, containers use catalogs. There are various catalogs supplied by MEF, such as:

    • AssemblyCatalog: Discovers attributed parts in a managed code assembly.
    • DirectoryCatalog: Discovers attributed parts in the assemblies in a specified directory.
    • AggregateCatalog: A catalog that combines the elements of ComposablePartCatalog objects.
    • ApplicationCatalog: Discovers attributed parts in the dynamic link library (DLL) and EXE files in an application’s directory and path
  • Export / import: The way the plugins make themselves discoverable is by exporting their implementation of a contract. A contract is simply a common interface that both the application and the plugins understand, so that they speak the same language (see the minimal sketch below).
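
To make these terms concrete, here is a minimal, self-contained sketch (the types and names here are illustrative only, not part of the sample project) showing a single exported part being discovered through an AssemblyCatalog and composed by a CompositionContainer:

using System;
using System.ComponentModel.Composition;          // Export / Import attributes, ComposeParts
using System.ComponentModel.Composition.Hosting;  // catalogs and CompositionContainer

// The contract shared by the host and the parts
public interface IGreeter
{
    string Greet();
}

// A part that exports an implementation of the contract
[Export(typeof(IGreeter))]
public class HelloGreeter : IGreeter
{
    public string Greet()
    {
        return "Hello from a composed part!";
    }
}

public class Host
{
    // To be filled in by the composition engine
    [Import]
    public IGreeter Greeter { get; set; }

    public static void Main()
    {
        // Discover parts in the current assembly and compose this instance
        var catalog = new AssemblyCatalog(typeof(Host).Assembly);
        using (var container = new CompositionContainer(catalog))
        {
            var host = new Host();
            container.ComposeParts(host);
            Console.WriteLine(host.Greeter.Greet());
        }
    }
}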

Sample Project

As I learn best by playing around, I decided to start with a simple project. I recently published a sample project for the Strategy design pattern, which I blogged about here. In this post I will use the same project and convert it into a plugin-based version.

IP Checker with MEF v1: Bare Essentials

At this point we have everything we need for the first version of the plugin-based IP checker. Firstly, I divided my project into 5 parts:

  • IPCheckerWithMEF.Lab: The consumer application
  • IPCheckerWithMEF.Contract: Project containing the common interface
  • Plugins: Extensions for the main application
    • IPCheckerWithMEF.Plugins.AwsIPChecker
    • IPCheckerWithMEF.Plugins.CustomIPChecker
    • IPCheckerWithMEF.Plugins.DynDnsIPChecker

I set the output folder of the plugins to a directory called Plugins at the project level.

Let’s see some code!

For this basic version we need 3 things:

  • A container to handle the composition.
  • A catalog that the container can use to discover the plugins.
  • A way to tell which classes should be discovered and imported

In this sample I used a DirectoryCatalog that points to the output folder of the plugin projects. So after adding the required parts above the main application shaped up to be something like this:

public class MainApplication
{
    private CompositionContainer _container;

    [ImportMany(typeof(IIpChecker))]
    public List<IIpChecker> IpCheckerList { get; set; }

    public MainApplication(string pluginFolder)
    {
        var catalog = new DirectoryCatalog(pluginFolder);
        _container = new CompositionContainer(catalog);

        LoadPlugins();
    }

    public void LoadPlugins()
    {
        try
        {
            _container.ComposeParts(this);
        }
        catch (CompositionException compositionException)
        {
            Console.WriteLine(compositionException.ToString());
        }
    }
}

In the constructor, it instantiates a DirectoryCatalog with the given path and passes it to the container. The container imports the IIpChecker objects found in the assemblies inside that folder. Note that we didn’t do anything about IpCheckerList: by decorating it with the ImportMany attribute we declared that it is to be filled by the composition engine. In this example we can only use ImportMany, as opposed to Import, which looks for a single part to compose; if we used Import here we would get a CompositionException because more than one matching export exists.

Now, to complete the circle, we need to export our plugins with the Export attribute, like so:

[Export(typeof(IIpChecker))]
public class AwsIPChecker : IIpChecker
{
    public string GetExternalIp()
    {
        // ...
    }
}

Alternatively, we can use the InheritedExport attribute on the interface to export any class that implements IIpChecker:

[InheritedExport(typeof(IIpChecker))]
public interface IIpChecker
{
    string GetExternalIp();
}

This way the plugins would still be discovered even if they weren’t decorated with the Export attribute, thanks to this inheritance model.

Putting it together

Now that we’ve seen the plugins that export the implementation and the part that discovers and imports them, let’s see them all in action:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Starting the main application");

        string pluginFolder = @"..\..\..\Plugins\";
        var app = new MainApplication(pluginFolder);

        Console.WriteLine($"{app.IpCheckerList.Count} plugin(s) loaded..");
        Console.WriteLine("Executing all plugins...");

        foreach (var ipChecker in app.IpCheckerList)
        {
            Console.WriteLine(ObfuscateIP(ipChecker.GetExternalIp()));
        }
    }

    private static string ObfuscateIP(string actualIp)
    {
        return Regex.Replace(actualIp, "[0-9]", "*");
    }
}

We create the consumer application, which loads all the plugins in the directory we specify, and then loop over and execute all of them.

So far so good. Now, let’s try to export some metadata about our plugins so that we can display the loaded plugins to the user.

IP Checker with MEF v2: Metadata comes into play

In almost all applications plugins come with some sort of information so that the user can identify which ones have been installed and what they do. To export the extra data let’s add a new interface:

public interface IPluginInfo
{
    string DisplayName { get; }
    string Description { get; }
    string Version { get; }
}

In the plugins we fill in that data and export it using the ExportMetadata attribute:

[Export(typeof(IIpChecker))]
[ExportMetadata("DisplayName", "Custom IP Checker")]
[ExportMetadata("Description", "Uses homebrew service developed with Node.js and hosted on Heroku")]
[ExportMetadata("Version", "2.1")]
public class CustomIpChecker : IIpChecker
{
    // ...
}

In v1, we only imported a list of objects implementing IIpChecker. So how do we accommodate this new piece of information? In order to do that we have to change the way we import the plugins and use the Lazy construct:

[ImportMany]
public List<Lazy<IIpChecker, IPluginInfo>> Plugins { get; set; }

According to MSDN this is mandatory to get metadata out of plugins:

The importing part can use this data to decide which exports to use, or to gather information about an export without having to construct it. For this reason, an import must be lazy to use metadata

So let’s load and display this new plugin information:

private static void PrintPluginInfo()
{
    Console.WriteLine($"{_app.Plugins.Count} plugin(s) loaded..");
    Console.WriteLine("Displaying plugin info...");
    Console.WriteLine();

    foreach (var ipChecker in _app.Plugins)
    {
        Console.WriteLine("----------------------------------------");
        Console.WriteLine($"Name: {ipChecker.Metadata.DisplayName}");
        Console.WriteLine($"Description: {ipChecker.Metadata.Description}");
        Console.WriteLine($"Version: {ipChecker.Metadata.Version}");
    }
}

Notice that we access the metadata through [PluginName].Metadata.[PropertyName] properties. To access the actual plugin and call the exported methods we have to use [PluginName].Value such as:

foreach (var ipChecker in _app.Plugins)
{
    ipChecker.Value.GetExternalIp();
}

Managing the plugins

What if we want to add or remove plugins at runtime? We can do it without restarting the application by refreshing the catalog and calling the container’s ComposeParts method again.

In this sample application I added a FileSystemWatcher that listens to the Created and Deleted events on the Plugins folder and calls the LoadPlugins method of the application when an event fires. LoadPlugins first refreshes the catalog and composes the parts:

public void LoadPlugins()
{
    try
    {
        _catalog.Refresh();
        _container.ComposeParts(this);
    }
    catch (CompositionException compositionException)
    {
        Console.WriteLine(compositionException.ToString());
    }
}

But making this change alone isn’t sufficient; we would end up getting a CompositionException.

By default recomposition is disabled so we have to specify it explicitly while importing parts:

[ImportMany(AllowRecomposition = true)]
public List<Lazy<IIpChecker, IPluginInfo>> Plugins { get; set; }

After these changes, the final version of the composing class looks like this:

public class MainApplication
{
    private CompositionContainer _container;
    private DirectoryCatalog _catalog;

    [ImportMany(AllowRecomposition = true)]
    public List<Lazy<IIpChecker, IPluginInfo>> Plugins { get; set; }

    public MainApplication(string pluginFolder)
    {
        _catalog = new DirectoryCatalog(pluginFolder);
        _container = new CompositionContainer(_catalog);

        LoadPlugins();
    }

    public void LoadPlugins()
    {
        try
        {
            _catalog.Refresh();
            _container.ComposeParts(this);
        }
        catch (CompositionException compositionException)
        {
            Console.WriteLine(compositionException.ToString());
        }
    }
}

and the client app:

class Program
{
    private static readonly string _pluginFolder = @"..\..\..\Plugins\";
    private static FileSystemWatcher _pluginWatcher;
    private static MainApplication _app;

    static void Main(string[] args)
    {
        Console.WriteLine("Starting the main application");

        _pluginWatcher = new FileSystemWatcher(_pluginFolder);
        _pluginWatcher.Created += PluginWatcher_FolderUpdated;
        _pluginWatcher.Deleted += PluginWatcher_FolderUpdated;
        _pluginWatcher.EnableRaisingEvents = true;

        _app = new MainApplication(_pluginFolder);

        PrintPluginInfo();

        Console.ReadLine();
    }

    private static void PrintPluginInfo()
    {
        Console.WriteLine($"{_app.Plugins.Count} plugin(s) loaded..");
        Console.WriteLine("Displaying plugin info...");
        Console.WriteLine();

        foreach (var ipChecker in _app.Plugins)
        {
            Console.WriteLine("----------------------------------------");
            Console.WriteLine($"Name: {ipChecker.Metadata.DisplayName}");
            Console.WriteLine($"Description: {ipChecker.Metadata.Description}");
            Console.WriteLine($"Version: {ipChecker.Metadata.Version}");
        }
    }

    private static void PluginWatcher_FolderUpdated(object sender, FileSystemEventArgs e)
    {
        Console.WriteLine();
        Console.WriteLine("====================================");
        Console.WriteLine("Folder changed. Reloading plugins...");
        Console.WriteLine();
        
        _app.LoadPlugins();

        PrintPluginInfo();
    }
}

After these changes I started the application with 2 plugins in the target folder and added a 3rd one while it was running; the watcher picked up the change, reloaded the plugins and printed the updated plugin info.

It also works the same way for deleted plugins but not for updates because the assemblies are locked by .NET. Adding new plugins at runtime is painless but removing and updating would require more attention as the plugin might be running at the time.

designdev design_patterns, csharp

A few days ago I published a post discussing the Factory Method pattern. This article is about the other factory design pattern: Abstract Factory.

Use case: Switching between configuration sources easily

Imagine a C# application in which you access ConfigurationManager.AppSettings whenever you need a value from the configuration. This essentially hardcodes the configuration source, and it would be hard to change if you needed to switch to another configuration source (a database, a web service, etc.). A nicer way would be to “outsource” the creation of the configuration source to another class.
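
In other words, configuration access like the following sketch (the class name is purely illustrative) ends up scattered across the code base, with every caller tied directly to App.config:

using System.Configuration;

// Hypothetical consumer class, hardwired to App.config
public class ArticleApiClient
{
    public void Connect()
    {
        // Every caller reaches straight into the App.config app settings
        string apiKey = ConfigurationManager.AppSettings["ApiKey"];
        string endpoint = ConfigurationManager.AppSettings["ApiEndPoint"];
        // ... use apiKey and endpoint to call the API ...
    }
}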

What is Abstract Factory?

Here’s the official definition from GoF:

Provide an interface for creating families of related or dependent objects without specifying their concrete classes.

Implementation

The application first composes the main class (ArticleFeedGenerator) with the services it will use and starts the process.

static void Main(string[] args)
{
    IConfigurationFactory configFactory = new AppConfigConfigurationFactory();

    IApiSettings apiSettings = configFactory.GetApiSettings();
    IFeedSettings feedSettings = configFactory.GetFeedSettings();
    IFeedServiceSettings feedServiceSettings = configFactory.GetFeedServiceSettings();
    IS3PublisherSettings s3PublishSettings = configFactory.GetS3PublisherSettings();
    IOfflineClientSettings offlineClientSettings = configFactory.GetOfflineClientSettings();

    var client = new OfflineClient(offlineClientSettings);
    var articleFeedService = new ArticleFeedService(feedServiceSettings);
    var publishService = new S3PublishService(s3PublishSettings, feedSettings);
    var feedGenerator = new ArticleFeedGenerator(client, articleFeedService, publishService, feedSettings);

    feedGenerator.Run();
}

This version uses AppConfigConfigurationFactory to get the values from App.config. When I need to switch to DynamoDB, which I also implemented in this example, all I have to do is replace one line of code in the application:

var configFactory = new DynamoDBConfigurationFactory();

With this change alone we are essentially replacing a whole family of related classes.

On the factory floor

The abstract factory and the concrete factories that implement it are shown below.
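
Judging by the factory methods called in Main above, the IConfigurationFactory contract presumably looks something like this (the method list is inferred from the client code and may differ from the actual project):

// Abstract factory contract, inferred from the calls in Main above.
public interface IConfigurationFactory
{
    IApiSettings GetApiSettings();
    IFeedSettings GetFeedSettings();
    IFeedServiceSettings GetFeedServiceSettings();
    IS3PublisherSettings GetS3PublisherSettings();
    IOfflineClientSettings GetOfflineClientSettings();
}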

Concrete configuration factories create the classes that deal with specific configuration values (concrete products). For instance AppConfigConfigurationFactory looks like this (simplified for brevity):

public class AppConfigConfigurationFactory : IConfigurationFactory
{
    public IApiSettings GetApiSettings()
    {
        return new AppConfigApiSettings();
    }

    public IFeedServiceSettings GetFeedServiceSettings()
    {
        return new AppConfigFeedServiceSettings();
    }
}

Similarly, DynamoDBConfigurationFactory is responsible for creating concrete classes that access DynamoDB values:

public class DynamoDBConfigurationFactory : IConfigurationFactory
{
    protected Table _configTable;
    
    public DynamoDBConfigurationFactory()
    {
        AmazonDynamoDBClient dynamoClient = new AmazonDynamoDBClient("accessKey", "secretKey", RegionEndpoint.EUWest1);
        _configTable = Table.LoadTable(dynamoClient, "tableName");
    }
    
    public IApiSettings GetApiSettings()
    {
        return new DynamoDBApiSettings(_configTable);
    }

    public IFeedServiceSettings GetFeedServiceSettings()
    {
        return new DynamoDBFeedServiceSettings(_configTable);
    }
}

Notice that all the “concrete products” implement the same “abstract product” interfaces, and hence they are interchangeable. With the product classes in the picture, the design now covers both the factories and the concrete settings classes they create.

Finally, let’s have a look at the concrete objects that carry out the actual job. For example, the IApiSettings interface exposes two string properties:

public interface IApiSettings
{
    string ApiKey { get; }
    string ApiEndPoint { get; }
}

If we want to read these values from App.config it’s very straightforward:

public class AppConfigApiSettings : IApiSettings
{
    public string ApiKey
    {
        get { return ConfigurationManager.AppSettings["ApiKey"]; }
    }

    public string ApiEndPoint
    {
        get { return ConfigurationManager.AppSettings["ApiEndPoint"]; }
    }
}

The DynamoDB version is a bit more complex, but it makes no difference from the consumer’s point of view. Here GetValue is a method in the base class that returns the value from the encapsulated Amazon.DynamoDBv2.DocumentModel.Table object.

public class DynamoDBApiSettings : DynamoDBSettingsBase, IApiSettings
{
    public DynamoDBApiSettings(Table configTable)
        : base (configTable)
    {
    }

    public string ApiKey
    {
        get { return GetValue("ApiKey"); }
    }

    public string ApiEndPoint
    {
        get { return GetValue("ApiEndPoint"); }
    }
}
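
DynamoDBSettingsBase itself could be as simple as the following sketch; the synchronous GetItem call and the "Value" attribute name are assumptions about the table layout and SDK version, so treat it as illustrative only:

using Amazon.DynamoDBv2.DocumentModel;

// Hypothetical base class encapsulating the config table lookups
public abstract class DynamoDBSettingsBase
{
    private readonly Table _configTable;

    protected DynamoDBSettingsBase(Table configTable)
    {
        _configTable = configTable;
    }

    protected string GetValue(string key)
    {
        // Assumption: the table is keyed by the setting name and stores the
        // setting in a "Value" attribute.
        Document item = _configTable.GetItem(key);
        return item == null ? null : item["Value"].AsString();
    }
}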

The concrete factory is responsible for creating the concrete classes it uses, so the client is completely oblivious to classes such as DynamoDBApiSettings or AppConfigApiSettings. This means we can add a whole new set of configuration classes (e.g. for a web service), and all we have to change in the client code is the one line where we instantiate the concrete factory.

This approach also allows us to be more flexible with the concrete class implementations. For example, the DynamoDB config class family requires a Table object in their constructors. To avoid code repetition I derived them all from a base class and moved the table there, but that doesn’t change anything in the client code.
