Game

I’m not much of a gamer but I recently discovered this game and I like it quite a lot. Not surprising, of course, given that I like everything about The Simpsons. The game itself is actually very simple and lacks challenge, so if it weren’t about The Simpsons there wouldn’t be any compelling reason for me to keep playing.

The story: Springfield explodes because of a failure in the nuclear power plant (not surprisingly caused by Homer). The game starts with Homer and Lisa trying to rebuild the new Springfield. You unlock characters as you complete tasks, and you build houses and collect taxes to buy more items to decorate the town. Here’s what my Springfield looks like:

Simpsons Tapped Out

Simpsons Tapped Out

I like the fact that you can just assign tasks to the characters, and after 8 or 12 hours you tap them to collect the money and experience points (required to pass levels) they’ve earned. So you don’t have to play constantly.

There are lots of premium items available but I find the pricing too expensive, so I’m just playing for free. For example, you can buy 300 donuts for £14, and for that many donuts you can only buy a few premium characters (Otto and Professor Frink, for instance). For that money I think you should be able to buy everything in the game. Still, I’m happy with the free Springfield I built.

Gadget, Security

One of the online shows I enjoy is Hak5’s podcast (http://hak5.org). Hak5 also manufactures tools for penetration testers, and the WiFi Pineapple (https://wifipineapple.com/) is one of those devices. It is a “hotspot honeypot” and its most powerful feature is something called a Karma attack.

What is a Karma Attack?

Simply put, our wireless devices constantly send out probe requests, searching for the networks they “know” so they can re-associate. Normally, all APs that don’t have the SSID being probed for simply ignore these packets. But not the WiFi Pineapple! It runs a modified firmware that replies to all probe requests, claiming to be the network our device is looking for. The modified firmware is called Jasager (“yes-man” in German), which explains a lot, I think.

Build or Buy One

The base WiFi Pineapple costs $99. You can buy one here: http://hakshop.myshopify.com/collections/gadgets/products/wifi-pineapple

Wi-Fi Pineapple

If you like getting your hands dirty and digging deeper, you can build one on your own; the firmware is a free download. The router inside the WiFi Pineapple is an Alfa AP121U, which costs around £40, or you can go with the bare board, which costs around £20 (here on Amazon). You also need to flash it via the serial port, which requires a USB TTL cable (here on Amazon). They have a great step-by-step tutorial (see References down below). After following the instructions you can have your own homemade WiFi Pineapple within 20 minutes.

So what is the risk?

If you have a habit of using unsecured wireless networks, then you are at risk. As most devices try to reconnect to previous networks automatically by default, there is a chance you’ll connect to the attacker’s AP, since it pretends to be the friendly old network you used to be connected to. The good news is that the Pineapple doesn’t support the Karma attack against protected networks. So if you manage to stay away from open networks, you are off the hook. Still, it doesn’t hurt to be careful and watch closely what you are connecting to.

Resources

Cloud Computing, System Administration

Evernote has recently been hacked. Dropbox has been hacked many times. Who knows what’s going on in the other services we are using. So I decided to phase out my cloud service providers and create my own cloud with ownCloud. There are a bunch of ways of running this tool. For instance, you can just download a VM image with everything installed. I decided to start from scratch and perform a manual installation on a new Ubuntu server. It’s very easy. First we need to install the dependencies:

apt-get install apache2 php5 php5-gd php-xml-parser php5-intl
apt-get install php5-sqlite php5-mysql smbclient curl libcurl3 php5-curl

Then extract the downloaded compressed file:

tar -xjf path/to/downloaded/owncloud-x.x.x.tar.bz2
cp -r owncloud /path/to/your/webserver

Set the directory permissions:

chown -R www-data:www-data /path/to/your/owncloud/

Enable .htaccess by setting AllowOverride to “All” for the /var/www directory in the Apache config, which is /etc/apache2/sites-enabled/000-default on Ubuntu.
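The relevant block ends up looking something like this (a sketch based on the stock Ubuntu config of the time; your file may differ):

<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
</Directory>

Finally, enable the rewrite and headers modules: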

a2enmod rewrite
a2enmod headers
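Then restart Apache for the changes to take effect:

service apache2 restart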

I got these instructions from the Admin Manual, which can be found here: ownCloud Admin Manual. It’s quite straightforward. Then all we have to do is navigate to the login page, create an admin account and start uploading files:

Own Cloud

My favourite features are:

  • Ability to share password-protected links with specific users
  • Ability to set an expiry date on shared files
  • Ability to sync multiple local folders (it doesn’t have to mimic the server’s directory structure; you can select and map separate folders)
  • Plugin support. A simple note-taking plugin is quite helpful for taking and syncing notes. I also installed the YubiAuth plugin, which supposedly enables using my Yubikey with it, but I couldn’t make it work yet.

My only negative observation is that the SMTP settings didn’t work. When I tried to send someone a link to a shared file I got a bizarre error, and on their forums I saw other people having similar problems. To me it’s not a crucial issue (as a single user, who am I going to mail anyway?) but for an organization it could quickly become annoying.

Gadget, Programming

To me a technology that enables you to collect data about your brain activity sounds fascinating. It always felt like Sci-Fi and unreachable. So when I heard about the affordable MindWave I immediately ordered it.

MindWave Mobile

This gizmo is manufactured by a company called NeuroSky that focuses on brainwave technologies. I bought the MindWave Mobile version as it supports mobile devices, which increases the possibilities of creating something cool. The best thing about it is that it comes with an SDK so you can develop your own applications for the platform. To get more info about the SDK visit http://developer.neurosky.com. They even have an app store where you can sell your applications, but the developer program costs $1500 so I don’t think I’ll sign up for that for quite a while.

How does it work?

The gadget communicates via Bluetooth. It supports lots of platforms and comes with an API ported to different languages. I preferred .NET and it worked without any problems. The real power of the device comes from the ThinkGear chipset; the API lets the developer get readings from it. When you install the software bundled with the device, it installs the ThinkGear connector and a bunch of games. The first thing to do is pair the headset with your PC or iOS/Android device. Frankly, I didn’t quite like the applications that come with it, but that’s not too important. After all, I bought this thing to write my own programs against it.

NeuroSky

The tutorial application, on the other hand, is very useful for testing the device and connection status.

Developing with MindWave

The starting point is definitely here: http://developer.neurosky.com/

The site steers the user very well so that you can select your goals and start developing right away. The API is actually quite easy to use. After you connect, you start receiving values from the sensor. In the .NET wrapper, the values are encapsulated in a class called ThinkGearState, which looks like this (I got this from its metadata):

public class ThinkGearState
{
    public float Alpha1;
    public float Alpha2;
    public float Attention;
    public float Battery;
    public float Beta1;
    public float Beta2;
    public float BlinkStrength;
    public float Delta;
    public bool Error;
    public float Gamma1;
    public float Gamma2;
    public float Meditation;
    public int PacketsRead;
    public float PoorSignal;
    public float Raw;
    public float Theta;
    public int Version;

    public ThinkGearState();

    public override string ToString();
}

The key fields for me are Attention and Meditation. BlinkStrength is also interesting. If you blink intentionally and strongly, the value wanders around 150 – 200; for the normal blinks we do quite often, it is around 50 – 60. So it is easy to tell when someone blinks deliberately. I wondered if this could be used as a communication method for Hector Salamanca in Breaking Bad: instead of ringing a bell he could just blink. Admittedly it wouldn’t provide any extra functionality, but it would look much cooler.

Breaking Bad Hector Salamanca
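Based on those observations, blink detection takes only a few lines. How you receive state updates depends on the wrapper, so assume the method below is wired up as a callback; the thresholds are just the rough values I measured above:

// Hypothetical callback, invoked whenever a new ThinkGearState arrives.
void OnStateUpdated(ThinkGearState state)
{
    if (state.Error || state.PoorSignal > 0)
    {
        return; // skip invalid or noisy readings
    }

    if (state.BlinkStrength >= 150)
    {
        Console.WriteLine("Intentional blink detected");
    }
    else if (state.BlinkStrength >= 50)
    {
        Console.WriteLine("Normal blink");
    }
}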

I don’t know how the Attention and Meditation values are calculated. The device also returns values for the various brain waves such as alpha, beta, theta, gamma and delta. I had no clue what these meant so here’s what I’ve learned from here and here.

  • Alpha: Increases in the state of physical and mental relaxation
  • Beta: Increases when we are consciously alert, or we feel agitated, tense, afraid
  • Theta: Shows the state of reduced consciousness
  • Delta: Increases when there is unconsciousness, deep sleep or catalepsy
  • Gamma: These waves are associated with peak concentration and extremely high levels of cognitive functioning

I don’t know why the Alpha, Beta and Gamma waves return 2 values whereas Delta and Theta have only 1. As my knowledge on this subject is almost zero, I’ll just concentrate on the already-calculated Attention and Meditation values. I’ll try to develop a project using this gizmo and post it when it is ready. I think it’s very cool to have the ability to measure brain waves and write programs using those values. I guess the only problem for me is that I already wear a wireless headset constantly, so it’s a bit hard to have them both on my head!

Programming, Raspberry Pi

I was going to write myself a desktop notification user control. I was planning it to be a simple window popping up when an event occurred. Before investing time and effort into this, I decided to look around for a similar project to build on. Unfortunately I couldn’t find anything to my liking, but I discovered Growl. It has all the features you might expect from a desktop notification tool. One additional feature that pleasantly surprised me is that you can send notifications to another machine over the network. This sounds good to me as I’ve been working on running my applications on a Raspberry Pi using Mono lately. So the idea is to run the program on my Pi and receive the notifications on my desktop, where I spend most of my time. Another benefit of Growl is that it is open source; the code can be found here: https://code.google.com/p/growl-for-windows/

I don’t like my programs to be dependent on external software that needs to be installed on the client machine, but I figured this could be optional: desktop notifications can be one communication channel, and others can be added if necessary. So it is not a dependency but rather an enhancement in functionality. Also, a well-known pitfall in software development is the anti-pattern described as Not Invented Here. One simply cannot develop every piece of software needed to build complex systems; it’s not feasible. Of course when I write code on my own, my main goal is to learn something new, but I still like to get results and produce working software, so best practices for commercial software still apply. Having convinced myself to use Growl for messaging, I started looking for ways to integrate it with my application. It comes with a .NET SDK which is quite easy to use.

There are two assemblies that need to be referenced:

  • Growl.CoreLibrary.dll
  • Growl.Connector.dll

The interesting bits are in the Growl.Connector library. First you need to create an instance of the GrowlConnector class. You can specify the remote hostname and password to send the notifications over the network, which is what I wanted to do:

this.growl = new GrowlConnector("password", "192.168.1.64", GrowlConnector.TCP_PORT);

Next, you have to register the application. If it is not registered, Growl will discard notifications coming from this source:

this.application = new Growl.Connector.Application("Test notifier from ROHAN");
this.notificationType = new NotificationType(sampleNotificationType, "Sample Notification");
this.growl.Register(this.application, new NotificationType[] { notificationType });

The final step is to enable notifications over the network. By default Growl only accepts messages from the local machine.

Growl Security Settings

After the setup is completed, we can send a test notification with this simple piece of code:

string text = string.Format("DateTime: {0}", DateTime.Now.ToString("dd/MM/yyyy HH:mm"));
Notification notification = new Notification(this.application.Name, this.notificationType.Name, DateTime.Now.Ticks.ToString(), "Message from ROHAN", text);
this.growl.Notify(notification);

And the result is:

Growl Message

So far so good. With only a few lines of code we managed to send a desktop notification over the network. We could specify a callback method to handle responses from the Growl host, and we could also specify the encryption algorithm to enhance security. The last thing for me to test is to see it running on the Raspberry Pi. To do that I created a sample console application that looks like this:

using System;
using Growl.Connector;

namespace MonoWorkout
{
	class MainClass
	{
		public static void Main(string[] args)
		{
			GrowlConnector growl = new GrowlConnector("password", "192.168.1.64", GrowlConnector.TCP_PORT);
			growl.EncryptionAlgorithm = Cryptography.SymmetricAlgorithmType.PlainText;
			Growl.Connector.Application application = new Application("Test notifier from Raspberry Pi");
			NotificationType notificationType = new NotificationType("SAMPLE_NOTIFICATION", "Sample Notification");
			growl.Register(application, new NotificationType[] { notificationType });

			Console.WriteLine("Type message to generate notification");

			string message = string.Empty;
			while ((message = Console.ReadLine()) != null)
			{
				if (message == "q")
				{
					Console.WriteLine("Quitting program");
					break;
				}

				string text = string.Format("DateTime: {0} \t Message: {1}", DateTime.Now.ToString("dd/MM/yyyy HH:mm"), message);
				Notification notification = new Notification(application.Name, notificationType.Name, DateTime.Now.Ticks.ToString(), "Message from Raspberry Pi", text);
				growl.Notify(notification);
				Console.WriteLine("Notification sent");
			}
		}
	}
}

I ran the application but didn’t receive any notifications. I immediately ran Wireshark and could see the packets arriving at my desktop machine, so it is not a network or firewall issue. After Googling a little bit I found that there is a Mono branch in the source code. I downloaded it and replaced the binaries with their Mono counterparts. Tested again, but to no avail.

Growl_Message_Capture

When I send the message I can clearly see it in Wireshark, but I don’t know why Growl is rejecting it. I ran the same application on both a Windows 7 instance and the Raspberry Pi and captured the message packets. The outcome is interesting.

Here’s the message sent from Windows:

GNTP/1.0 NOTIFY NONE MD5:C5FB01D47A56832A17B3F941BC6F327F.3ECBA79D164DA5F8
Application-Name: Test notifier from Raspberry Pi
Notification-Name: SAMPLE_NOTIFICATION
Notification-ID: 634982701932496447
Notification-Title: Message from Raspberry Pi
Notification-Text: DateTime: 07/03/2013 16:23 	 Message: TEST_ROHAN
Notification-Sticky: No
Notification-Priority: 0
Notification-Coalescing-ID: 
Origin-Machine-Name: ROHAN
Origin-Software-Name: GrowlConnector
Origin-Software-Version: 2.0.0.0
Origin-Platform-Name: Microsoft Windows NT 6.1.7601 Service Pack 1
Origin-Platform-Version: 6.1.7601.65536
And this is the one coming from Raspberry Pi:
0'_`E{@P@ZTg<
	DGNTP/1.0 NOTIFY NONE MD5:45DF50ED8E166AE3AF39F0FEFFC36F5D.6EC74DE6BCD67A71
Application-Name: Test notifier from Raspberry Pi
Notification-Name: SAMPLE_NOTIFICATION
Notification-ID: 634982703645630380
Notification-Title: Message from Raspberry Pi
Notification-Text: DateTime: 07/03/2013 16:26 	 Message: TEST_RASPBERRYPI
Notification-Sticky: No
Notification-Priority: 0
Notification-Coalescing-ID: 
Origin-Machine-Name: raspberrypi
Origin-Software-Name: GrowlConnector
Origin-Software-Version: 2.0.0.0
Origin-Platform-Name: Unix 3.1.9.0
Origin-Platform-Version: 3.1.9.0

There is a 16-byte block at the beginning, and I believe that’s why Growl cannot parse the message and therefore ends up discarding it.

At this point, I’ll shelve this problem and look for alternative solutions. I hate leaving a problem unsolved like this, but it is not a crucial feature so I’d rather not invest too much time in it. So for now my official opinion is that, despite the Mono branch in the SVN, Growl doesn’t support Mono.

Programming, Raspberry Pi

As a developer, my initial plan was to develop something running on the Raspberry Pi. Unfortunately, being a .NET developer playing around with the Microsoft stack all the time, my arsenal for Linux development is very limited. Before I master Python, I wanted to run small applications using Mono. This would be a good chance to see how smoothly .NET programs run independently of the platform.

So I booted my Raspberry Pi with a Raspbian image (hard-float ABI) and installed the Mono runtime and the MonoDevelop IDE:

sudo apt-get update
sudo apt-get install mono-runtime
sudo apt-get install monodevelop

I eagerly launched MonoDevelop to write my first Hello World program on the Raspberry Pi and boom! I got the following error:

MonoDevelop Exception

The good old “Object reference not set to an instance of an object” exception!

After searching around I found out that Mono doesn’t run on the Raspbian image; it requires an image with a soft-float ABI. It turns out the soft-float version runs floating-point operations in software instead of on the FPU (Floating Point Unit), which is why it is slower than Raspbian. I quickly downloaded the soft-float image and tried to boot it up. This time I couldn’t even see the login screen: it got stuck at a stage saying “Waiting for /dev to be fully populated”. After some time it timed out and started giving errors.

Raspberry_SoftFloat_with_512MB

Having no idea what was going on, I consulted Google again and found out other people had the same problem. The proposed solution was to replace start.elf with the one from the hard-float image. I tried running it with the replaced elf file but got the same result. I had been doing all these experiments on my new Raspberry Pi, which has 512MB of RAM. Having failed where others seemed to succeed, I put the blame on the hardware and decided to try the same image on my old Pi. The result was promising: I could finally boot the Pi with the soft-float image. I installed the Mono runtime and MonoDevelop again, but it looks like MonoDevelop is above the Pi’s pay grade! It was so excruciatingly slow that I decided to create the sample project on my desktop PC and carry it over with a USB flash drive. I mounted the flash drive using the following commands (replace tosh with the directory name you want and make sure you’re mounting the correct device):

sudo mkdir /media/tosh
sudo mount -t vfat -o uid=pi,gid=pi /dev/sda1 /media/tosh/
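Running the program is then just a matter of invoking the Mono runtime on the copied executable (assuming the sample compiled to HelloWorld.exe):

cd /media/tosh/HelloWorld
mono HelloWorld.exe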

Here comes the moment of truth. Here’s the output:

Mono on Raspberry

The screen glares, but at the bottom you can see the glowing (by all means) phrase: Hello World! Of course, this is just the beginning. I’ll see how compatible and reliable the Mono framework is after I deploy more complex applications to the Raspberry Pi.

Programming

Today I came across an interesting namespace collision. I’m writing a library to wrap a 3rd-party API. Without getting into specifics, I’ll illustrate the situation with a sample piece of code. Let’s say we have a class called Test in the TestNamespace namespace.

namespace TestNamespace
{
    public class Test
    {
        public static void StaticMethod()
        {
        }
    }
}

and the calling class is something like this:

namespace DifferentNamespace.TestNamespace
{
    class Program
    {
        static void Main(string[] args)
        {
            TestNamespace.Test.StaticMethod();
        }
     }
}

This code doesn’t compile, because the compiler thinks “TestNamespace.Test” is actually “DifferentNamespace.TestNamespace.Test”.

Adding a using directive doesn’t help either. As it has the same name as a sub-namespace of the calling class, it always resolves to the calling class’s namespace. The solution is using the global namespace alias.

namespace DifferentNamespace.TestNamespace
{
    using TestNamespace = global::TestNamespace;

    class Program
    {
        static void Main(string[] args)
        {
            TestNamespace.Test.StaticMethod();
        }
    }
}

By explicitly specifying which TestNamespace we are referring to, we resolve the conflict. One thing to keep in mind is that we have to define the alias inside the namespace. If we put it outside DifferentNamespace.TestNamespace, then inside the namespace TestNamespace would still mean “DifferentNamespace.TestNamespace”.

Before this incident, I had never had to use the global keyword. Probably the best way to avoid this is naming conventions, but sometimes you may not be able to change a namespace name: you can break lots of things if other parties depend on that code. So every now and then this tip may come in handy, just like it did for me in this instance.

Productivity, Programming

Some code snippets are extremely helpful, like prop for properties and ctor for constructors. But writing a method always takes a relatively long time, as there is no snippet for methods. For good reason, I guess, as methods come in all different shapes and colours, but I think a snippet can save some time for simple methods. So I decided to create my own. Here’s how to do it in 3 simple steps:

STEP 01: Download the snippet designer from here: http://snippetdesigner.codeplex.com. Install it and restart Visual Studio. From the File –> New menu, select the Code Snippet file type.

STEP 02: Save the output of the snippet designer under the code snippets folder (%USERPROFILE%\Documents\Visual Studio 2012\Code Snippets\Visual C#\My Code Snippets). It looks like this:


<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
      <Title>MethodVoid</Title>
      <Author>dummy</Author>
      <Description>
      </Description>
      <HelpUrl>
      </HelpUrl>
      <Shortcut>method_void</Shortcut>
    </Header>
    <Snippet>
      <Declarations>
        <Literal Editable="false">
          <ID>Method1</ID>
          <ToolTip></ToolTip>
          <Default>
          </Default>
          <Function>
          </Function>
        </Literal>
      </Declarations>
      <Code Language="csharp"><![CDATA[public void MyMethod()
    {
    }]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>

STEP 03: Restart Visual Studio for the changes to take effect. I created a few snippets for simple methods returning primitive types; depending on your needs you can choose the optimum number of snippets. I especially like the test method snippet, which looks like this:

TestMethodSnippet

Another great feature of this tool is that you can export a selected text block as a snippet. All you have to do is right-click and select Export as Snippet, make the final touches in the editor and save.

Hardware, Networking

There aren’t too many reasons why someone would want to make their own Ethernet cables. Sheer fun and learning the nitty-gritty (and, in most cases, useless) details of how they are made are a couple. Also, as I have too many gadgets, making my own cables at any length I please is convenient and can save a few bucks in the long run. So let’s get started.

The toolkit

Cable:

CAT6: http://www.amazon.co.uk/gp/product/B002SQPDXS/ref=oh_details_o00_s00_i00

CAT5: http://www.amazon.co.uk/gp/product/B000HWY304/ref=wms_ohs_product

For the obvious reasons! Generally it’s best to get the latest version. I will use CAT6 all around my network, but I also bought CAT5e: I was anticipating some errors at the beginning, so I’d rather make them while wasting cheaper cable.

RJ45 Modular Connectors:

http://www.amazon.co.uk/gp/product/B004HTK30M/ref=oh_details_o01_s00_i00

I’ve watched a lot of tutorial videos. When an expert is showing it, it looks quite easy, but I quickly found out it’s not. Aligning all 8 wires and placing them in the correct order is not as easy as it seems. So I definitely recommend the two-piece connectors that come with a “guide”: a small piece that lets you insert all the wires relatively easily, after which you insert the whole block into the connector.

RJ45 Crimp Tool: http://www.amazon.co.uk/gp/product/B004J02DRU/ref=oh_details_o00_s00_i00

The set I bought comes with a crimper and a cutter. It looks good enough to do the job and is quite cheap. After the wires are inserted in the connector, the crimper is used to press the contacts in so they bite into the wires.

Tester:

http://www.amazon.co.uk/gp/product/B007CJUEDA/ref=wms_ohs_product

Although it is easy to test a cable by connecting it to a machine, it is generally recommended to use a tester. I guess it makes sense, especially when you have to make lots of cables. I bought one for £4, so I guess it’s a good deal. The downside is that it doesn’t support CAT6, which I didn’t notice at the time of ordering. If manual testing doesn’t prove helpful, I’ll consider buying a better product.

Boots:

http://www.amazon.co.uk/gp/product/B009EPCOP6/ref=wms_ohs_product

These are plastic sleeves that go over the connector. They help keep the connector clip from getting broken.

CableStuff

Technical Details

CAT5 vs. CAT5e vs. CAT6: The difference is that CAT6 supports 1000Base-T/1000Base-TX (Gigabit Ethernet), while CAT5 and CAT5e support 10Base-T/100Base-TX (maximum 100Mbit/s). CAT5e is an improvement over CAT5: it introduces new crosstalk specifications. (Crosstalk means a signal creating a detrimental effect on another channel.)

T568A vs. T568B: The order of the wires matters and it has to be the same on both ends. These specific orders are named T568A and T568B. From what I understand you can use either one as long as you use it on both ends, but all the resources I’ve found favoured T568B, so I’ll use that one as well. (For reference, T568B from pin 1 to 8 is: white/orange, orange, white/green, blue, white/blue, green, white/brown, brown.)

Crossover cable: When you wire one end as T568A and the other as T568B, it becomes a crossover cable (regular ones are called patch cables). Crossover cables are used to connect two computers directly, instead of connecting a computer to a switch or router. I’ll test creating one of these as well.

Action!

I think we have everything ready to get started. Here’s what I did step-by-step:

  1. Cut the required length of cable.
  2. Remove the outer jacket.
  3. Arrange the wires by referring to the wiring standard (T568B) and insert them into the guide of the connector.
  4. Insert the guide into the connector.
  5. Insert the connector to the crimp tool and press it firmly.
  6. Repeat steps 1–5 for the other end.

10 minutes later I had my first homebrew CAT5 cable:

CAT5

Let’s use the tester to verify we did a good job. Using the tester is quite simple: just plug both ends into the device. If the lights blink from 1 to 8 in the same order on both sides, we are good. I tested it with a broken cable too; in that case no lights came on, so it’s easy to see whether a cable works or not.

Cable Tester

CAT6 is slightly different: it has a plastic spine in the middle of the cable which holds the wire pairs apart, as shown in the image below.

CAT6

All we have to do is cut that piece off before separating the pairs; the rest is exactly the same as CAT5.

Resources

  • There is a nice KnowHow episode about Arduino and cable making.

NOSQL, Programming

In this post we are diving into coding and developing a small application using the beer sample database that ships with Couchbase 2.0.

Environment Setup

To develop a .NET application with a Couchbase backend, we need the Couchbase .NET SDK. The current version as of this writing can be downloaded from here, but the best way to get it is via NuGet. Using the SDK is fairly simple: it comes with a main class called CouchbaseClient, and all operations are performed using this class.

Connecting to server

The first step is connecting to the server, and the easiest way to do that is via the configuration file.

<configuration>
    <configSections>
        <section name="couchbase" type="Couchbase.Configuration.CouchbaseClientSection, Couchbase" />
    </configSections>
    <couchbase>
        <servers bucket="beer-sample" bucketPassword="">
            <add uri="http://192.168.1.111:8091/pools/" />
            <add uri="http://192.168.1.112:8091/pools/" />
        </servers>
    </couchbase>
</configuration>

As you can see from the configuration section, if you have multiple nodes in the cluster you just add their URIs to the servers list. Once the IPs, the bucket and the password are specified, we are done. We don’t need to explicitly connect to the database; we can just create a new client instance and start calling methods:

using (CouchbaseClient client = new CouchbaseClient())
{
    // DB operations go here

}

Basic Operations

OK, so far so good. We are connected to the server without a hassle. As there is already data on the server, let’s get some sample data from the database. Since the database is a key/value store, we can add any type of data we want. We could build our JSON objects in a string and insert/update data with that, but most likely we want to use our domain objects instead of manipulating raw JSON. There are 2 things to consider here; once we tackle them the rest is quite easy:

  1. Mark your objects as Serializable: This is required to persist any object. Once you make the class serializable you can run CRUD operations on it.
  2. The default serializer is the binary serializer. That means when you store an object by calling the Store method, you will get something like this when you try to view the object:

Beer Binary

This is not too helpful: we cannot read or index it. So we’d rather store it in JSON format. Luckily the StoreJson method comes to the rescue. The following code produces the result below, which is exactly what we wanted. To map the keys in the JSON object to the properties in our class, we use the JsonProperty attribute from the Newtonsoft.Json library, which is used by the SDK itself.

Beer JSON Code

Beer JSON Output
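The code and its result above are screenshots, so for reference, here is roughly the shape of that code. The class and property names are my assumptions based on the beer-sample schema; the important parts are the Serializable attribute and the JsonProperty mappings:

[Serializable]
public class Beer
{
    [JsonProperty("name")]
    public string Name { get; set; }

    [JsonProperty("abv")]
    public float Abv { get; set; }

    [JsonProperty("brewery_id")]
    public string BreweryId { get; set; }
}

// Stored as readable JSON instead of the default binary serialization:
client.StoreJson(StoreMode.Set, "my_test_beer",
    new Beer { Name = "Test Ale", Abv = 5.2f, BreweryId = "test_brewery" });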

The Store and StoreJson methods accept an argument of type StoreMode. The values of StoreMode are Add, Set and Replace. Add is used to create a new record (INSERT), Replace is used to update an existing record (UPDATE), and Set adds the record if it doesn’t exist and updates it if it does (MERGE, but simpler). To delete an object, we call the Remove method with the object’s key as the argument. So basically we perform CRUD operations with the Get/GetJson, Store/StoreJson and Remove methods.
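Putting that together, a quick sketch of the basic operations (the keys and objects here are just illustrative):

// Read a document back as our domain object
Beer beer = client.GetJson<Beer>("my_test_beer");

// INSERT: Add fails if the key already exists
client.StoreJson(StoreMode.Add, "new_beer", beer);

// UPDATE: Replace fails if the key doesn't exist
client.StoreJson(StoreMode.Replace, "new_beer", beer);

// DELETE
client.Remove("new_beer");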

Querying database with views

Views in Couchbase 2.0 are functions written in JavaScript that use a technique called Map/Reduce. Map/Reduce is a complex topic that I haven’t fully covered yet, but basically it’s a method for processing large data sets in a distributed environment, developed by Google. It involves 2 functions called map and reduce. The map function filters entries and can extract information from them; its result is an ordered list of key/value pairs called an index. The results of map functions are stored on disk by the Couchbase server. The reduce function is optional and can be used to perform sums, aggregates or similar calculations on the output of the map function.

Views can be grouped into design documents, which can be associated with a bucket; I think of them as namespaces. Couchbase Server offers two kinds of views: development and production. As creating a view means creating an index, it may incur some overhead on the performance of the system, so development views are handy for fully testing a view before publishing it to the production environment. Also, production views cannot be edited via the admin console, which forces the developer to develop and test the view in the development environment first. To demonstrate what they look like, let’s examine the view that returns all the breweries.

Beer_View_MapFunc

We have 2 types of objects in the database (beer and brewery). This function only emits the objects that are of type brewery.
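The view itself is shown in the screenshot above; a map function matching that description would look something like this (a sketch based on the description, not copied from the sample):

function (doc, meta) {
    // Only emit documents whose type is brewery
    if (doc.type && doc.type == "brewery") {
        emit(meta.id, null);
    }
}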

Demo

All this theory means nothing if we don’t put it to good use. You can get the source code of the sample application (I call it Beer Explorer) from my GitHub account. Also, if you want to see what it looks like before diving into the code, I host a live version here: http://beerexplorer.me. Feel free to play with it.

NOSQL

In this post, I’ll talk about some technical details and terminology of Couchbase. The official documentation is very comprehensive and I highly recommend taking a look at it: http://www.couchbase.com/docs/

Installation

First of all, I recommend checking the supported OS list here. I tried to install it on Windows 8 but it turns out that’s not supported yet. I then installed it on Windows Server 2008 R2 and an Ubuntu Server 12.10. You can find Linux installation instructions here.

Installation is quite easy. There are a few things that need attention, though:

  1. File locations: This step is very easy, just accept the default location. But note that Couchbase recommends storing document and index data on different disks to get the best performance.
  2. Memory Size: The first node in the cluster determines the quota, and that value is inherited by the following nodes. To update it, in the management console select Data Buckets and click the arrow to the left of the bucket name; then click Edit to change the value.
  3. Bucket Type: memcached and Couchbase bucket types are significantly different, so you have to choose carefully. memcached buckets support neither persistence nor replication; they are meant to be an in-memory caching solution.
  4. Bucket Name: During setup you cannot change the name of the default bucket. Couchbase recommends using it for testing purposes only, so it’s best to create your own bucket for the actual data once the installation is over.
  5. Flush: This is a very dangerous operation: it allows you to delete all the data in a bucket. It is disabled by default and I’d recommend keeping it that way.

Basic concepts

  • A Couchbase database is called a bucket.
  • A document is a self-contained piece of data, stored as a JSON object. What would be a row in an RDBMS is stored in a document along with all the data related to it (i.e. a customer record may contain a list of orders). This is called the single-document approach and the document is called an aggregate. More about it in the Modelling Documents section later in this post. A new feature in v2.0 is that these documents can be indexed and queried.
  • vBucket is short for “virtual bucket”; vBuckets are functionally equivalent to database shards in traditional relational databases. The good news is that Couchbase manages vBuckets automatically.
  • XDCR stands for Cross Data Center Replication. It’s a very cool feature that can be used in multiple scenarios, such as spreading data geographically or maintaining an active offsite backup.

Modelling Documents: has-many vs. belongs-to

The way we model data should depend on the structure and nature of the data. There are two approaches to modelling it. has-many means storing references to all the child records with the parent. For example, a standard Customer – Order relation could be expressed like this:

{
    "id": 123,
    "name": "Valued",
    "surname": "Customer",
    "orders": [ "order1", "order2", "order3" ]
}
{
    "id": "order1",
    "orderDate": "2012-12-20",
    "status": "sent"
}

The Customer stores the IDs of the orders. This method can be problematic if the parent (the Customer in this example) is updated frequently: as orders are accessed via the customer, this will affect the overall query performance. The belongs-to approach suggests modelling from the other direction. If we modelled the above example with belongs-to, we would come up with something like this:

{
    "id": 123,
    "name": "Valued",
    "surname": "Customer"
}
{
    "id": "order1",
    "orderDate": "2012-12-20",
    "status": "sent",
    "customerId": 123
}
{
    "id": "order2",
    "orderDate": "2012-12-10",
    "status": "pending",
    "customerId": 123
}

This is preferable to avoid contention, but with this method we need to use indexing to be able to query all orders by customerId. The has-many approach performs better here, because a multiple-retrieve query is faster than indexing and querying.

Backup and Restore

Before diving into playing with the data, it’s always good practice to back up the original data. Couchbase provides 2 options to accomplish this:

  1. Good ol’ file copy: Copy the data files stored under the default path (which is “C:\Program Files\couchbase\server\var\lib\couchbase\data” on Windows). The disadvantage of this method is that the backup can only be restored to offline nodes in an identical cluster environment. Also, the database is not compressed.

  2. cbbackup / cbrestore: These tools can be found in the bin folder.

Couchbase_Backup

I think a slight disadvantage is that you have to specify the password in clear text on the command line. I was expecting that providing just the -p parameter would make it prompt for the password after I enter the command; instead I got an error saying the password cannot be empty.

Couchbase_Restore

The advantages are that a backup can be restored onto a cluster of a different size and configuration, and that the data is compressed, so it’s disk-space friendly.
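For reference, the invocations look roughly like this (paths and credentials are placeholders, and the switches are from memory of the 2.0 tools, so double-check them against the documentation):

cbbackup http://localhost:8091 C:\backups\couchbase -u Administrator -p password
cbrestore C:\backups\couchbase http://localhost:8091 -u Administrator -p password -b beer-sample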

Tip: When specifying the backup path to cbrestore, make sure to remove the trailing backslash from the path.

In the next instalment of this series I’ll post a sample application using the beer sample database that ships with Couchbase 2.0.

Gadget, Hardware

It is world famous now: a dirt-cheap ARM-based computer running Linux. I just bought one for myself. I installed Raspbian Wheezy, which can be downloaded from here: http://www.raspberrypi.org/downloads. It is the recommended download for newbies, so I went straight for it. I used Win32DiskImager to write the image to an SD card, inserted the card into the Raspberry Pi and it was good to go.

I definitely recommend buying a case, which makes it a lot more fun to play with. I also bought a 3.5” display; I think a small screen goes well with the small device. If I’m going to plug something into my 23” LED monitor, I’d prefer it to be my desktop. The display I bought can be found on Amazon. It doesn’t come with a power supply, so you also have to buy a 12V 2A DC power supply. I also needed a male-to-male RCA cable to connect the display to the Pi.

The result is the smallest computer I have ever had:

Raspberry Pi

I hope I can do something useful with it too.

NOSQL

Couch is one of the most popular databases in the NOSQL movement. When I first started playing around with it I was a bit confused by the naming: I had thought there was one product, but it turns out there are actually two.

CouchDB vs. Couchbase

Apache CouchDB was created by Damien Katz, who then started a company called CouchOne Inc. After some time they decided to merge with Membase Inc., which developed another open-source distributed key-value database called Membase. They merged the two products so that Membase became the storage backend, and some portions were rewritten. The end result was called Couchbase. So even though it’s based on Apache CouchDB, it’s a different product developed by a different company. But it’s still open source and licensed under the Apache 2.0 license.

Which one to use?

They serve different needs. Couchbase has built-in memcached-based caching technology, whereas Apache CouchDB is a disk-based database, so Couchbase is better suited to low-latency requirements. Couchbase has built-in replication which spreads data across all the nodes in the cluster automatically, while Apache CouchDB supports peer-to-peer replication. I find Couchbase’s auto-replication feature marvellous, and it’s extremely easy to manage: when you create a new node, it can be a new cluster on its own or it can be added to an existing cluster. Adding it to a cluster consists of just providing the IP address/hostname and administrator credentials of a machine in that cluster, and the rest is automagically taken care of. I’m using Couchbase in my test applications.

What’s new in Couchbase 2.0

Couchbase released a new major version recently. Highlights of the new features are:

  • Cross Data-Center Replication (XDCR) enhancements
  • 2 cool sample buckets (beer-sample and gamesim-sample)
  • A new REST-API
  • New command-line tools
  • Querying views during rebalance

In the next post I’ll go into more technical details.

Networking, Security

When I saw this gadget, I knew I had to have it. I didn’t know exactly what I’d use it for, but it looked and sounded cool. So I ordered one along with a pro version. Unfortunately only the pro version arrived, as the other one was out of stock. It would have been more fun to build it myself, but just seeing it in action is fun too. Of course it’s not as cool as an actual throwing star, but the functionality is exactly the same.

LAN Tap Throwing Star

The idea is that instead of directly connecting your computer to a switch, you connect the machine to this gizmo and connect the port across to the switch, essentially getting between the target machine and the final destination of its network traffic. The other 2 ports are for monitoring: one is for received packets and the other for transmitted ones. Connect a monitoring device to one of these ports and it’s done. The rest is firing up Wireshark on the monitoring machine and watching the other machine’s traffic. A few cool things about it:

  • It doesn’t require any power source
  • It’s unobtrusive and undetectable

If you want to learn more, here is a nice video about it from Hak5:

Hak5–Throwing Star LAN Tap

I learned that it is commonly used with Intrusion Detection Systems (IDS), so it would be nice to have one handy if I ever start using one. The limitation, of course, is that it can only be used to monitor one target device. To listen to a whole network I’d need a switch with port mirroring or SPAN support. But first, let’s make sure this device works properly. The problem with the pro version is that it doesn’t have any indicators of which ports are for monitoring. So I randomly selected one, connected it between my desktop and the router, and connected the laptop to one of the remaining ports. To test it I simply pinged google.com. With this configuration I got nothing. Let’s change the ports and give it another try… and voilà! I filtered the packets by my desktop’s IP and the ICMP protocol, so it’s easy to observe the sniffed packets.

Captued_Ping_Request

But as you can see in the above screenshot there’s a problem: this is only one-way traffic. Let’s use the other monitoring port to see what changes. Another ping to Google and this is what we get:

Captued_Ping_Reply

Now we receive only the ping reply packets. As Darren Kitchen mentioned in the Hak5 video, we can overcome this problem by using a USB Ethernet adapter with multiple ports. I don’t have one of those, so I’ll just take his word for it. Verdict: only being able to monitor one machine in one direction makes it a bit useless for me. I was hoping for something that could see everything in both directions, but overall it was a valuable experience. After all, before I heard about LAN tapping in a TWiET episode (http://twit.tv/twiet) I didn’t even know such a thing existed. Hearing about it in a podcast is nice, but nothing beats hands-on experience.

System Administration

When you have Windows services, you must also implement a monitoring solution to make sure they are running at all times. Some time ago I needed a quick-and-dirty solution to notify myself when one of the services stopped. The solution I depict here is by no means ideal; its only advantage is that it’s very fast to implement if you don’t already have a monitoring system. Disclaimer aside, let’s get to work!

The tools we need come with Windows, so there’s no need to install anything. The idea is simple: create a scheduled task that is triggered by an event. The triggering event will be the stopping of the monitored service, and the action taken will be sending the notification email.

STEP 01: Create a new filter

  a. Launch Task Scheduler.
  b. Right-click Task Scheduler Library and select Create Task.
  c. Select the Triggers tab.
  d. Click New…
  e. In the “Begin the task” list, select “On an event”.
  f. In the Settings section, select Custom and click New Event Filter.
  g. In the New Event Filter dialog, select the XML tab and check “Edit query manually”.
  h. As the query text, type in the following:

<QueryList>
  <Query Id="0">
    <Select Path="Application">
      *[System[Provider[@Name='Service1']]]
      and
      *[EventData[Data and (Data='Service stopped successfully.')]]
    </Select>
  </Query>
</QueryList>

TaskScheduler_NewEventFilter

Change the service name and the message it logs when it stops. Note that the service name is not what you see in the services list; you have to right-click the service and view its properties. For example, as shown in the picture below, the service name of the DNS client is “Dnscache” whereas its display name is “DNS Client”.

Service Name

STEP 02: Create the action to send mail

  a. Select the Actions tab and click New.
  b. From the Action list select “Send an e-mail”.
  c. Fill in the details for the notification email.

At this point we are good to go: an email will be fired when the service stops and logs the text we are looking for. Keep in mind that this is quite fragile, because it will stop working if the text the service logs ever changes. Having a built-in send-mail capability is great, but if you need more features, like adding Cc/Bcc recipients or setting the priority of the mail, this option won’t be enough. In that case, playing around with PowerShell will do the trick.

STEP 03: [Optional] Create a script to send mails

PowerShell is built on top of the .NET Framework, so with a few lines of code we can send mails just like we can in C#:

$email = New-Object System.Net.Mail.MailMessage
$email.From = "user1@someDomain.com"
$email.To.Add("user2@anotherDomain.com")
$email.CC.Add("user3@yetAnotherOne.com")
$email.Priority = [System.Net.Mail.MailPriority]::High
$email.Subject = "Your notification subject"
$email.Body = "A bleak and gloomy text to drive the recipient into panic"
$smtpClient = New-Object Net.Mail.SmtpClient("SMTP hostname or IP address", 587)
$smtpClient.EnableSsl = $true
$smtpClient.Credentials = New-Object System.Net.NetworkCredential("username", "password");
$smtpClient.Send($email)

This example uses port 587 and SSL; your configuration may vary. That’s all there is to sending mail with PowerShell, and you have full control over it.

To run this script, select “Start a program” from the actions list. In the Program/script textbox enter “powershell” and enter the full path of the script in the arguments textbox. Don’t forget to save the script with a .ps1 extension.
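For example, the arguments textbox could contain something like this (the script path is hypothetical; -File and -ExecutionPolicy are standard powershell.exe switches):

-ExecutionPolicy Bypass -File "C:\Scripts\Send-ServiceAlert.ps1"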

Virtualization

VMware is one of my favourite IT companies. They specialize in one area and they create very nice products. And they mind their own business; you don’t read about them in patent-dispute news. As virtualization is the key technology behind cloud computing, VMware is in a way one of the pioneers that made it happen. They say Microsoft is advancing with Hyper-V 3.0, but I’ll stick with VMware Workstation for now. As of version 8.0, VMware Workstation comes with a cool feature called VM sharing. As the name implies, you can share a whole machine, connect to it from another Workstation instance and manage it as if it were a local machine. So if you need to access a virtual machine from multiple computers, you can do so without creating multiple copies of it. All you have to do is open the VM you want to share and select VM –> Manage –> Share. Keep in mind that the machine must be powered off.

VMWare

The sharing wizard is very simple. It asks whether you want to clone the machine or move it under the shared VMs folder. I like moving it because I don’t want to deal with multiple copies. Then, from the client side, select File –> Connect to Server.

VMWare

Then provide the hostname/IP address along with administrator credentials, and you can see the shared VMs under the (not surprisingly) Shared VMs section at the bottom of the left-hand menu.

VMWare

The rest is exactly the same as the regular process: you can manage the remote virtual machine as if it resided in your local environment.

VMWare

Amazon Web Services, Cloud Computing, Development, Tips & Tricks

AWS must be short for awesome! I love using it. It makes managing virtual machines so much easier, yet provides full power to the user through its API. Thanks to the vision of Jeff Bezos, every function you see in the management console can be accessed via the API as well. Back in 2002 Jeff Bezos mandated that all teams expose their data and functionality through service interfaces. This approach makes complete sense: it makes separation of layers much easier and makes the code testable. That’s why I’m currently big on ServiceStack and Web API, but that’s a discussion for another post. In this post I’d like to share some of the tips & tricks I’ve picked up during my involvement with AWS. Of course, like many IT-related things, this is an ongoing process and I may post a sequel in the future. Currently my tips are as follows:

TIP 01: Always create production servers with termination protection on

If there is one thing I don’t like about AWS, it’s that there is no way of separating production and test/staging machines in the management console. So first of all, use a clear naming convention to distinguish them, but sometimes that’s not enough. In the heat of the moment you can attempt to stop or terminate a production instance. If you don’t have termination protection enabled, this attempt becomes a tragedy; if you have it on, simply nothing happens and you get to keep your job. If you forgot to turn it on while creating an instance, you can always change it by right-clicking the instance and selecting Change Termination Protection.

AWS Termination Protection

TIP 02: You can change the instance type in a few minutes

One of my favourite features is that you can stop an instance and change its type. This way you can upgrade or downgrade a machine within minutes. So don’t worry if you are not sure what instance size you need for a specific job: just ballpark it, observe, and upgrade or downgrade at an idle time.

TIP 03: Use auto-scaling

This feature is not available via the management console, but it’s possible with the API. You can write your own application, but it’s even easier using the command-line developer tools. Basically you create one scaling policy for scaling up and one for scaling down, and you define alarm conditions; when those conditions are met, the policy you specified is executed. This way, if your web servers are under heavy load, for example, another machine can be launched automatically. They all have to be under the same load balancer, of course. You can find more about auto-scaling here: http://aws.amazon.com/autoscaling/
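To give an idea, the scale-up half looked roughly like this with the legacy Auto Scaling and CloudWatch command-line tools. All names are placeholders and the exact switches are from memory, so treat this as a sketch rather than a copy-paste recipe:

as-put-scaling-policy ScaleUpPolicy --auto-scaling-group MyWebGroup --adjustment=1 --type ChangeInCapacity --cooldown 300

mon-put-metric-alarm HighCpuAlarm --metric-name CPUUtilization --namespace "AWS/EC2" --statistic Average --period 300 --threshold 80 --comparison-operator GreaterThanThreshold --evaluation-periods 2 --alarm-actions {SCALE_UP_POLICY_ARN}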

TIP 04: Use Multi-AZ (Availability Zone) deployment

Regions have several availability zones in them. Although you cannot create systems that span regions, you can create instances in different AZs within a region, so if one data centre goes down the other instances can still be responsive. It’s the simple principle of not putting all your eggs in one basket.

TIP 05: Customize the management console

The AWS management console comes with a cool feature: it lets you pin your favourite services to the top of the page for easy access. There are a bunch of services, but most likely you’ll need EC2 and S3 available at all times; at least I do. You can pin them by simply dragging the service name and dropping it onto the top bar. After pinning them, they are always one click away.

AWS Customize Menu

TIP 06: Change the disk size while creating the instance

This is especially handy for Windows instances, as they demand more space than Linux ones. The default size for a Windows Server is 35GB. That is actually quite enough for a standard Windows installation, but I guess Amazon reserves some of the space for some reason, because when you launch the machine you only get around 3GB of free disk space, which sounds terrifying to me. If a log file gets even slightly out of hand, it can bring down the whole machine. So it’s best to get some free space upfront, at least for peace of mind if nothing else.

AWS Change Disk Size

AWS Change Disk Size

TIP 07: Don’t forget to delete manually attached EBS volumes

When you terminate an instance, make sure you delete all attached EBS volumes that are not set to auto-delete. The default volume that comes with the instance has the “Delete on termination” option checked in the wizard, so it is cleaned up automatically. But if you create a volume manually and attach it to an instance, there is no option to set this flag, so you have to delete it manually. AWS is kind enough to warn you about such volumes when you delete the instance. If you don’t take care of them immediately and you use auto-scaling, you may end up terminating lots of instances and leaving behind unused disks that you keep paying for.

AWS Delete Instance

TIP 08: Reserve as early as you can

This is another budget tip. If you are certain about the size of an instance, buy a reserved instance of that type. A reserved instance is not a technical concept: when you buy one, you start paying less by the hour for an instance of that type. For a comparison of how much you can save, check out: http://aws.amazon.com/pricing/ec2/

Development

It’s been a while since I started using StyleCop in my projects. Last year I managed to sneak it into my company’s projects as well. Applying it to existing projects and fixing all the violations was a tiring process at first, but I believe it was worth it. It really helps with consistency: regardless of who wrote a certain block of code, it’s easy to read, because everybody adheres to the same rules across the company. Here are a few tips for managing this:

01. Force StyleCop warnings to be treated as errors

Actually, I hate warnings altogether; that’s why I set “Treat warnings as errors” to All on the projects I work on. This helps eliminate many potential bugs before they become an issue.

Treat warnings as errors

Unfortunately, StyleCop warnings are not included in this. But with a little tweak we can turn this on for StyleCop as well. Just add the following line to your project’s .csproj file inside the first PropertyGroup tag:

<StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

The wording is the opposite of Visual Studio’s (treat errors as warnings instead of the other way around), so we have to set it to false. After reloading the project, you won’t be able to build successfully without fixing all the StyleCop rule violations (which is a good thing!).

02. Integrate StyleCop with MSBuild

Naturally, if the process is not automatic it won’t work. If, as a developer, it’s left to me to right-click the project and run StyleCop manually, I’ll forget after a few times. The easiest way to integrate it with MSBuild is adding the StyleCop.MSBuild NuGet package to your project. Alternatively, if you have the full StyleCop application installed, there is a StyleCop.Targets file under the installation directory; by importing that file into the project you can achieve MSBuild integration.

In a multi-developer environment it’s best to use a fixed path, so that when someone new starts working on the project they can still build it. To accomplish that, we mapped the R: drive to a folder containing the targets file so the build doesn’t break. Needless to say, new developers have to do the mapping to make this work.
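With the mapped drive in place, the import in each .csproj looks something like this (the folder under R: being whatever you mapped):

<Import Project="R:\StyleCop\StyleCop.targets" />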

03. Run StyleCop on the server as well

The problem with manually enabling treat-warnings-as-errors on the developer’s machine is that it can easily be forgotten or temporarily disabled for some reason. If the developer forgets to re-enable it, he/she can check in code that violates the conventions. To avoid that, we should reject such code at the source control level during check-in. This is where SVNStyleCop comes in. As the name implies, this solution works only for SVN; I haven’t yet looked into this feature for other source control systems like Git or TFS. You can get SVNStyleCop here: http://svnstylecop.codeplex.com/

The way it works is quite simple and the official page has a good tutorial about it. Basically you override the pre-commit hook and run StyleCop before the code is committed. The downside is that you have to maintain a separate copy of the rules and StyleCop files, so when you update your rules you have to remember to update them on the server as well.

04. Use links to maintain one global rule set

Windows Vista (and above) comes with a handy utility called mklink. With the following command you can create a hard link (note the /H switch) to the Settings.StyleCop file anywhere you please.

mklink /H Settings.StyleCop {Path for the actual file}

This way all projects use the same settings file. The problem is that it’s a tad cumbersome, especially if your solution involves lots of projects.

05. A better approach for one rule set to rule them all

I was pondering how to minimize the effort needed to deploy StyleCop, and it hit me: our beloved NuGet can take care of this as well. StyleCop already has a package in the official NuGet repository, but the problem is that it comes with its own StyleCop rule file, so it’s not quite suitable for a team environment. It’s not even ideal for a single developer, because all projects would end up with different rules, which can quickly become a maintenance nightmare. The idea is to create a package that contains the StyleCop rules and libraries. When the package is installed, it copies the libraries, rules and targets file under the project. An install script can also add the project import and the treat-warnings-as-errors setting mentioned in tips 1 & 2. The advantages of this method are:

  • All projects installing the package will be using the same rule set downloaded from server
  • MSBuild integration is done automatically
  • Treat warnings as errors update is done automatically
  • No configuration needed (i.e: Mapping drives, creating symbolic links etc)

The disadvantage is that if the rules are updated, the package needs to be re-installed for the projects. It’s still not perfect, but compared to the other methods I think it’s a neat way of distributing and enforcing StyleCop rules.

Tips & Tricks

I like Windows Live Writer and I use it for blogging. The problem is that I start multiple posts at once, take some notes and save them as drafts. Sometimes, when I’m on a different machine, I want to add notes to the existing drafts but (you guessed it) the drafts are saved locally on the other machine. I already have Dropbox installed on almost all my machines, so I decided to harness it for the task.

STEP 01: Delete the My Weblog Posts folder on the destination machine. The local folder is created automatically under %UserProfile%\Documents\My Weblog Posts. Delete this folder, making sure Live Writer is closed before doing so.

STEP 02: Create a directory junction. A directory junction is a mapping to another folder. In Windows 7 you can use the mklink command to create one (as well as symbolic and hard links):

mklink /D "%UserProfile%\Documents\My Weblog Posts" "{PATH_TO_DROPBOX_ROOT}\My Weblog Posts"

Enter the correct path to your Dropbox folder and that’s it. Now you can enjoy the ease of synchronized blog drafts.

Security

I learned a neat trick to force Windows to require a plugged-in USB device before you can log on to the system. The tool for the job is syskey, an ancient utility introduced with Windows NT SP3. Here’s how to do it:

  1. Insert your USB drive. As syskey only supports floppy disks, change the drive letter to A.

  2. Run syskey (From command prompt or by pressing WinKey + R then entering syskey)

  3. Select Store Startup Key on Floppy Disk

SysKey

After you restart the machine, Windows will check your “floppy” USB drive; if it is not there, it will display the error message: “This computer is configured to use a floppy disk during startup. Please insert the disk and click OK”. After you insert the disk, you can log on by entering your password.