NOSQL

In this post, I’ll talk about some technical details and terminology of Couchbase. The official documentation is very comprehensive and I highly recommend taking a look at it:


First of all, I recommend checking the supported OS list here. I tried to install it on Windows 8 but it turns out that's not supported yet. Then I installed it on Windows Server 2008 R2 and an Ubuntu Server 12.10. You can find Linux installation instructions here.

Installation is quite easy. There are a few things that need attention, though:

  1. File locations: This step is actually very easy; just accept the default location. But note that Couchbase recommends storing document and index data on different disks to get the best performance.
  2. Memory Size: The first node in the cluster determines the quota, and that value is inherited by the following nodes. To update it later, select Data Buckets in the management console and click the arrow to the left of the bucket name. Then, by clicking Edit, you can change this value.
  3. Bucket Type: memcached and Couchbase bucket types are significantly different, so you have to choose carefully. memcached buckets support neither persistence nor replication; they are meant to be an in-memory caching solution.
  4. Bucket Name: During setup you cannot change the name of the default bucket. Couchbase recommends using it for testing purposes only, so it's best to create your own bucket for the actual data once the installation is over.
  5. Flush: This is a very dangerous operation, as it allows you to delete all the data in a bucket. It is disabled by default and I'd recommend keeping it that way.

Basic concepts

  • A Couchbase database is called a bucket.
  • A document is a self-contained piece of data stored as a JSON object. What would be a row in an RDBMS is stored in a document along with all the data related to it (i.e. a customer record may contain a list of orders). This is called the single-document approach and the document is called an aggregate. More about it in the Modelling Documents section later in this post. A new feature that came with v2.0 is that these documents can be indexed and queried.
  • vBucket is short for "Virtual Bucket". vBuckets are functionally equivalent to shards in traditional relational databases. The good news is that Couchbase manages vBuckets automatically.
  • XDCR stands for Cross Data Center Replication. It's a very cool feature that can be used in multiple scenarios, such as spreading data geographically or creating an active offsite backup.

Modelling Documents: has-many vs. belongs-to

The way we model data should depend on the structure and nature of the data. There are two approaches to modelling it. has-many means storing references to all the child records with the parent. For example, a standard Customer – Order relation could be expressed like this:

    "id" : 123,
    "name": Valued,
    "surname": "Customer"
    "orders": [ "order1", "order2", "order3" ]
    "id": "order1",
    "orderDate": "2012-12-20",
    "status": "sent"

The Customer document stores the IDs of its orders. This method can be problematic if the parent (the Customer in this example) is updated frequently. As orders are accessed via the customer, this will affect overall query performance. The belongs-to approach suggests tackling it from the other direction. If we modelled the above example with the belongs-to approach we would come up with something like this:

    "id" : 123,
    "name": Valued,
    "surname": "Customer"
    "id": "order1",
    "orderDate": "2012-12-20",
    "status": "sent",
    "customerId": 123
    "id": "order2",
    "orderDate": "2012-12-10",
    "status": "pending",
    "customerId": 123

This is preferable for avoiding contention on the parent document. With this method, though, we need to use indexing to be able to query all orders by customerId. The has-many approach performs better for reads, because a multiple-retrieve by key is faster than indexing and querying.
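
For instance, with Couchbase the index could be a view. Below is a minimal sketch of a map function; views are written in JavaScript, and the "type" field is my own assumption for telling document kinds apart:

    function (doc, meta) {
        // Emit one row per order document, keyed by the owning customer
        if (doc.type == "order" && doc.customerId) {
            emit(doc.customerId, null);
        }
    }

Querying this view with a customer ID as the key then returns all the orders belonging to that customer.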

Backup and Restore

Before diving into playing with the data, it's always good practice to back up the original data. Couchbase provides 2 options to accomplish this:

  1. Good ol' file copy: Copy the data files stored under the default path (which is "C:\Program Files\couchbase\server\var\lib\couchbase\data" on Windows). The disadvantage of this method is that the backup can only be restored to offline nodes in an identical cluster environment. Also, the database is not compressed.

  2. cbbackup / cbrestore: These tools can be found in the bin folder.


I think a slight disadvantage is that you have to specify the password in clear text on the command line. I was expecting that just providing the -p parameter would make it ask me for the password after I enter the command. Instead, I got an error saying the password cannot be empty.
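
For reference, the commands I ran looked roughly like this (the host, paths and bucket names are placeholders; check each tool's --help output for the exact options in your version):

    cbbackup http://localhost:8091 /backups/blog -u Administrator -p password
    cbrestore /backups/blog http://localhost:8091 --bucket-source=default --bucket-destination=default -u Administrator -p password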


The advantages are that a backup can be restored onto a cluster of a different size and configuration, and that the data is compressed, so it's disk-space friendly.

Tip: When specifying the backup path to cbrestore, make sure to remove the trailing backslash from the path. In the next instalment of this series I'll post a sample application using the beer-sample database that is shipped with Couchbase 2.0.

Gadget, Hardware

It is world famous now: a dirt-cheap ARM-based computer running Linux. I just bought one for myself. I installed Raspbian Wheezy, which can be downloaded from here. It is the recommended download for newbies, so I went straight for it. I used Win32DiskImager to write the image to an SD card, inserted the card into the Raspberry Pi, and it was good to go.

I definitely recommend buying a case, which makes it a lot more fun to play with. I also bought a 3.5” display; I think a small screen goes well with the small device. If I'm going to plug something into my 23” LED monitor, I'd prefer it to be my desktop. The display I bought can be found on Amazon. It doesn't come with a power supply, so you also have to buy a 12V 2A DC power supply. I also needed a male-to-male RCA cable to connect the display to the Pi.

The result is the smallest computer I have ever had:

Raspberry Pi

I hope I can do something useful with it too.

NOSQL

CouchDB is one of the most popular databases in the NoSQL movement. When I first started playing around with it I was a bit confused by the naming. I had thought there was one product, but it turns out there are actually two.

CouchDB vs. Couchbase

Apache CouchDB was created by Damien Katz, who then started a company called CouchOne Inc. After some time they decided to merge with Membase Inc., which developed another open-source distributed key-value database called Membase. They merged the two products so that CouchDB would use Membase as the storage backend, and some portions were rewritten. The end result was called Couchbase. So even though it's based on Apache CouchDB, it's a different product and is being developed by a different company. But it's still open source and licensed under the Apache 2.0 license.

Which one to use?

They serve different needs. Couchbase has built-in memcached-based caching technology, whereas Apache CouchDB is a disk-based database; therefore, Couchbase is better suited to low-latency requirements. Couchbase has built-in replication which spreads data across all the nodes in the cluster automatically, while Apache CouchDB supports peer-to-peer replication. I find the auto-replication feature of Couchbase marvellous, and it's extremely easy to manage. When you create a new node, it can be a new cluster on its own or it can be added to an existing cluster. Adding it to a cluster consists of just providing the IP address/hostname and administrator credentials of a machine in that cluster, and the rest is automagically taken care of. I'm using Couchbase in my test applications.

What’s new in Couchbase 2.0

Couchbase released a new major version recently. Highlights of the new features are:

  • Cross Data-Center Replication (XDCR) enhancements
  • 2 cool sample buckets (beer-sample and gamesim-sample)
  • A new REST-API
  • New command-line tools
  • Querying views during rebalance

In the next post I’ll go into more technical details.

Networking, Security

When I saw this gadget, I knew I had to have it. I didn't exactly know what to use it for, but it looked and sounded cool. So I ordered one along with a pro version. Unfortunately only the pro version arrived, as the other one was out of stock. It would have been more fun to build it myself, but just seeing it in action is fun too. Of course it's not as cool as a throwing star, but the functionality is exactly the same.

LAN Tap Throwing Star

The idea is that instead of directly connecting your computer to a switch, you connect the machine to this gizmo and connect the port across it to the switch, essentially getting between the target machine and the final destination of its network traffic. The other 2 ports are for monitoring: one is for received packets and the other for transmitted ones. Connect a monitoring device to one of these ports and it's done. The rest is firing up Wireshark on the monitoring machine and watching the traffic of the other machine. A few cool things about it:

  • It doesn’t require any power source
  • It’s unobtrusive and undetectable

If you want to learn more, here is a nice video about it from Hak5:

Hak5–Throwing Star LAN Tap

I learned that it is commonly used with Intrusion Detection Systems (IDS), so it would be nice to have one handy if I ever start using one. The limitation is, of course, that it can only be used to monitor one target device. To listen to the whole network I'll need a switch with port mirroring or SPAN support. But for now, let's make sure this device is working properly first. The problem with the pro version is that it doesn't have any indicators of which ports are for monitoring. So I randomly selected one, connected it between my desktop and the router, and connected the laptop to one of the remaining ports. To test it I simply pinged Google. With this configuration I got nothing. Let's change the ports and give it another try... and voila! I filtered the packets by my desktop's IP and the ICMP protocol, so it's easy to observe the sniffed packets.


But as you can see in the above screenshot there's a problem: this is only one-way traffic. Let's use the other monitoring port to see what changes. Another ping to Google and this is what we get:


Now we receive only ping reply packets. As Darren Kitchen mentioned in the Hak5 video, we can overcome this problem by using a USB Ethernet adapter with multiple ports. I don't have one of those, so I'll just take his word for it. Verdict: only monitoring one machine in one direction makes it a bit useless for me. I was planning to use it to see everything in both directions, but overall it was a valuable experience. After all, before I heard about LAN tapping in a TWIET episode I didn't even know such a thing existed. Hearing about it in a podcast is nice, but nothing beats hands-on experience.

System Administration

When you have Windows Services, you must also implement a monitoring solution to make sure they are running at all times. Some time ago I needed a quick and dirty solution to notify myself when one of the services stopped. The solution I describe here is by no means an ideal one; its only advantage is that it's very fast to implement if you don't already have a monitoring system. Disclaimer aside, let's get to work!

The tools we need come with Windows, so there is no need to install anything. The idea is simple: create a scheduled task that is triggered on an event. The triggering event will be the stopping of the monitored service, and the action taken will be sending the notification email.

STEP 01: Create a new filter

  a. Launch Task Scheduler.
  b. Right-click Task Scheduler Library and select Create Task.
  c. Select the Triggers tab.
  d. Click New…
  e. In the "Begin the task" list, select "On an event".
  f. In the Settings section, select Custom and click New Event Filter.
  g. In the New Event Filter dialog, select the XML tab and check "Edit query manually".
  h. As the query text, type in the following:

 <QueryList>
   <Query Id="0">
     <Select Path="Application">
       *[System[Provider[@Name='{Service Name}']]]
       and
       *[EventData[Data and (Data='Service stopped successfully.')]]
     </Select>
   </Query>
 </QueryList>


Replace the service name and the message it logs when it stops. Note that the service name is not what you see in the services list; you have to right-click the service and view its properties. For example, as shown in the picture below, the service name of the DNS Client service is "Dnscache", whereas its display name is "DNS Client".

Service Name

STEP 02: Create action to send mail

  a. Select the Actions tab and click New.
  b. From the Action list, select "Send an e-mail".
  c. Fill in the details for the notification email.

At this point we are good to go. An email will be fired when the service stops and logs the text we are looking for. Keep in mind that this is quite fragile: it will stop working if the text the service logs changes. Having a built-in send-mail capability is great, but if you need more features, like adding Cc/Bcc recipients or setting the priority of the mail, this option will not be enough. In that case, playing around with PowerShell will do the trick.

STEP 03: [Optional] Create a script to send mails

PowerShell is built on top of the .NET Framework, so with a few lines of code we can send mails just like we can in C#:

# Build the message; fill in the sender and at least one recipient
$email = New-Object System.Net.Mail.MailMessage
$email.From = ""
$email.To.Add("recipient address")
$email.Priority = [System.Net.Mail.MailPriority]::High
$email.Subject = "Your notification subject"
$email.Body = "A bleak and gloomy text to drive the recipient into panic"
# Send it over an authenticated, SSL-enabled SMTP connection
$smtpClient = New-Object Net.Mail.SmtpClient("SMTP hostname or IP address", 587)
$smtpClient.EnableSsl = $true
$smtpClient.Credentials = New-Object System.Net.NetworkCredential("username", "password")
$smtpClient.Send($email)

This example uses port 587 and SSL; your configuration may vary. That's all there is to sending a mail with PowerShell, and you have full control over it.

To run the script, select "Start a program" in the task's actions list. In the Program/script textbox enter "powershell" and enter the full path of the script in the arguments textbox. Don't forget to save the script with a .ps1 extension.
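
For example, the action fields would look something like this (the script path is just a placeholder):

    Program/script: powershell
    Add arguments: -ExecutionPolicy Bypass -File "C:\Scripts\Send-Notification.ps1"

The -ExecutionPolicy Bypass switch prevents a restrictive execution policy from silently blocking the script.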

Virtualization

VMware is one of my favourite IT companies. They specialize in one area and they create very nice products. And they mind their own business; I mean, you don't read about them in patent-dispute-related news. As virtualization is the key technology behind cloud computing, in a way VMware is one of the pioneers that made it happen. They say Microsoft is advancing with Hyper-V 3.0, but I'll stick with VMware Workstation for now. As of version 8.0, VMware Workstation comes with a cool feature called VM Sharing. As the name implies, you can share a whole machine, connect to it from another Workstation installation, and manage that machine as if it were local. So if you need to access a virtual machine from multiple computers, you can accomplish it without creating multiple copies of the machine. All you have to do is open the VM you want to share and select VM -> Manage -> Share. Keep in mind that the machine must be powered off.


The sharing wizard is very simple. It asks whether you want to clone the machine or move it under the shared VM folder. I like moving it because I don't want to deal with multiple copies. Then, on the client side, select File -> Connect to Server.


Then provide the hostname / IP address along with administrator credentials, and you can see the shared VMs under the (not surprisingly) Shared VMs node at the bottom of the left menu.


The rest is exactly the same as the regular process: you can manage the remote virtual machine as if it resided in your local environment.


Amazon Web Services, Cloud Computing, Development, Tips & Tricks

AWS must be short for awesome! I love using it. It makes managing virtual machines so much easier, yet provides full power to the user through its API. Thanks to the vision of Jeff Bezos, every function you see on the management console can be accessed via the API as well. Back in 2002, Jeff Bezos mandated that all teams expose their data and functionality through service interfaces. This approach makes complete sense: it makes separation of layers much easier and makes the code testable. That's why I'm currently big on ServiceStack and WebAPI, but that's a discussion for another post. In this post I'd like to share some of the tips & tricks I have picked up during my involvement with AWS. Of course, as with many IT-related things, this is an ongoing process and I may post a sequel in the future. Currently my tips are as follows:

TIP 01: Always create production servers with termination protection on

If there is one thing I don't like about AWS, it's that there is no way to separate production and test/staging machines in the management console. So first use a clear naming convention to distinguish them, but sometimes that's not enough. In the heat of the moment you can attempt to stop or terminate a production instance. If you don't have termination protection enabled, this attempt becomes a tragedy; if you have it on, simply nothing happens and you get to keep your job. If you forget to turn it on while creating an instance, you can always change it by right-clicking on the instance and selecting Change Termination Protection.

AWS Termination Protection

TIP 02: You can change the instance type in a few minutes

One of my favourite features is that you can stop an instance and change its type. This way you can upgrade or downgrade a machine within minutes. So don't worry if you are not sure what instance size you need for a specific job. Just ballpark it, observe, and upgrade/downgrade at an idle time.

TIP 03: Use auto-scaling

This feature is not available via the management console, but it's possible with the API. You can write your own application against it, but it's even easier to use the command-line developer tools. Basically, you create one scaling policy for scaling up and one for scaling down, and you define alarm conditions; when those conditions are met, the policy you specified is executed. This way, if your web servers are under heavy load, for example, you can automatically launch another machine. They all have to be under the same load balancer, of course. You can find more about auto-scaling here:
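
To give an idea of the flow, a scale-up setup with the command-line tools looks roughly like this. All the names here are made up and the exact flags may differ between tool versions, so treat it as a sketch and verify against the documentation:

    as-create-launch-config web-lc --image-id ami-12345678 --instance-type m1.small
    as-create-auto-scaling-group web-asg --launch-configuration web-lc --availability-zones us-east-1a --min-size 2 --max-size 6 --load-balancers web-lb
    as-put-scaling-policy scale-up --auto-scaling-group web-asg --adjustment=1 --type ChangeInCapacity
    mon-put-metric-alarm high-cpu --metric-name CPUUtilization --namespace "AWS/EC2" --statistic Average --period 300 --threshold 80 --comparison-operator GreaterThanThreshold --evaluation-periods 2 --alarm-actions {ARN of the scale-up policy}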

TIP 04: Use Multi-AZ (Availability Zone) deployment

Regions contain several availability zones. Although an instance cannot span data centres, you can spread instances across different AZs, so that if one data centre goes down the other instances can still be responsive. It's the simple principle of not putting all your eggs in the same basket.

TIP 05: Customize the management console

The AWS management console comes with a cool feature: it enables you to pin your favourite services to the top of the page for easy access. There are a bunch of them, but most likely you'll need EC2 and S3 available at all times; at least I do. You can pin them by simply dragging the service name and dropping it onto the top bar. After pinning them, they are always one click away.

AWS Customize Menu

TIP 06: Change the disk size while creating the image

This is especially handy for Windows instances, as they demand more space than Linux ones. The default size for a Windows server is 35GB. That would actually be quite enough for a standard Windows installation, but I guess Amazon is reserving some of the space for some reason, because when you launch the machine you only get around 3GB of free disk space, which to me sounds terrifying. If a log file gets out of hand a little bit, it can bring down the whole machine. So it's best to get some free space upfront, at least for the peace of mind if nothing else.

AWS Change Disk Size

AWS Change Disk Size

TIP 07: Don't forget to delete manually attached EBS volumes

When you terminate an instance, make sure you delete all the attached EBS volumes that are not set to auto-delete. The default volume that comes with the instance has the "Delete on termination" option checked in the wizard, so it is cleaned up automatically. But if you create a volume manually and attach it to an instance, there is no option to set this flag, so you have to delete it manually. AWS is kind enough to warn you about such volumes when you terminate the instance. If you don't take care of them immediately and you have auto-scaling, you may end up terminating lots of instances and accumulating unused disks that you keep paying for.
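
A quick way to spot forgotten volumes is listing the ones in the "available" state (i.e. attached to nothing) with the EC2 command-line tools. I'm writing this from memory, so double-check the syntax:

    ec2-describe-volumes --filter "status=available"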

AWS Delete Instance

TIP 08: Reserve as early as you can

This is another budget tip. If you are certain about the size of an instance, buy a reserved instance of that type. A reserved instance is not a technical concept; when you buy one, you simply start paying less by the hour for an instance of that type. For a comparison of how much you can save, check out here:

Development

It's been a while since I started using StyleCop in my projects. Last year I managed to sneak it into my company's projects as well. Applying it to existing projects and fixing all the violations was a tiring process at first, but I believe it was worth it. It really helps with consistency: regardless of who wrote a certain block of code, it's very easy to read, because everybody has to adhere to the same rules across the company. Here are a few tips to manage this:

01. Force StyleCop warnings to be treated as errors

Actually, I hate warnings in general. That's why I set "treat warnings as errors" to All in the projects I work on. This helps eliminate many potential bugs before they become an issue.

Treat warnings as errors

Unfortunately, StyleCop warnings are not covered by this setting. But with a little tweak we can turn on this feature for StyleCop warnings as well. Just add the following line to your project's .csproj file inside the first PropertyGroup tag:
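
<StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>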


The wording is the opposite of Visual Studio's (treat errors as warnings instead of the other way around), so we have to set this to false. After reloading the project, you won't be able to build successfully without fixing all the StyleCop rule violations (which is a good thing!).

02. Integrate StyleCop with MSBuild

Naturally, if the process is not automatic it won't work. If, as a developer, it's left to me to right-click on the project and run StyleCop manually, I'd forget it after a few times. The easiest way to integrate it with MSBuild is adding the StyleCop.MSBuild NuGet package to your project. Alternatively, if you have installed the full StyleCop application, there is a StyleCop.Targets file under the installation directory. By importing that file into the project you can achieve MSBuild integration.

For a multi-developer environment it's best to use a fixed path, so that when someone new starts working on the project they can still build it. To accomplish that, we mapped the R drive to a folder containing the targets file so that the build doesn't break. Needless to say, new developers have to do the mapping to make this work.

03. Run StyleCop on the server as well

The problem with manually enabling the treat-warnings-as-errors feature on the developer machine is that it can easily be forgotten, or temporarily disabled for some reason. When the developer forgets to re-enable it, he/she can check in code that violates the conventions. To avoid that, we should reject such code in source control during the check-in process. This is where SVNStyleCop comes in. As the name implies, this solution works only for SVN; I haven't yet looked into other source control systems like Git or TFS for this feature. You can get SVNStyleCop here:

The way it works is quite simple, and the official page has a good tutorial about it. Mainly, you override the pre-commit hook and run StyleCop before the code is submitted. The downside is that you have to maintain a separate copy of the rules and StyleCop files, so when you update your rules you have to remember to update them on the server as well.

04. Use links to maintain one global rule set

Windows Vista (and above) comes with a handy utility called mklink. By entering the following command you can create a hard link to the Settings.StyleCop file anywhere you please:

mklink /H Settings.StyleCop {Path for the actual file}

This way all projects use the same settings file. The problem is that it's a tad cumbersome, especially if your solution involves lots of projects.

05. A better approach for one rule set to rule them all

I was pondering how to minimize the effort of deploying StyleCop and it hit me: our beloved NuGet could take care of this as well. StyleCop already has a package in the official NuGet repository, but the problem with it is that it comes with its own StyleCop rule file, so it's not quite suitable for a team environment. It's not even ideal for a single developer, because all projects would have different rules and it could quickly become a maintenance nightmare. The idea is to create a package that contains your StyleCop rules and the libraries. When the package is installed, it copies the libraries, rules and targets file under the project. An install script can also be used to add the import project and treat-warnings-as-errors settings mentioned in tips 1 & 2. The advantages of this method are:

  • All projects installing the package will be using the same rule set downloaded from server
  • MSBuild integration is done automatically
  • Treat warnings as errors update is done automatically
  • No configuration needed (i.e: Mapping drives, creating symbolic links etc)

The disadvantage is that if the rules are updated, the package needs to be re-installed for the projects. It's still not perfect, but compared to the other methods I think it's a neat way of distributing and enforcing StyleCop rules.

Tips & Tricks

I like Windows Live Writer and I use it for blogging. The problem is that I start multiple posts at once, take some notes on them and save them as drafts. Sometimes, when I'm on a different machine, I want to add some notes to the existing drafts, but (you guessed it) the drafts are saved locally on the other machine. I already have Dropbox installed on almost all my machines, so I decided to harness it for the task.

STEP 01: Delete the My Weblog Posts folder on the destination machine. The local folder is created automatically under %UserProfile%\Documents\My Weblog Posts. Delete this folder, and make sure Live Writer is closed before deleting it.

STEP 02: Create a directory junction. A directory junction is a mapping to another folder. In Windows 7 you can use the mklink command to create directory junctions (as well as symbolic and hard links):

mklink /D "%UserProfile%\Documents\My Weblog Posts" "{PATH_TO_DROPBOX_ROOT}\My Weblog Posts"

Enter the correct path to your Dropbox folder and that's it. Now you can enjoy the ease of synchronized blog drafts.

Security

I learned a neat trick to force Windows to check for a plugged-in USB device before allowing logon. The tool to use for this is syskey, an ancient utility introduced with Windows NT SP3. Here's how to do it:

  1. Insert your USB drive. As syskey only supports floppy disks, change the drive letter to A:.

  2. Run syskey (From command prompt or by pressing WinKey + R then entering syskey)

  3. Select Store Startup Key on Floppy Disk


After you restart the machine, Windows will check your "floppy" USB drive, and if it is not there it will display the error message: "This computer is configured to use a floppy disk during startup. Please insert the disk and click OK". After you insert the disk, you can log on by entering your password.

Testing

Know thy limits! This is especially important when you're developing a system that expects high traffic. Moving systems onto the cloud makes it easier to adapt and scale out to match the load, but you have to prepare for node failures and sudden spikes in traffic. You also have to make sure that your system stays responsive under sustained heavy load. Below are 3 tools I recommend for testing your system against such situations:

01. JMeter

Apache JMeter is a Java-based open-source desktop application. I posted a basic introduction to JMeter here, but it has many advanced features which I'm planning to cover in a post in the near future.

02. Siege

Siege is an HTTP load testing tool. It's not as complex as JMeter, but it does the job well and is very simple to use. It supports UNIX variants but not Windows. It can be obtained from here.
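
A typical run hammers a URL with a number of concurrent users for a fixed duration, something like this (25 users for one minute; the URL is a placeholder):

    siege -c 25 -t 1M http://www.example.com/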

03. Chaos Monkey

Chaos Monkey

Originally developed by Netflix and open-sourced later, it is an AWS-specific tool. What it does is connect to your AWS system and terminate instances randomly, so that you can observe how your system behaves in such worst-case scenarios. The good thing about it is that it selects its "victims" by looking at a tag you provide. So if there is a single node you don't want touched, such as a database server, you can easily exclude it. The source code can be downloaded from here.

Security


I love my Yubikey so much that I recently bought another one. I haven't found a good use for it yet, but I'm sure I will someday :-)

If you don't know what a Yubikey is, check out its vendor here. Basically, it is a one-time password (OTP) generator that acts as a USB input device. It doesn't require batteries to operate, so you can use it anywhere without having to worry about such issues. I'm trying to incorporate it into my daily life so that I can leverage two-factor authentication as much as possible.

Today, I found another use: the Yubikey Wordpress plugin. Using this plugin, I can now log in to my blog with my password plus an OTP generated by the Yubikey. There is a web API for Yubikey validation, and the plugin calls that API to authenticate your device. To learn more about the settings, visit the plugin's site:

Testing

One of the key goals when developing a web application is to make it scalable, meaning that it should handle lots of traffic without performance degrading. But most of the time we only care about performance when it becomes a problem, and by then it's generally too late to make radical design changes. Therefore, upfront automated load testing is very helpful for gauging your application's performance and being aware of its limits. One popular tool used for load testing is JMeter.

JMeter Basics

  • Thread Group: Each thread acts like a single user. All elements must be under a thread group.
  • Listener: Allows access to the information gathered by JMeter. Some listener examples are Aggregate Report, Graph Report and Summary Report.
  • Logical Controller: Logical controllers let you add constructs that control the flow of your tests, such as If, While and ForEach.
  • Sampler: Samplers tell JMeter to send requests to the server and wait for a response.

When you launch JMeter there are 2 items in the left menu: Test Plan and Workbench. Test Plan is the real deal: it is the actual sequence of events that are fired. Workbench is where you can store test elements.

Creating a load test plan can be accomplished in 2 simple steps:

  1. Create a thread group: Everything runs under a thread group. Think of each thread as a user.


  2. Insert an HTTP request: Set the host name and page you want to call.


That's it! If all you need is to create some heavy load, you can create a few different HTTP requests and start bombarding your server right away.
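
Once the plan is saved, you can also run it without the GUI, which is better for generating serious load (the file names are placeholders):

    jmeter -n -t testplan.jmx -l results.jtl

Here -n starts JMeter in non-GUI mode, -t points to the test plan and -l writes the results to a file you can load into a listener later.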

A bit of trivia about JMeter: it is mentioned in the book "We Are Anonymous". Apparently it can also be used as a DDoS tool!



Online Education

Last week another online course started at Stanford University, called An Introduction to Computer Networks. It started on the 8th of October and they have released a good deal of material for the first week. I hope I'll follow it until the end. If you're interested, you'd better hurry up, because it's not easy to catch up once the videos pile up!

Here’s the link to access the site:

UPDATE: The above link seems to have stopped working. This should be the current one now: Stanford CS144 Networking Class

Productivity

Focusing on tasks and getting them done is crucial. This simple fact is so easy to comprehend it's a no-brainer, but the reality is that it's much harder to accomplish than it sounds. I've been seeking methods to organize my tasks for a long time. My favourite approach is J.D. Meier's system, which he describes in his book Getting Results the Agile Way. It's freely available on this site. You can also purchase a hard copy here.

The key to this system is choosing the 3 most important tasks every day. This is very simple and easy to apply to real life. Another important aspect is the 3 weekly goals, which are of course more complex than the daily ones. Also important is reviewing the progress on Fridays. He calls this pattern Monday Vision, Daily Outcomes and Friday Reflection. This approach is very effective and realistic. The key to succeeding with this system is being realistic: know your limitations and don't stuff your lists with everything you want to get done. It's impossible to complete everything at once. If you pick 3 tasks and actually get them done, you feel better about yourself as the progress is visible.

I also recommend an excellent episode of Hanselminutes, which can be found here. That episode introduced me to the Pomodoro technique, a simple and effective way to boost performance and focus. The idea is to divide your tasks into small work units called Pomodoros, which are 25 minutes long (this is the default value; you can change it to your preference). During these 25 minutes you sever all ties with the outside world (no email, no Twitter, no nothing!) and focus on one task only. As focus-impaired people like me would know, this is not very easy to do. I tried setting it to an hour, hoping that I'd get a big task done without any disruptions, but I ended up getting lost along the way. So I believe the default value is quite realistic. After you complete a Pomodoro, you take a 5-minute break. I try to think of tasks in terms of Pomodoros; it helps me plan my daily outcomes too.

Site news

I've been postponing this for a long time, but I finally did it: I started using AWS (Amazon Web Services). My blog doesn't get much traffic, so I don't actually need the scalability which is EC2's strongest point, but I wanted to play with the cloud, so I decided to move my blog first. It's quite easy to do, and I'll explain how to migrate your existing blog to the cloud within minutes:

STEP 01: Backup your existing blog

mysqldump -u root -p {database name} > blog_backup.sql

Download your Wordpress installation folder and the backup you just created from your existing server. I used WinSCP to get my files; it can be downloaded here.

STEP 02: Create AWS EC2 instance

The fun part starts now. Log in to your AWS account and go to the EC2 console. Click Launch Instance and follow the wizard. For my blog I selected a Micro instance, but it depends on your needs. I selected a 64-bit Amazon Linux AMI for the instance.

STEP 03: Assign an IP to your server

Our machine has just started running, but to update our domain's DNS records we need an IP. On the left menu, click the Elastic IPs link and allocate a new IP address. Elastic IPs are free as long as they are associated with a running instance.


Right-click on the IP, select Associate and choose your instance. After this step we have a running machine with a public IP.

STEP 04: SSH into the machine

By default SSH is enabled, and you must have created a keypair to access your machine during Step 2. I use PuTTY as my SSH client, which can be downloaded from here. It's best to switch to root during the installation, so type:

sudo su

STEP 05: Install required programs

First install Apache:

yum install httpd

Then PHP:

yum install php php-mysql

Then MySQL:

yum install mysql-server

If you use SSL like me, you also need to install the SSL module for Apache:

yum install mod_ssl

Start Apache and MySQL:

service httpd start

service mysqld start

STEP 06: Customize MySQL and import your blog

  1. Run the following command to set the root password and harden the default installation:
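
    mysql_secure_installation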


  2. Login to MySQL

    mysql -u root -p

  3. Create your database, user and grant access to that user

    create database {database name};

    create user '{user name}'@'localhost' identified by '{password}';

    grant all privileges on {database name}.* to '{user name}'@'localhost' with grant option;

  4. Switch to database and import data

    use {database name};

    source {path/to/mysqldump file you uploaded}

STEP 07: Copy blog files

Copy the Wordpress files you uploaded to /var/www/html/{directory name}.

STEP 08: Configure Apache

Enter this command to edit the configuration file:

vi /etc/httpd/conf/httpd.conf

Go to the end of the file and create a new virtual host like this:

NameVirtualHost *:80
<VirtualHost *:80>
	ServerAdmin webmaster@localhost
	DocumentRoot /var/www/html/{directory name}
	ServerName {Your domain name}
	ServerAlias www.{Your domain name}
</VirtualHost>

And restart the Apache service:

service httpd restart

STEP 09: Enable access to FTP, HTTP and HTTPS

One last step before testing your blog is opening port 21 (for installing themes, plugins etc.), 80 (for viewing!) and 443 (if you're going to use SSL) in the AWS EC2 console. To do this, click on Security Groups in the left menu. Add the ports by pressing Add Rule, and then Apply Rule Changes.

STEP 10: Install FTP server

  1. Enter the following command and install the FTP server:

    yum install vsftpd

  2. Create a certificate to be used with FTPS connections:

    openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout /etc/vsftpd/vsftpd.pem -out /etc/vsftpd/vsftpd.pem

  3. Edit the configuration file:

    vi /etc/vsftpd/vsftpd.conf

    Disable anonymous access and enable SSL by adding lines to the end of the file, then save the file and exit. The exact settings depend on your needs, but with the certificate created above they would look something like this:
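
    anonymous_enable=NO
    ssl_enable=YES
    force_local_logins_ssl=YES
    force_local_data_ssl=YES
    rsa_cert_file=/etc/vsftpd/vsftpd.pem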

  4. Start the service

    service vsftpd start

  5. Create a user for FTP access and set the password

    useradd {user name}
    passwd {user name}

STEP 11 (Optional): Set the FTP credentials in wp-config.php

If you don’t want to enter the FTP credentials every time, you can
set them in the wp-config.php file:

 define('FTP_BASE', '/var/www/html/{folder name}/');
 define('FTP_CONTENT_DIR', '/var/www/html/{folder name}/wp-content/');
 define('FTP_PLUGIN_DIR', '/var/www/html/{folder name}/wp-content/plugins/');
 define('FTP_USER', '{user name}');
 define('FTP_PASS', '{password}');
 define('FTP_HOST', '{hostname}');
 define('FTP_SSL', true);

STEP 12 (Optional): Install SSL certificate for the blog

This step is optional, but using SSL is strongly recommended when connecting to your blog as administrator.

Upload your private key and certificate files to your server and copy them under the SSL folders:

mkdir /etc/ssl/private
mv filename.key /etc/ssl/private/
mv filename.crt /etc/ssl/certs/
mv CARootCert.crt /etc/ssl/certs/

Modify the Apache configuration file for SSL:

NameVirtualHost *:443
<VirtualHost *:443>
 DocumentRoot /var/www/html/{folder name}
 ServerName {domain name}
 ServerAlias www.{domain name}
 SSLEngine on
 SSLProtocol all -SSLv2
 SSLCertificateFile /etc/ssl/certs/filename.crt
 SSLCertificateKeyFile /etc/ssl/private/filename.key
 SSLCACertificateFile /etc/ssl/certs/CARootCert.crt
</VirtualHost>

STEP 13 (Optional): Fix the "Unable to locate Themes directory" error

If you get an "Unable to locate Themes directory" error, add the following snippet to wp-config.php:

if (is_admin()) {
 add_filter('filesystem_method', create_function('$a', 'return "direct";'));
 define('FS_CHMOD_DIR', 0751);
}

STEP 14: Enjoy!

That's it! You have installed a Wordpress application, imported your old posts, and secured your blog with FTPS and HTTPS access. Time to celebrate!

Online Education, Review

Recently there has been an explosion in online courses for higher education. It's a great chance for people looking for comprehensive academic classes. I started a bunch of these classes, but time proved that I should have selected more wisely, because these are not fluffy little tutorials. You really have to take the time to watch the videos, take the exams and hand in the assignments. So here are the ones that I find useful, mixed with some of the older resources I have used for similar purposes.


Udacity is one of my favourites. It's very forgiving about the deadlines. It uses YouTube extensively; you can even take the quizzes and answer questions right on the video and submit your results. It has very nice courses, including Applied Cryptography and Artificial Intelligence.


Coursera has a ton of Stanford classes like Cryptography, Machine Learning and Computer Networks. I had started the Machine Learning course when it was in sort of a beta phase and running on a separate site. Princeton University offers nice computer science classes under this site as well.


I don't know if this was a beta site, but it offers only one class, by MIT: Circuits and Electronics. More about MIT courses below…


edX has a small but quality selection of courses from MIT, Harvard and Berkeley. The Software as a Service and Artificial Intelligence courses from Berkeley look delicious.

Academic Earth

I personally haven't tried this site, but I heard about it from a friend. From the looks of it, I think they have mostly old material. They have some video courses from Stanford which I had seen a few years ago on iTunesU. Still, it may be worth keeping an eye on.


iTunesU was a gold mine in my eyes when I first discovered it. Watching courses from Stanford while commuting was a great way of making good use of time. Now the more interactive courses have overshadowed it, but it's not dead yet; far from it. Apple released an iTunesU app which allows you to download your favourite courses very easily.

Khan Academy

Frankly, I'm not a big fan of it. Compared to the rest, it has rather simple, entry-level tutorials. But it still shoulders an important mission and helps a lot of people get valuable resources, so I thought it was worth mentioning.


Heavy Metal

This was my second festival experience after Wacken ‘10. I was quite satisfied and hopefully I'll be back next year. Here are my notes about the festival; hopefully I'll remember to check back on this post next year to apply the todo list.


After a 3-hour drive, we arrived at the festival area by coach. After about 15 minutes of waiting I got my wristband and got in. The first order of business was finding a spot to pitch my tent. This was a daunting task because I had never done it before. After struggling with the instructions for 10 minutes I started setting it up. It was easier than I was expecting, and it looks like all tents work the same way, so I guess next time will be a breeze. After the tent was set up, the boring hours began! It was 2 P.M. and there was absolutely nothing to do around, as the festival area wasn't going to open until 5. I turned to my book and read for a few hours (the only reading session I had throughout my stay). The rest of the time passed with eating junk food and drinking beer while watching some "warm-up" bands.


Friday morning. After a very uncomfortable sleep, a nice shiny day started with a ton of nice gigs ahead. After getting an airbed from the festival area, I started patrolling the grounds. There are a number of choices for food, but everything was triple the normal price. I don't understand why they try to rip people off at festivals. You already have thousands of people in a confined space who have nothing to do all day; even with regular prices they could make a fortune, but they are greedy as always. Saturday and Sunday were exact copies of Friday, actually. Only the bands changed; my activities between the gigs were pretty much the same.


Here are my notes on the bands I’ve seen:

Friday – Ronnie James Dio Stage

Malefice: The opening band. I hadn't heard of them before, but they sounded strong and it's definitely worth having a look at their albums.

Moonsorrow: I'm not too big on black metal, but I've been enjoying their music for a decade now. I hadn't seen them live before, so I was wondering how it was going to be. They didn't communicate with the audience (as none of the black metal bands do). Overall it was a nice watch.


Iced Earth: I don't know anything about their work after Something Wicked This Way Comes, but I'm happy to have seen them for the first time. Apparently they have a new lead singer. One funny moment was him dropping the microphone in the middle of a song; that was a unique incident considering all the concerts I've seen so far. He was quite chatty and I think he's a good frontman. It may be worth checking out their new stuff.

Iced Earth

Sepultura: Another legend and another first live show for me. I can't believe how I managed to miss them all these years, but I finally caught them. They have enough material for a much longer playlist, but they were only allowed 30 minutes. They killed with Roots as the closing song!


Watain: Had never seen them before and never will again! Classic black metal with similar-sounding songs. One interesting scene was the singer bringing out a cup full of some liquid that looked like blood and spilling it over the crowd in the front row.


Behemoth: I love these guys! They deserved being a headliner, with a great show. Flames and burning crosses were nice visual elements.


Saturday – Ronnie James Dio Stage

Benediction: I wonder why they got the earliest spot of the day; I think they are a great band. Even at 11 in the morning, they managed to create mosh pits in the crowd. It was my first time seeing them live, so it felt good to be able to mark another band as seen.


I am I: With a lead singer resembling Captain Jack Sparrow, they seemed to be a good band, but power metal was never my thing!

I am I

Cthonic: A band from Taiwan, definitely not a country on the metal map IMO. I didn't like the band, except for the beautiful bass player. It was funny when the lead singer tried to make the crowd shout "Cheers" in Taiwanese.

Crowbar: I used to have an album by this band (in cassette format!) and it was one of the most boring albums I've ever listened to. The show was very boring too. Good thing I watched it from the beer tent!

Mayhem: I cannot understand how this band is still around. It was horrible! The worst metal band ever. They don't deserve that spot in the middle of the day on the main stage; they should be opening the New Blood stage! Boring, pointless screams with no melody or lyrics.


Sanctuary: Apparently they are an old band; the singer mentioned they were touring the UK 20 years ago with Megadeth. I didn't like their music much. I wasn't surprised that I had never heard of them, but rather wondered how they survived all these years with this music. I hated them after the singer asked the crowd to chant their name and said "Thank you for being so obedient" afterwards. I hate arrogant people!


Hatebreed: One of my top bands ever. I've seen them before and knew about the brutal pits, but couldn't resist again. Classic playlist. It was fun to watch when Jamie invited a 7-year-old kid and his dad to the stage. (I think the organization tried to ban swearing on the stage after this; Jamie was swearing as usual with the boy on the stage.)


Testament: Classic Testament. I've seen them a couple of times before. Chuck Billy was awesome again, with that stupid grin on his face :-). During the show they put up a banner saying "Randy Free", which meant nothing to me at the moment; later I learned about the incident here.


Machine Head: Another first show for me. I'm glad I finally saw them. I think they are the new Metallica (Metallica at their career peak, the Master of Puppets and And Justice for All era). Beautiful show. I hope I can catch them again some time soon.

Machine Head

Sunday – Ronnie James Dio Stage

Corrosion of Conformity: Pure rock 'n' roll. They had no crew, so the lead singer/bass player had to deal with the technical issues during the show.

Evile: A very nice thrash metal band. Definitely on my watch list now. I hope I can catch them around.

Nile: Pure death metal. I'm not a big fan of them. It wasn't too bad for passing the time, but nothing special.

The Black Dahlia Murder: Not one of my favourite bands, but the frontman had a nice dialog with the audience. One interesting detail was a guy trying to crowd-surf completely naked!

Paradise Lost: I used to like them in the old days, not so much anymore, but I still enjoy their music. I didn't understand Nick Holmes' attitude though: making fun of the festival name, undermining common concert rituals like asking the crowd a question and making them make some noise. A dying band!

Dimmu Borgir: Another first show, and a great one. It wasn't dark when they took the stage, but they still managed to get the crowd in the mood. Good dialog with the audience, which I didn't expect.

Alice Cooper: An amazing rock show. I've seen him at Wacken, but this time I was much closer to the stage and enjoyed it a lot more. The best show I've ever seen. Even though I'm not a fan of his music, I enjoyed the classics.

Friday - Sunday – Other stages


During the breaks at the main stage, I sneaked over to the other stages. I couldn't see many interesting acts except Sa-Da-Ko. I really enjoyed their music and have them on my radar now; I'll follow their albums.

After the live shows, there was a party at the Sophie Lancaster stage with The 5 DJs of the Apocalypse. They played almost all the classic metal hits. The dancer girls on the stage on Sunday night were very hot!

Bloodstock 2013

I don't know if I can make it there, but if I do, here is my todo list based on my experiences this year:

  • Try to get a closer camping spot to the showers.
  • Gear to buy:
    • Same tent
    • A better/thicker sleeping bag
    • Airbed (from the festival area the first day)
    • Suntan cream
  • Check the weather first, bring rubber boots if it may rain (This year I was lucky)
  • Get more info about the bands before the festival
  • Don’t bring a camera. Use phone or just Google for the photos and videos. There are plenty of them all around.

Off the Top of My Head

I've been running my own domain on my home network for a few years now, but I hadn't tried Windows Server Update Services (WSUS) until recently. After reorganizing my network, I ended up with almost 20 virtual machines. Since most of them run various flavours of Windows, keeping them up to date became an issue. That's when I recalled the existence of WSUS. What it does is basically allow you to download Windows updates to one of your own servers and then distribute them to the other machines over the LAN. So no more downloading 300MB service packs over and over again.

Installing WSUS is quite easy: you just have to open Server Manager and add the Windows Server Update Services role.

SQL Server Roles

Then you can select which products you want to download updates for, and also which types of updates you want to download.

SQL Server

The trickiest part is making the client machines in the domain download the updates from your server. That is achieved through Group Policy Management (the relevant setting lives under Computer Configuration -> Administrative Templates -> Windows Components -> Windows Update, "Specify intranet Microsoft update service location"). TechNet has a nice article describing the necessary actions:

Obviously, home users are not its normal target audience. But having seen its benefits, I strongly recommend it to anyone running their own domain.

Development

I had been planning to play around with Visual Studio Lightswitch for a while; finally, I could spare some time. For me the best way of learning is by doing, so before I started playing I had to come up with a project first.

To find an appropriate project, of course, we first have to analyse what Lightswitch is and what it is good for. In a nutshell, Visual Studio Lightswitch is a fast and easy way to develop data-driven Line-of-Business (LOB) applications. As a developer, I am generally not very fond of such tools because they impose many limitations, whereas while writing code we have unlimited freedom. But I also hate wasting time on boilerplate code. Creating add/edit/search screens for entities is a trivial and boring task. Such forms should always be generated by a tool to sustain consistency; otherwise inconsistencies creep in, especially in large applications and organizations. By the way, this reminds me of one of the worst forms I have ever seen in a Microsoft application: the reporting form in Team Foundation Server 2010, shown below. Even the tabs on the same form are inconsistent. But I digress! Let's move on.

TFS Reporting

For a long time I had been looking for a nice open-source application to manage my movie collection. I tried a few but couldn't find exactly what I wanted. So while trying to find a project idea, I decided to create a simple movie manager application. It's mainly data entry and search, so it sounded like an ideal project type for Lightswitch.

The result was amazing! Not that I created a complex and fully-functioning application, but within minutes I had a simple database, two forms to enter movie and director information, and a movie search form.

Movie Entry

Search Entry

The screens are customizable, but even the default templates produce very satisfying results. They cover all the basic needs for data entry, validation and search. So, having such a tool in my arsenal, and always preferring to develop my own software instead of using someone else's, I decided to develop my movie and TV show management program with Lightswitch. Considering this is only version 1.0, I think there's great potential in it; there are so many applications to develop but not enough time.