Couch is one of the most popular databases in the NoSQL movement. When I first started playing around with Couch I was a bit confused by the naming. I had thought there was one product, but it turns out there are actually two.

CouchDB vs. Couchbase

Apache CouchDB was created by Damien Katz, who then started a company called CouchOne Inc. After some time they decided to merge with Membase Inc., which developed another open-source distributed key-value database called Membase. They merged the two products so that Membase serves as the storage backend, and some portions were rewritten. The end result was called Couchbase. So even though it’s based on Apache CouchDB, it’s a different product developed by a different company. But it’s still open source and licensed under the Apache 2.0 license.

Which one to use?

They serve different needs. Couchbase has built-in memcached-based caching technology, whereas Apache CouchDB is a disk-based database. Therefore Couchbase is better suited to low-latency requirements. Couchbase has built-in replication which spreads data across all the nodes in the cluster automatically, while Apache CouchDB supports peer-to-peer replication. I find Couchbase’s auto-replication feature marvellous, and it’s extremely easy to manage. When you create a new node it can be a new cluster on its own or it can be added to an existing cluster. Adding it to a cluster consists of just providing the IP address/hostname and administrator credentials of a machine in that cluster, and the rest is automagically taken care of. I’m using Couchbase in my test applications.
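The same join can be scripted; here’s a hedged sketch using the couchbase-cli tool that ships with Couchbase (host names and credentials below are placeholders — check couchbase-cli’s help output for the exact flags in your version):

```shell
# Add a new node to an existing cluster; cluster-host is any machine already
# in the cluster, new-node is the machine being added (placeholder names).
couchbase-cli server-add -c cluster-host:8091 \
  -u Administrator -p password \
  --server-add=new-node:8091

# Rebalance so data is redistributed across all nodes, including the new one.
couchbase-cli rebalance -c cluster-host:8091 -u Administrator -p password
```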

What’s new in Couchbase 2.0

Couchbase recently released a new major version. Highlights of the new features are:

  • Cross Data-Center Replication (XDCR) enhancements
  • 2 cool sample buckets (beer-sample and gamesim-sample)
  • A new REST API
  • New command-line tools
  • Querying views during rebalance

In the next post I’ll go into more technical details.


When I saw this gadget, I knew I had to have it. I didn’t exactly know what to use it for, but it looked and sounded cool. So I ordered one along with a pro version. Unfortunately only the pro version arrived, as the other one was out of stock. It would have been more fun to build it myself, but just seeing it in action is fun too. Of course it’s not as cool as a throwing star, but the functionality is exactly the same.

LAN Tap Throwing Star

The idea is that instead of directly connecting your computer to a switch, you connect the machine to this gizmo and connect the port across to the switch, essentially getting between the target machine and the final destination of its network traffic. The other 2 ports are for monitoring: one for received packets and the other for transmitted ones. Connect a monitoring device to one of these ports and you’re done. The rest is firing up Wireshark on the monitoring machine and watching the other machine’s traffic. A few cool things about it:

  • It doesn’t require any power source
  • It’s unobtrusive and undetectable

If you want to learn more, here is a nice video about it from Hak5:

Hak5–Throwing Star LAN Tap

I learned that it is commonly used with Intrusion Detection Systems (IDS), so it would be nice to have one handy if I start using one. The limitation, of course, is that it can only be used to monitor one target device. To listen to the whole network I’ll need a switch with port mirroring or SPAN support. But for now let’s make sure this device is working properly first. The problem with the pro version is that it doesn’t have any indicators of which ports are for monitoring. So I randomly selected one, connected it between my desktop and the router, and connected the laptop to one of the remaining ports. To test it I simply pinged Google. With this configuration I got nothing. Let’s change the ports and give it another try... and voila! I filtered the packets by my desktop’s IP and the ICMP protocol so it’s easy to observe the sniffed packets.


But as you can in the above screenshot there’s a problem: This is only one-way traffic. Let’s use the other monitoring port to see what’s going to change. Another ping to Google and this is what we get:


Now we receive only ping reply packets. As Darren Kitchen mentioned in the Hak5 video, we can overcome this problem by using a USB Ethernet adapter with multiple ports. I don’t have one of those so I’ll just take his word for it. Verdict: only monitoring one machine in one direction makes it a bit useless for me. I was hoping for something that could see everything in both directions, but overall it was a valuable experience. After all, before I heard about LAN tapping in a TWiET episode I didn’t even know such a thing existed. Hearing about it in a podcast is nice, but nothing beats hands-on experience.

System Administration

When you have Windows Services you must also implement a monitoring solution to make sure they are running at all times. Some time ago I needed a quick and dirty solution to notify myself when one of the services stopped. The solution I describe here is by no means an ideal one. Its only advantage is that it’s very fast to implement if you don’t already have a monitoring system. Disclaimer aside, let’s get to work!

The tools we need come with Windows so there’s no need to install anything. The idea is simple: create a scheduled task that is triggered on an event. The triggering event will be the stopping of the monitored service, and the action taken will be sending a notification email.

STEP 01: Create a new filter

a. Launch Task Scheduler.
b. Right-click Task Scheduler Library and select Create Task.
c. Select the Triggers tab.
d. Click New…
e. In the “Begin the task” list select “On an event”.
f. In the Settings section select Custom and click New Event Filter.
g. In the New Event Filter dialog, select the XML tab and check “Edit query manually”.
h. As the query text, type in the following:

 <QueryList>
   <Query Id="0">
     <Select Path="Application">
       *[System[Provider[@Name='{Service name}']]]
       and
       *[EventData[Data and (Data='Service stopped successfully.')]]
     </Select>
   </Query>
 </QueryList>

Change the service name and the message it displays when it stops. Note that the service name is not what you see in the services list; you have to right-click the service and view its properties. For example, as shown in the picture below, the service name for the DNS client is “Dnscache” whereas the display name is “DNS Client”.

Service Name

STEP 02: Create an action to send mail

a. Select the Actions tab and click New.
b. From the Action list select “Send an e-mail”.
c. Fill in the details for the notification email.

At this point we are good to go. An email will be fired when the service stops and logs the text we are looking for. Keep in mind that it’s quite fragile: it will stop working if the text the service logs changes. Having a built-in send-mail capability is great, but if you need more features, like adding Cc/Bcc recipients or setting the priority of the mail, this option will not be enough for you. In that case, playing around with PowerShell will do the trick.

STEP 03: [Optional] Create a script to send mails

PowerShell is built on top of the .NET Framework, so with a few lines of code we can send mails just like we can in C#:

$email = New-Object System.Net.Mail.MailMessage
$email.From = ""
$email.To.Add("")   # set the recipient address here
$email.Priority = [System.Net.Mail.MailPriority]::High
$email.Subject = "Your notification subject"
$email.Body = "A bleak and gloomy text to drive the recipient into panic"
$smtpClient = New-Object Net.Mail.SmtpClient("SMTP hostname or IP address", 587)
$smtpClient.EnableSsl = $true
$smtpClient.Credentials = New-Object System.Net.NetworkCredential("username", "password")
$smtpClient.Send($email)

This example uses port 587 and SSL; your configuration may vary. That’s all it takes to send a mail with PowerShell, and you have full control over it.

To run this script, in the Actions list select “Start a program”. In the Program/script textbox enter “powershell” and enter the full path of the script in the arguments textbox. Don’t forget to save the script with a .ps1 extension.
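For reference, the action’s fields end up looking something like this (the script path is just an example):

```
Action:          Start a program
Program/script:  powershell
Add arguments:   -ExecutionPolicy Bypass -File "C:\Scripts\Send-Notification.ps1"
```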


VMware is one of my favourite IT companies. They specialize in one area and create very nice products. And they mind their own business; you don’t read about them in patent-dispute-related news. As virtualization is the key technology behind cloud computing, in a way VMware is one of the pioneers that made it happen. They say Microsoft is advancing with Hyper-V 3.0, but I’ll stick with VMware Workstation for now. As of version 8.0, VMware Workstation comes with a cool feature called VM Sharing. As the name implies, you can share a whole machine, connect to it from another Workstation application and manage it as if it were a local machine. So if you need to access a virtual machine from multiple computers, you can do so without creating multiple copies of the machine. All you have to do is open the VM you want to share and select VM -> Manage -> Share. Keep in mind that the machine must be powered off.


The sharing wizard is very simple. It asks whether you want to clone the machine or move it under the shared VM folder. I like moving it because I don’t want to deal with multiple copies. Then, from the client side, select File -> Connect to Server.


Then provide the hostname/IP address along with administrator credentials, and you will see the shared VMs under the (not surprisingly) Shared VMs node at the bottom of the left menu.


The rest is exactly the same as the regular process. You can manage the remote virtual machine as if it resides in your local environment.


AWS must be short for awesome! I love using it. It makes managing virtual machines so much easier, yet provides full power to the user through its API. Thanks to the vision of Jeff Bezos, every function you see on the management console can be accessed via the API as well. Back in 2002 Jeff Bezos mandated that all teams expose their data and functionality through service interfaces. This approach makes complete sense: it makes separation of layers much easier and makes the code testable. That’s why I’m currently big on ServiceStack and Web API, but that’s a discussion for another post. In this post I’d like to share some of the tips & tricks that I picked up during my involvement with AWS. Of course, as with many IT-related things, this is an ongoing process and I may post a sequel to this one in the future. Currently my tips are as follows:

TIP 01: Always create production servers with termination protection on

If there’s one thing I don’t like about AWS, it’s that the management console has no way of separating production machines from test/staging machines. So first use a clear naming convention to distinguish them, but sometimes that’s not enough. In the heat of the moment you can attempt to stop or terminate a production instance. If you don’t have termination protection enabled, that attempt becomes a tragedy; if you have it on, simply nothing happens and you get to keep your job. If you forgot to turn it on while creating an instance, you can always change it by right-clicking on the instance and selecting Change Termination Protection.

AWS Termination Protection
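If you script your instance launches, the same protection can be toggled from the command line too. A sketch using the AWS CLI (the instance ID is a placeholder; the older ec2-api-tools expose an equivalent attribute):

```shell
# Turn termination protection on for an existing instance.
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --disable-api-termination
```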

TIP 02: You can change the instance type in a few minutes

One of my favourite features is that you can stop an instance and change its type. This way you can upgrade or downgrade a machine within minutes. So don’t worry if you are not sure what instance size you need for a specific job. Just ballpark it, observe, and upgrade/downgrade at an idle time.

TIP 03: Use auto-scaling

This feature is not available via the management console, but it’s possible with the API. You can write your own application, but it’s even easier to use the command-line developer tools. Basically you create one scaling policy for scaling up and one for scaling down. You define the alarm conditions, and when these conditions are met the policy you specify is executed. This way, if your web servers are under heavy load, for example, you can automatically launch another machine. They all have to be under the same load balancer, of course. You can find out more about auto-scaling here:
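As a rough sketch, the command-line flow with the Auto Scaling tools looks like the following (every name, AMI ID and size here is a made-up example — consult the Auto Scaling reference for the exact syntax of your toolset):

```shell
# 1. Launch configuration: what kind of machine to start.
as-create-launch-config my-launch-config \
  --image-id ami-12345678 --instance-type m1.small

# 2. Auto-scaling group: where to start machines and how many.
as-create-auto-scaling-group my-asg \
  --launch-configuration my-launch-config \
  --availability-zones us-east-1a \
  --min-size 1 --max-size 4 \
  --load-balancers my-elb

# 3. Scale-up policy; wire it to a CloudWatch alarm on, say, high CPU.
as-put-scaling-policy scale-up --auto-scaling-group my-asg \
  --adjustment=1 --type ChangeInCapacity
```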

TIP 04: Use Multi-AZ (Availability Zone) deployments

Regions contain several availability zones. Although you cannot create cross-region systems, you can create instances in different AZs. So if one data centre goes down, the other instances can still be responsive. It’s the simple principle of not putting all your eggs in one basket.

TIP 05: Customize the management console

The AWS management console comes with a cool feature: it enables you to pin your favourite services to the top of the page for easy access. There are a bunch of them, but most likely you’ll need EC2 and S3 available at all times. At least I do. You can pin them by simply dragging the service name and dropping it onto the top bar. After pinning them, they are always one click away.

AWS Customize Menu

TIP 06: Change the disk size while creating the instance

This is especially handy for Windows instances, as they demand more space than Linux ones. The default size for a Windows Server is 35GB. That’s actually quite enough for a standard Windows installation, but I guess Amazon is reserving some of the space for a reason, because when you launch the machine you only get around 3GB of free disk space, which to me sounds terrifying. If a log file gets a little out of hand it can bring down the whole machine. So it’s best to get some free space upfront, at least for peace of mind if nothing else.

AWS Change Disk Size

AWS Change Disk Size

TIP 07: Don’t forget to delete manually attached EBS volumes

When you terminate an instance, make sure you delete all attached EBS volumes that are not set to auto-delete. The default volume that comes with the instance has the Delete on termination option checked in the wizard, so it is cleaned up automatically. But if you create a volume manually and attach it to an instance, there is no option to set this flag, so you have to delete it manually. AWS is kind enough to warn you about such volumes when you delete the instance. If you don’t take care of them immediately and you have auto-scaling, you may end up terminating lots of instances, leaving unused disks that you keep paying for.

AWS Delete Instance
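A quick way to spot forgotten volumes is to list the ones in the “available” state, i.e. not attached to any instance. A sketch using the AWS CLI:

```shell
# Unattached EBS volumes: you pay for these even though nothing uses them.
aws ec2 describe-volumes --filters Name=status,Values=available
```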

TIP 08: Reserve as early as you can

This is another budget tip. If you are certain about the size of an instance, then buy a reserved instance of that type. A reserved instance is not a technical concept: when you buy one, you simply start paying less by the hour for an instance of that type. For a comparison of how much you can save, check out here:


It’s been a while since I started using StyleCop in my projects. Last year I managed to sneak it into my company’s projects as well. Applying it to existing projects and fixing all the errors was a tiring process at first, but I believe it was worth it. It really helps with consistency: regardless of who wrote a certain block of code, it’s very easy to read because everybody has to adhere to the same rules across the company. Here are a few tips to manage this:

01. Force StyleCop warnings to be treated as errors

Actually, I hate warnings altogether. That’s why I set “Treat warnings as errors” to All on the projects I work on. This helps eliminate many potential bugs before they become an issue.

Treat warnings as errors

Unfortunately, StyleCop errors are not included in this. But with a little tweak we can turn on this feature for StyleCop warnings as well. Just add the following line to your project’s .csproj file inside the first PropertyGroup tag:

 <StyleCopTreatErrorsAsWarnings>false</StyleCopTreatErrorsAsWarnings>

The wording is the opposite of Visual Studio’s (treat errors as warnings instead of the other way around) so we have to set this to false. After reloading the project, you won’t be able to build successfully without fixing all the StyleCop rule violations (which is a good thing!)

02. Integrate StyleCop with MSBuild

Naturally, if the process is not automatic it won’t work. If, as a developer, it’s left to me to right-click the project and run StyleCop manually, I’ll forget after a few times. The easiest way to integrate it with MSBuild is to add the StyleCop.MSBuild NuGet package to your project. Alternatively, if you have installed the full StyleCop application, you’ll find a StyleCop.Targets file under the installation directory. By importing that file into the project you can achieve MSBuild integration.

In a multiple-developer environment it’s best to use a fixed path so that when someone new starts working on the project they can still build it. To accomplish that, we mapped the R drive to a folder that contains the targets file so the build doesn’t break. Needless to say, new developers have to do the mapping to make this work.

03. Run StyleCop on the server as well

The problem with manually enabling the treat-warnings-as-errors feature on the developer’s system is that it can easily be forgotten or temporarily disabled for some reason. When a developer forgets to re-enable it, they can check in code that violates the code convention rules. To avoid that, we should reject such code at the source control level during the check-in process. This is where SVNStyleCop comes in. As the name implies, this solution works only for SVN; I haven’t yet looked into other source control systems like Git or TFS for this feature. You can get SVNStyleCop here:

The way it works is quite simple, and the official page has a good tutorial about it. Essentially you override the pre-commit hook and run StyleCop before the code is submitted. The downside is that you have to maintain a separate copy of the rules and StyleCop files, so when you update your rules you have to remember to update them on the server as well.

04. Use symbolic links to maintain one global rule set

Windows Vista (and above) comes with a handy utility called mklink. With the following command you can create a hard link to a shared Settings.StyleCop file anywhere you please.

mklink /H Settings.StyleCop {Path for the actual file}

This way all projects use the same settings file. The problem is that it’s a tad cumbersome, especially if your solution involves lots of projects.

05. A better approach for one rule set to rule them all

I was pondering how to minimize the effort to deploy StyleCop, and it hit me: our beloved NuGet could take care of this as well. StyleCop already has a package in the official NuGet repository, but the problem is that it comes with its own StyleCop rule file, so it’s not quite suitable for a team environment. It isn’t even ideal for a single developer, because all projects will have different rules and it can quickly become a maintenance nightmare. The idea is to create a package that contains the StyleCop rules and libraries. When the package is installed, it copies the libraries, rules and targets file under the project. An install script can also add the import-project and treat-warnings-as-errors settings mentioned in tips 1 & 2. The advantages of this method are:

  • All projects installing the package will be using the same rule set downloaded from server
  • MSBuild integration is done automatically
  • Treat warnings as errors update is done automatically
  • No configuration needed (i.e: Mapping drives, creating symbolic links etc)

The disadvantage is that if the rules are updated, the package needs to be re-installed for the projects. It’s still not perfect, but compared to the other methods I think it’s a neat way of distributing and enforcing StyleCop rules.

Tips & Tricks

I like Windows Live Writer and use it for blogging. The problem is that I start multiple posts at once, take some notes in them and save them as drafts. Sometimes when I’m on a different machine I want to add notes to the existing drafts, but (you guessed it) the drafts are saved locally on another machine. I already have Dropbox installed on almost all my machines, so I decided to harness it for the task.

STEP 01: Delete the My Weblog Posts folder on the destination machine

The local folder is created automatically at %UserProfile%\Documents\My Weblog Posts. Delete this folder, and make sure Live Writer is closed before deleting it.

STEP 02: Create a directory junction A directory junction is a mapping to another folder. In Windows 7 you can use mklink command to create directory junctions (as well as symbolic and hard links)

mklink /D "%UserProfile%\Documents\My Weblog Posts" "{PATH_TO_DROPBOX_ROOT}\My Weblog Posts"

Enter the correct path to your Dropbox folder and that’s it. Now you can enjoy the ease of synchronized blog drafts.


I learned a neat trick to force Windows to require a USB device to be plugged in before you can log on to the system. The tool to use for this is syskey, an ancient utility introduced with Windows NT SP3. Here’s how to do it:

  1. Insert your USB drive. As syskey only supports floppy disks, change the drive letter to A.

  2. Run syskey (From command prompt or by pressing WinKey + R then entering syskey)

  3. Select Store Startup Key on Floppy Disk


After you restart the machine, Windows will check for your “floppy” USB drive, and if it is not there it will display the error message: “This computer is configured to use a floppy disk during startup. Please insert the disk and click OK”. After you insert the disk you can log on by entering your password.


Know thy limits! This is especially important when you’re developing a system that expects high traffic. Moving systems onto the cloud makes it easier to adapt and scale out to match the load, but you have to prepare for node failures and sudden spikes in traffic. You also have to make sure that your system stays responsive under prolonged heavy load. Below are 3 tools I recommend for testing your system against such situations:

01. JMeter

Apache JMeter is a Java-based open-source desktop application. I posted a basic introduction to JMeter here. But it has many advanced features, which I’m planning to cover in a post in the near future.

02. Siege

Siege is an HTTP load-testing tool. It’s not as complex as JMeter, but it does the job well and is very simple to use. It runs on UNIX variants but not on Windows. It can be obtained from here.
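A typical invocation looks like this (the URL is a placeholder; -c sets the number of concurrent users, -t the duration):

```shell
# 25 concurrent users hitting the site for one minute.
siege -c 25 -t 1M http://www.example.com/
```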

03. Chaos Monkey

Chaos Monkey

Originally developed by Netflix and open-sourced later, Chaos Monkey is an AWS-specific tool. What it does is connect to your AWS system and terminate instances randomly, so that you can observe how your system behaves in such worst-case scenarios. The good thing about it is that it selects its “victims” by looking at a tag you provide. So if you don’t want it to touch a single point of failure, such as a database server, you can easily exclude it. The source code can be downloaded from here.



I love my Yubikey so much that I recently bought another one. I couldn’t find a good use for it yet but I’m sure I will someday :-)

If you don’t know what a Yubikey is, check out its vendor here. Basically it is a one-time password (OTP) generator that acts as a USB input device. It doesn’t require batteries to operate, so you can use it anywhere without worrying about such issues. I’m trying to incorporate it into my daily life so that I can leverage two-factor authentication as much as possible.

Today I found another use: the Yubikey WordPress plugin. Using this plugin, I can now log in to my blog with my password plus an OTP generated by the Yubikey. Yubico provides a web API, and the plugin calls this API to authenticate your device. To learn more about the settings, visit the plugin’s site:
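Under the hood the verification is just an HTTP call. A hedged sketch of the Yubico validation API (the id and otp values are placeholders — id is your API client ID, otp is the string the key types out):

```shell
curl "https://api.yubico.com/wsapi/2.0/verify?id=12345&otp={otp from the key}"
# The response contains a line such as status=OK (or status=REPLAYED_OTP etc.)
```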


One of the key goals when developing a web application is to make it scalable, meaning it should handle lots of traffic without degrading performance. But most of the time we only care about performance when it becomes a problem, and by then it’s generally too late to make radical design changes. Therefore, upfront automated load testing is very helpful for gauging your application’s performance and being aware of its limits. One popular tool for load testing is JMeter.

JMeter Basics

  • Thread Group: Each thread acts like a single user. All elements must be under a thread group.
  • Listener: Provides access to the information gathered by JMeter. Some listener examples are Aggregate Report, Graph Report and Summary Report.
  • Logical Controller: Lets you add constructs to control the flow of your tests, such as If, While and ForEach.
  • Sampler: Tells JMeter to send requests to a server and wait for a response.

When you launch JMeter there are 2 items in the left menu: Test Plan and Workbench. Test Plan is the real deal: the actual sequence of events that are fired. Workbench is where you can store test elements.

Creating a load test plan can be accomplished in 2 simple steps:

  1. Create a thread group: Everything runs under a thread group. Think of each thread as a user.


  2. Insert an HTTP request: Set the host name and page you want to call.


That’s it! If all you need is to create some heavy load, you can create a few different HTTP requests and start bombarding your server right away.
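Once saved, the same plan can also be run without the GUI, which is handy on a dedicated load-generator machine (file names are examples):

```shell
# -n: non-GUI mode, -t: test plan to run, -l: file to log the results to
jmeter -n -t loadtest.jmx -l results.jtl
```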

A bit of trivia about JMeter: it is mentioned in the book “We Are Anonymous”. Apparently it can also be used as a DDoS tool!



Online Education

Last week another online course started at Stanford University, called An Introduction to Computer Networks. It started on the 8th of October, and they have released a good deal of material for the first week. I hope I’ll follow it through to the end. If you’re interested you’d better hurry, because it’s not easy to catch up once the videos pile up!

Here’s the link to access the site:

UPDATE: The above link seems to have stopped working. This should be the current one now: Stanford CS144 Networking Class

Site news

I’ve been postponing this for a long time, but I finally did it: I started using AWS (Amazon Web Services). My blog doesn’t get much traffic, so I don’t actually need the scalability that is EC2’s strongest point, but I wanted to play with the cloud so I decided to move my blog first. It’s quite easy to do, and I’ll explain how to migrate your existing blog to the cloud within minutes:

STEP 01: Backup your existing blog

mysqldump -u root -p {database name} > blog_backup.sql

Download your WordPress installation folder and the backup you just created from your existing server. I used WinSCP to get my files. It can be downloaded here.

STEP 02: Create AWS EC2 instance

The fun part starts now. Log in to your AWS account and go to the EC2 console. Click Launch Instance and follow the wizard. For my blog I selected a Micro instance, but it depends on your needs. I selected a 64-bit Amazon Linux AMI for the instance.

STEP 03: Assign an IP to your server

Our machine has just started running, but to update our domain’s DNS records we need an IP. On the left menu, click the Elastic IPs link and allocate a new IP address. Elastic IPs are free as long as they are associated with a running instance.


Right-click on the IP, select Associate, and choose your instance. After this step we have a running machine with a public IP.

STEP 04: SSH into the machine

SSH is enabled by default, and you must have created a key pair to access your machine during Step 2. I use PuTTY as my SSH client, which can be downloaded here. It’s best to switch to root during the installations, so type:

sudo su

STEP 05: Install required programs

First install Apache:

yum install httpd

Then PHP:

yum install php php-mysql

Then MySQL:

yum install mysql-server

If you use SSL like me you also need to install SSL module for Apache:

yum install mod_ssl

Start Apache and MySQL

service httpd start

service mysqld start
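If you also want Apache and MySQL to come back after a reboot, enable them at boot time as well (this is the standard way on Amazon Linux of that era):

```shell
chkconfig httpd on
chkconfig mysqld on
```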

STEP 06: Customize MySQL and import your blog

  1. Run the following command to set the root password and harden the default installation:

    mysql_secure_installation

  2. Login to MySQL

    mysql -u root -p

  3. Create your database, user and grant access to that user

    create database {database name};

    create user '{user name}'@'localhost' identified by '{password}';

    grant all privileges on {database name}.* to '{user name}'@'localhost' with grant option;

  4. Switch to database and import data

    use {database name};

    source {path/to/mysqldump file you uploaded}

STEP 07: Copy blog files

Copy the WordPress files you uploaded under /var/www/html/{directory name}

STEP 08: Configure Apache

Enter this command to edit the configuration file:

vi /etc/httpd/conf/httpd.conf

Go to the end of the file and create a new virtual host like this:

NameVirtualHost *:80
<VirtualHost *:80>
	ServerAdmin webmaster@localhost
	DocumentRoot /var/www/html/{directory name}
	ServerName {Your domain name}
	ServerAlias www.{Your domain name}
</VirtualHost>

And restart the Apache service:

service httpd restart

STEP 09: Enable access to FTP, HTTP and HTTPS

One last step before testing your blog: open port 21 (for installing themes, plugins etc.), 80 (for viewing!) and 443 (if you’re going to use SSL) in the AWS EC2 console. To do this, click Security Groups on the left menu, add the ports, press Add Rule and then Apply Rule Changes.

STEP 10: Install FTP server

  1. Enter the following command and install the FTP server:

    yum install vsftpd

  2. Create a certificate to be used with FTPS connections:

    openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout /etc/vsftpd/vsftpd.pem -out /etc/vsftpd/vsftpd.pem

  3. Edit the configuration file:

    vi /etc/vsftpd/vsftpd.conf

    Disable anonymous access and add these lines to the end of the file. Then save the file and exit.

  4. Start the service

    service vsftpd start

  5. Create a user for FTP access and set the password

    useradd {user name}
    passwd {user name}
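The lines to add in step 3 typically look like the following for an FTPS setup — a sketch that assumes the certificate path from step 2; verify the option names against the vsftpd documentation:

```
anonymous_enable=NO
ssl_enable=YES
rsa_cert_file=/etc/vsftpd/vsftpd.pem
rsa_private_key_file=/etc/vsftpd/vsftpd.pem
force_local_data_ssl=YES
force_local_logins_ssl=YES
```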

STEP 11 (Optional):

If you don’t want to enter the FTP credentials every time, you can
set them in the wp-config.php file:

 define('FTP_BASE', '/var/www/html/{folder name}/');
 define('FTP_CONTENT_DIR', '/var/www/html/{folder name}/wp-content/');
 define('FTP_PLUGIN_DIR', '/var/www/html/{folder name}/wp-content/plugins/');
 define('FTP_USER', '{user name}');
 define('FTP_PASS', '{password}');
 define('FTP_HOST', '{hostname}');
 define('FTP_SSL', true);

STEP 12 (Optional): Install SSL certificate for the blog

This step is optional, but using SSL is strongly recommended when connecting to your blog as an administrator.

Upload your private key and certificate files to your server and copy them into the SSL folders:

mkdir /etc/ssl/private
mv filename.key /etc/ssl/private/
mv filename.crt /etc/ssl/certs/
mv CARootCert.crt /etc/ssl/certs/

Modify the Apache configuration file for SSL:

NameVirtualHost *:443
<VirtualHost *:443>
 DocumentRoot /var/www/html/{folder name}
 ServerName {domain name}
 ServerAlias www.{domain name}
 SSLEngine on
 SSLProtocol all -SSLv2
 SSLCertificateFile /etc/ssl/certs/filename.crt
 SSLCertificateKeyFile /etc/ssl/private/filename.key
 SSLCACertificateFile /etc/ssl/certs/CARootCert.crt
</VirtualHost>

STEP 13 (Optional):

If you get an “Unable to locate Themes directory” error, add the following snippet to wp-config.php:

if (is_admin()) {
 add_filter('filesystem_method', create_function('$a', 'return "direct";'));
 define('FS_CHMOD_DIR', 0751);
}

STEP 14: Enjoy!

That’s it! You’ve installed WordPress, imported your old posts, and secured your blog with FTPS and HTTPS access. Time to celebrate!

Online Education, Review

Recently there has been an explosion in online courses for higher education. It’s a great chance for people looking for comprehensive academic classes. I started with a bunch of these classes, but time proved that I should have selected more wisely, because they are not fluffy little tutorials. You really have to take the time to watch the videos, take the exams and hand in the assignments. So here are the ones I find useful, mixed with some of the older resources I used for similar purposes.


Udacity is one of my favourites. It’s very forgiving about deadlines. It uses YouTube extensively; you can even take the quizzes and answer questions right on the video and submit your results. It has very nice courses, including Applied Cryptography and Artificial Intelligence.


Coursera has a ton of Stanford classes like Cryptography, Machine Learning and Computer Networks. I had started the Machine Learning course when it was in a sort of beta phase. Princeton University offers nice computer science classes on this site as well.


I don’t know if this was a beta site, but it offers only one class, by MIT: Circuits and Electronics. More about MIT courses below…


edX has a small but quality selection of courses from MIT, Harvard and Berkeley. Software as a Service and Artificial Intelligence courses from Berkeley look delicious.

Academic Earth

I personally haven’t tried this site but heard about it from a friend. From the looks of it, I think they have mostly old material. They have some video courses from Stanford which I had seen a few years ago on iTunesU. Still, it may be worth keeping an eye on.


iTunesU was a gold mine in my eyes when I first discovered it. Watching courses from Stanford while commuting was a great way of making good use of time. The more interactive courses have overshadowed it now, but it’s not dead yet. Far from it. Apple released an iTunesU app which allows you to download your favourite courses very easily.

Khan Academy

Frankly, I’m not a big fan of it. Compared to the rest, it has rather simple and entry-level tutorials. But it still shoulders an important mission and helps a lot of people get some valuable resources, so I thought it’s worth mentioning.


Off the Top of My Head

I’ve been using my own domain in my home network for a few years now, but I hadn’t tried Windows Server Update Services (WSUS) until recently. After re-organizing my network, I ended up having almost 20 virtual machines. Since most of them are running various flavours of Windows, keeping them up-to-date became an issue. That’s when I recalled the existence of WSUS. What it does, basically, is allow you to download Windows updates to one of your own servers and then distribute them to the other machines over the LAN. So no more downloading 300 MB of service packs over and over again.

Installing WSUS is quite easy. You just have to open Server Manager and add the Windows Server Update Services role.
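On Windows Server 2012 and later, the same role can also be added from an elevated PowerShell prompt; a sketch (cmdlet availability depends on your server version):

Install-WindowsFeature -Name UpdateServices -IncludeManagementTools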


Then you can select which products you want to download updates for, and also what types of updates you want to download.


The trickiest part is getting the client machines in the domain to download the updates from your server. That is achieved through Group Policy Management. TechNet has a nice article describing the necessary actions:
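Under the hood, the policy essentially sets a handful of registry values on each client. A sketch of the equivalent .reg file (the server name and port here are placeholders; use your own WSUS server's address):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://wsus-server:8530"
"WUStatusServer"="http://wsus-server:8530"

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"UseWUServer"=dword:00000001

In a domain, though, letting Group Policy push these values is far more manageable than editing registries by hand.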

Obviously, home users are not its normal target audience. But after seeing its benefits, I strongly recommend it to anyone running their own domain.


I had been planning to play around with Visual Studio LightSwitch for a while. Finally, I could spare some time. For me the best way of learning is by doing, so before I started playing I had to come up with a project first.

To find an appropriate project, of course, we have to analyse what LightSwitch is and what it’s good for. In a nutshell, Visual Studio LightSwitch is a fast and easy way to develop data-driven Line-of-Business (LOB) applications. As a developer, I am generally not very fond of such tools because they impose many limitations, whereas while writing code we have unlimited freedom. But I also hate wasting time with boilerplate code. Creating add/edit/search screens for some entities is such a trivial and boring task. Such forms should always be generated by a tool to sustain consistency; otherwise inconsistencies creep in, especially in large applications and organizations. By the way, this reminds me of one of the worst forms I have ever seen in a Microsoft application: the reporting form in Team Foundation Server 2010, shown below. Even the tabs in the same form are inconsistent. But I digress! Let’s move on.

TFS Reporting

For a long time I was looking for a nice open-source application to manage my movie collection. I tried a few but couldn’t exactly find what I wanted. So while trying to find a project idea I decided to create a simple movie manager application. It’s mainly data entry and search, so it sounded like an ideal project type for LightSwitch.

The result was amazing! Not that I created a complex and fully-functioning application, but within minutes I had a simple database, two forms to enter movie and director information, and a movie search form.

Movie Entry

Search Entry

The screens are customizable, but even the default templates produce very satisfying results. They cover all the basic needs for data entry, validation and search. So, having such a tool in my arsenal and always preferring to develop my own software instead of using someone else’s, I decided to develop my movie and TV show management program with LightSwitch. Considering this is only version 1.0, I think there’s great potential in it, for there are so many applications to develop but not enough time.


After I downloaded the bits and tried to install it for the first time on a virtual machine, I sadly discovered that there was something wrong with the ISO image, because it gave an error while extracting files at 62%. A quick hash-check (which should have been done right after the download) showed that the file was corrupt, and there went 5 GB of my bandwidth down the drain! Lesson learned: always perform a hash-check and verify the file if a hash is provided by the source.
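The check itself is a one-liner. A sketch with a placeholder file (the path and filename are illustrative; run it against the real ISO):

```shell
# Placeholder file standing in for the real 5 GB ISO; with the actual download
# you would compare the printed digest against the hash published by the source.
echo "pretend this is the ISO" > /tmp/image.iso
hash=$(sha256sum /tmp/image.iso | awk '{print $1}')
echo "$hash"
```

On Windows, `certutil -hashfile image.iso SHA256` reportedly produces the same digest.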

Anyway, after downloading it successfully, I installed it on a VMware virtual machine. I followed the step-by-step guide at, but there aren’t any tricks actually. I started using it on the virtual machine a little bit and, like everybody else, I hated it! Obviously, the tile interface is designed for tablets, or touch-enabled screens to be more generic.

So, my first interaction with it was negative for 2 reasons:

  1. It’s always a bit off-putting testing out a new OS on a VM, because you revert back to your original OS all the time and you don’t get the entire experience.
  2. In this case, it’s clear that without a touch screen I wouldn’t enjoy using it much. (I’m not sure how well people will react when it RTMs.)

Then I suddenly remembered that I already had a touch-screen notebook! Three years ago, I bought a notebook with a 12.1” screen, which back then was called a tablet PC because it had a rotating screen. Until this day I had only used its touch capabilities a few times for experimental purposes. My initial intention to use it as an e-book reader soon proved to be preposterous, as it weighs a solid 2.2 kg! (An iPad 2 is 600 grams, by the way.)

But my long wait to find a legitimate use for it is over! Finally I can use its tablet-ish functionality!

First I installed it on a VHD and booted off of it. Scott Hanselman has a great blog post walking you through the steps:

So I immediately installed it, but again I was not satisfied! The performance wasn’t good and it also messed up the boot loader. Even though I could opt to boot into my old Win 7 installation, it just restarted and couldn’t load it. I tried to repair it using the Win 7 DVD, but to no avail. Using the repair functionality of Windows 8, however, did the trick and I could boot into my Win 7 again. And to fix it once and for all, I changed the default OS from the system settings.


Now I could freely choose the OS I wanted, but the performance issues remained. Then I decided to use the hard drive of my broken PS3. So I switched disks and installed it for the third time, on its own personal disk this time!

Performance is still not outstanding, but then again my notebook never had outstanding performance regardless of the OS running on it. Finally, here’s a little video I took. The screen is resistive, so it’s a bit hard to use with a finger, but it’s still the closest thing to a tablet running Windows 8 that I have at the moment.

I will be playing around with it now and hopefully post more on this subject later.


Recently I decided to buy a surveillance camera to set up in my room, for two reasons mainly:

  1. Security: Shocking but true!
  2. Research: I always wondered how these devices operate, how they are installed, what protocols they use, etc.

I ordered one, but the shipment never arrived. After waiting for two months, and a battle for a refund, I ended up where I started. (By the way, in this instance I was too cheap to shop at a company called. I’m glad I finally could get a refund, but I’m never ever going to shop there again. I strongly recommend everyone to stay away.)

In the meantime, it occurred to me that I had two laptops with webcams and an external USB webcam that I plug into my desktop PC. With three cameras I should be able to set up a small security system. So I started searching for software to turn my cameras into a security system. Surprisingly, I found an open-source one. It’s called iSpyConnect ( Better yet, it’s written in C#. It supports cool features like uploading to YouTube, but most features that require server support require a subscription. In the free version you are allowed to upload pictures via FTP to one server. But since I have the source code, I’m planning to make my own changes.

So for now I have the webcams and the required software. I can’t use my MacBook and its webcam since it’s not supported, but I tested the system with two webcams (one facing the door and the other facing the window). When it detected motion, it started recording video and also uploaded pictures to my FTP server on the Internet. So even if a burglar notices the system and somehow manages to delete the local copy of the video feed, there’s still good evidence safe and sound out in the cloud.
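The core idea behind this kind of motion detection (which iSpy implements in C#) is simple frame differencing: if enough pixels change between consecutive frames, treat it as motion. A toy Python sketch of that logic, with no camera involved (frames are just flat lists of pixel brightness values, and the thresholds are arbitrary):

```python
def motion_detected(prev_frame, curr_frame, pixel_threshold=25, ratio_threshold=0.05):
    """Report motion when the fraction of changed pixels exceeds ratio_threshold."""
    changed = sum(
        1 for p, c in zip(prev_frame, curr_frame) if abs(p - c) > pixel_threshold
    )
    return changed / len(prev_frame) > ratio_threshold

# A static scene vs. a frame where a quarter of the pixels brightened
static = [100] * 100
moved = [100] * 75 + [200] * 25
print(motion_detected(static, static))  # → False
print(motion_detected(static, moved))   # → True
```

Real software adds noise filtering and region masks on top of this, but the trigger for "start recording and upload a snapshot" is essentially this comparison.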

I am planning to improve the system and I will be posting more details about it as I go along.


I use RSS feeds extensively to follow tech news. I love Google Reader and I’ve been using it since forever. But lately I realized that I didn’t have much experience in tweaking the settings. I never felt the urge to go into settings and manage my subscriptions. Until ten days ago.

I decided to eliminate some feeds because they seemed to have been inactive for a long time. So I clicked on the Manage subscriptions link which, by the way, is horribly placed from a UI standpoint. It is not even always visible: when you hover over the feeds, the URL of the feed covers the button.

Google Reader

After fiddling a little with the labels, I made a horrible mistake: I selected all items and clicked Unsubscribe. As one may easily guess, it deleted all my subscriptions.

Google Reader

I had an OPML backup from long ago, but I don’t even know where it is now. Even if I looked for it and found it, it would probably be outdated beyond use. Lesson learnt: start backing up RSS feeds regularly and automatically. While I was desperately pondering what I should do to recover my beloved little messengers, it hit me! I had an application on my iPad called Mr. Reader. It syncs with Google Reader, so I also had my entire list of feeds on my iPad. I was hoping the app supported OPML export so that everything would get back to normal in 5 minutes. Unfortunately, it didn’t! At least I was lucky that the iPad was offline at the moment, so it couldn’t sync and kept my feeds on the device. (Needless to say, I immediately turned off network access, quarantining my list!) I contacted the app company’s support, which is the developer himself, and he was very kind to respond quickly and offer me a few solutions. One of them was extracting the data from the iPad by using a free tool called JuicePhone ( I installed it on my Mac immediately, hooked up my iPad and extracted all my data from it. Lesson learnt: start backing up my iPad regularly via JuicePhone as well as iTunes.
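An OPML backup, by the way, is nothing exotic: it's just a small XML file listing the feeds. A minimal Python sketch of producing one (the feed list here is illustrative):

```python
import xml.etree.ElementTree as ET

# Illustrative feed list; in practice this would come from the recovered data
feeds = [("Example Blog", "http://example.com/rss")]

opml = ET.Element("opml", version="1.0")
body = ET.SubElement(opml, "body")
for title, url in feeds:
    # One <outline> element per subscribed feed
    ET.SubElement(body, "outline", text=title, type="rss", xmlUrl=url)

xml = ET.tostring(opml, encoding="unicode")
print(xml)
```

Save the output as feeds.opml and most RSS readers, Google Reader included, can import it back.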

After a quick examination, I found out that the app uses an SQLite database to store its data. I downloaded SQLite Expert (

sqlite expert

It has a free version called Personal Edition, and it has quite a nice UI. Browsing through the tables and viewing their data, I felt quite relieved when I saw that the list of my feeds was safe and sound.

sqlite expert
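The same browsing can be done from code with Python's built-in sqlite3 module. A sketch against a hypothetical schema (the real table and column names in Mr. Reader's database have to be discovered by browsing the file first, as described above):

```python
import sqlite3

# Hypothetical table/column names standing in for the app's actual layout
conn = sqlite3.connect(":memory:")  # point this at the extracted .db file in practice
conn.execute("CREATE TABLE feeds (title TEXT, url TEXT)")
conn.executemany(
    "INSERT INTO feeds VALUES (?, ?)",
    [("Example Blog", "http://example.com/rss"),
     ("Another Feed", "http://example.org/atom")],
)

# Dump every subscription as "title: url"
for title, url in conn.execute("SELECT title, url FROM feeds ORDER BY title"):
    print(f"{title}: {url}")
```

From there it's a small step to write the recovered list out in whatever format your new reader accepts.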

Now that I have all my feeds, I think it’s a great chance to organize them and add or remove them in a controlled way. By the way, after I completed getting my list, I sent an email to the author of the app, thanking him and telling him that I had managed to extract my data. A few days later the software updated itself, mentioning some change about the database. Then I added a new feed and applied the same steps above to see if it still works, but the database seemed to be the same. I mean, the app synced, deleted all my subscriptions and added the new test feed, but the list in the table was the same as before. Maybe he decided to keep the data somewhere else privately, to keep it from people like me. Anyway, his advice worked out for me perfectly, so I thank him again from here.


Recently I was looking for software to manage my backups. I came across GoodSync. ( It is very effective and supports a wide variety of channels. (I will try to review GoodSync and my other favorite tools in detail in another blog post.)

After the trial period, it started to impose limitations. Since I was happy with the tool, I decided to purchase it. It’s not very pricey; I think it well deserves its $30, but they also provide another option, which is called pay by TrialPay.

I vaguely remembered the term when I saw it, but I had never tried or examined it thoroughly before. Basically, there is a list of offers to select from, such as subscribing to a service or buying a product. After you select one and complete the required steps, you wait until TrialPay confirms it. And after that, voila! They send you your product key and that’s it. Of course, if the TrialPay offers don’t tickle your fancy you might find it wasteful, but for me the list was quite attractive. For example, one offer was a free 14-day trial. I subscribed for free and I have a license for GoodSync now. Another nice offer was registering at GoDaddy and making a purchase of at least $5. I used this offer too, to buy another piece of software. Since I was already planning to buy a few domain names, the timing couldn’t be better. And it didn’t take much to convince my brother to sign up, as long as I was the one paying.

So, from now on, whenever I see a TrialPay option I will jump right in to see the available offers. If you’re interested in purchasing GoodSync via TrialPay, here’s the link:

UPDATE: Link above is removed as it was broken