productivity

I’ll keep rambling about my opinions, observations and experiences about productivity. Here goes:

01. Don’t try to cut back sleep to gain time

We all feel like we need more time to accomplish things, but sleep is not the right place to look for savings. I tried it in the past: I tried to get away with sleeping 3-4 hours every night. At first it felt like the days were longer, but towards the end of the week I was so tired that I couldn’t perform well during the day. So I was up for more hours, but my productivity had dropped significantly. I strongly suggest getting a good night’s sleep of around 6–8 hours.

02. Two pomodoros of reading a day makes you (feel) smarter

In my last post I talked a bit about the Pomodoro technique. I love using it to divide small tasks into short work units. The idea of spending two pomodoros on reading is similar to saving money in a piggy bank. It’s easy to use 50 minutes a day for reading no matter how busy your day is. You won’t notice it and it won’t become a huge burden once you make it a habit. But over time it adds up to a bunch of finished books that would otherwise be gathering dust on your desk (or iPad or Kindle or phablet or… anyway, you get the point).

03. Review your processes often

Setting goals for the day and week is a good practice, but to make it work you have to review the outcomes periodically. In the excellent book 30 Days of Getting Results, J.D. Meier calls this the “Friday Reflection”. Every week, review the status of the tasks you targeted for completion and, more importantly, update the process for the following week based on lessons learned from the previous one.

04. Keep learning

Every day we waste a tremendous amount of time doing chores, commuting etc. In this day and age we have access to so many different means of learning. We have small computers in our pockets that are almost always connected to the Internet. It’s a great chance to make use of the idle time by watching a PluralSight video, for example. There are similar online courses of course, but PluralSight happens to be my favourite, with apps for all mobile platforms. Another way is listening to podcasts. There are a lot of podcasts about technology and development. A few of my favourites are:

  • .NET Rocks
  • Run As Radio
  • Hanselminutes
  • Yet Another Podcast by Jesse Liberty

I’m planning to compile a list of my favourite shows, so hopefully this list will become more comprehensive over time.

The reason I added this item is that using these learning mechanisms helps you stay sharp and on track. Generally it’s not possible to really learn something just by listening to it, but the idea here is to keep an eye on the latest developments so you can research them later.

05. Don’t multitask!

Trying to do multiple things at the same time creates an illusion of going fast and accomplishing a lot, but I don’t believe that’s the case. Of course, I’m not talking about doing something meaningful alongside a chore. As in #4, it’s perfectly acceptable to watch a technical video on your way to work. The type of multitasking I’m against is doing it while coding, for example. My approach is to divide such tasks into pomodoros and focus on one thing at a time and nothing else. This improves the quality of that unit of work dramatically and eventually leads to accomplishing more.

Resources

review, book

Framework Design Guidelines


Having a common framework is quite important for avoiding code duplication. But designing that framework properly and consistently is a huge challenge. Even though we are living in a RESTful world now, I think having a framework or a set of common libraries for personal or commercial projects is still relevant. A well-designed, well-tested framework significantly improves any application built on top of it.

I had referred to this book in parts before, but this time I decided to read it cover to cover and make sure I “digest” it all. It contains countless gems that every developer should know. Anyone developing even small libraries can benefit from this book a lot; you don’t need to be designing the .NET Framework (like the authors were). It also comes with a DVD full of presentations by the authors.

Companion DVD

Unfortunately I lost my DVD. It’s probably inside one of my many CD cake boxes. I was hoping to check it out as I went along with the book, but luckily I found out that it is freely available from the publisher. Check out the download link in the resources section. One thing to beware of is that you may come across another link on Brad Abrams’s blog here. That download works fine, but one of the presentations inside it is corrupted, so I suggest you download each section separately from the site in the resources.

Some notes

The book is full of gems and very useful tips. Here are just a few:

  • Keep it simple: “You can always add, you cannot ever remove”
  • There is no perfect design: You always have to make some sacrifices and consider the trade-offs
  • Well-designed frameworks are consistent
  • Scenario-driven design: Imagine scenarios to visualize the real-world use of the API. When in doubt, leave the feature out; you can add it later. Conduct usability studies to get developers’ opinions.
  • Keep type initialization as simple as possible. It lowers the barrier of entry.
  • Throw exceptions to communicate proper usage of the API. It makes the API self-documenting and supports the learning-by-doing approach.
  • Going overboard with abstractions may deteriorate the performance and usability of the framework.

Even though it’s been a few years since this book was released, it is still a very helpful resource.

Resources

fitness, game

Yet another horrible term coined by inaptly fusing two words! I was going to call dibs on it, but a quick Google search showed other people had used it before. Someone even registered the .com domain for it! Anyway, the term is obviously an abomination of the words “Ingress” and “Exercise”.

What is Ingress

Ingress Logo

Ingress is an augmented reality game developed by Google (well, actually by a company called Niantic Labs, which is owned by Google). To play the game you need to install the free app on an Android device. (I saw here that iOS support is coming in 2014.)

For me the beauty of this game is getting exercise without even noticing it, even developing a desire for it. Last Sunday, for example, I walked around for at least 30 minutes hacking a bunch of portals. To an outsider I might have looked like I was enjoying a nice walk in the park, but actually I was trying to keep the world free from the unreliable Shapers! I didn’t even notice I had walked for that long.

The story

An alien race called the Shapers use Portals to teleport to Earth. There are two teams: The Resistance and The Enlightened. The Resistance are opposed to the Shapers, whereas the Enlightened believe they should be embraced.

The goal

The ultimate goal is to control as many portals as you can. You can create links between the portals your team controls, and the triangular area between the portals forming the links is called a “control field”. Any player located inside this area is considered to be under your team’s control; they are counted as “mind units”. That’s essentially how the current control rates on the intel page are calculated. (As a proud Resistance agent, it saddened me to see we are down to 43% of global control.)

The gameplay

Portals can be found anywhere. They can be controlled by either of the teams or they can be unclaimed. To claim a portal, at least one resonator must be placed on it. To capture a portal controlled by the opposing team, their resonators must be destroyed by firing XMP. You progress to higher levels by collecting AP (Action Points). Once you start hacking portals you will notice some items accumulating in your inventory.

Basic strategy

Okay, here’s a guide for the absolute beginner: just walk or run around and hack every portal you see. Don’t attack powerful enemy portals; you will just end up using all your weapons and energy. If you find unclaimed portals (the gray ones), jump on them instantly. You will gain a lot of AP by deploying resonators and linking them to other portals. Before attacking enemy portals, check out their deployment and mod status. Once you have enough weapons in your arsenal and move up a few levels, try to find fairly weak portals and zap the resonators. You can earn some sweet AP by creating control fields. Also, don’t let enemy portals surround you: links cannot intersect, so if the enemy has a link going over your portals you cannot establish links between them. In that sense, the best defense is offense. Just pick your fights carefully.

Glossary

  • Portal: Places where Shapers can teleport to Earth. The main goal is finding and hacking them.

  • Hacking: You hack portals to get Action Points and add items to your inventory. You can hack any portal, but you only get AP by hacking other teams’ portals. You get more items by hacking your own portals.

  • Resonator: Every portal has 8 resonator slots. Resonators deplete over time, and eventually the portal becomes unclaimed unless they are recharged. Recharging helps your team keep control of the portal for a longer period of time.

  • Portal Key: When you hack a portal you might get a portal key (there is a 15% chance you will). They are very important because you need them to create links between portals.

  • Mod: There are various mod items that let you upgrade a portal. Every portal has 4 mod slots. The mods are portal shields (defend portals against XMP attacks), force amplifiers (increase the force of the portal), link amplifiers (increase the portal’s link range), multi-hacks (increase the number of times a player can hack a portal), heat sinks (decrease the wait between hacks) and turrets (increase the portal’s attack rate).

  • XMP Burster: Weapons used to fire XMP at enemy portals. You can destroy resonators, links and control fields with them.

  • Power Cube: Used to store XM so you can recharge your XM reserves.

Conclusion

It’s an enjoyable game for me, though I can see myself getting bored of it pretty soon. But I have a great motivation to keep playing: it makes exercise just fly by. I cannot wait for spring, when I can start using it while running, which would let me hack many more portals than I can by walking.

Resources

Amazon Web Services, Cloud Computing

Amazon S3

I have two AWS accounts, and I made the mistake of mixing the services I used between them. More specifically, I hosted an application on one account but used S3 on the other. So I perpetually had to switch back and forth between accounts to access all the services I used. At first I thought fixing it would be a non-issue, but it proved to be a rather daunting task.

Bucket naming in S3

In S3, all bucket names must be unique. You cannot use a name if it’s already taken (much like domain names). Since I was using the bucket already, creating the same bucket in the other account and copying its contents was not an option. The second idea was to create the target bucket with a temporary name, copy the contents, delete the first one and rename the target bucket. Well, guess what? You cannot rename a bucket either! Another problem is that when you delete a bucket, you cannot create a new one with the same name right away. I’m guessing this is because of the redundancy S3 provides: it takes time to propagate the operation to all the nodes. My tests showed that I could re-create the bucket in the other account only after 45–50 minutes.
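
I did all of this by hand at the time, but for the record, the same dance can be scripted with today’s AWS CLI. A rough sketch, assuming two configured profiles and an illustrative bucket name:

# Back up the bucket locally from the old account
aws s3 sync s3://my-bucket ./my-bucket-backup --profile old-account

# Delete it, then wait for the name to become available again (45-50 minutes for me)
aws s3 rb s3://my-bucket --force --profile old-account

# Re-create the bucket in the new account and restore the contents
aws s3 mb s3://my-bucket --profile new-account
aws s3 sync ./my-bucket-backup s3://my-bucket --profile new-account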

To develop or not to develop

My initial instinct was to develop a tool to handle this operation, but I decided to check out what’s already available first. I had been using Cloudberry occasionally, but wanted to check out its competitors, hoping one of the tools would support the functionality I needed.

Cloudberry Explorer for Amazon S3

I find this tool quite handy. It has lots of functions and a nice, intuitive interface. It comes in two flavours: Free and Pro. I have used the free version so far, and unless you are a big enterprise it seems sufficient. It allows you to manage multiple AWS accounts. It allows copying objects between accounts, but not moving a bucket (although after my findings above I wasn’t very hopeful anyway).

AmazonS3 CloudBerry Main

As you can see in the menu bar, it supports lots of features.

S3 Browser

This one also comes in a free version as well as a paid one. The free version is limited to 2 accounts and you can only see one account at a time.

S3 Browser

I tried to copy a file from one account and paste it into the other, but I got an Access Denied error. I could do the same thing with Cloudberry in seconds by simply dragging and dropping onto the target folder.

Bucket Explorer

The third candidate only has a 30-day trial version as opposed to a free one. The second I installed it I knew it was a loser for me, because it doesn’t support multiple accounts. Also, as you can see below, the UI is hideous, so this is not the tool for me.

Bucket explorer

…and the winner is

Cloudberry won by a landslide! It’s far superior to both of the other tools combined.

Operation Bucket Migration

So I backed up everything locally and deleted the source bucket so that I could create the same one in the new account. After checking periodically for about 45 minutes, I was finally able to create the bucket and upload the files. I set the permissions and the operation was completed without any casualties… Well, at least I thought that was the case…

Nobody is perfect!

After I uploaded the images I reloaded my blog. The first image re-appeared and I was ready for celebrations, which were abruptly interrupted by the missing images in the second post. The images were nowhere to be found in either of the two backups I had taken. I think Cloudberry has a bug when handling filenames with hyphens. I’m still not certain that is the case, but that’s the only characteristic that differs from the other files. Anyway, the moral of the story: triple-check everything before initiating a destructive operation, and don’t trust external tools blindly.

Resources

book review

SLAAAAYERRR!

This book had been in my queue for quite a long time. Luckily Slayer never gets old, so the book is still relevant! It’s been 30 years since they released their debut album (Show No Mercy). I’ve listened to all of their albums countless times, yet whenever I need something fast and heavy the name Slayer still comes to mind first. Now that Jeff Hanneman is dead and Dave Lombardo has left the band (yet again), they are close to the end of their career, at least in terms of releasing new material. So I’m glad I got to finish the book before they disbanded!

The Book

The Bloody Reign of Slayer

This book was written in 2008, so it covers all but the last album (World Painted Blood). Honestly, I finished the book only because it is about one of my favourite bands, not because it was a page-turner. The sections about the early years of the band are rather interesting, but later on it doesn’t offer much.

It’s like a collection of album reviews. I liked the album review parts; analyzing all the songs helps you understand them better. But I think there is just too much boring stuff, the kind of detail that might make sense in an article about a recent release, but is pointless and irrelevant 15 years after the album came out.

It was also a bit of a disappointment to see how boring the lives of the band members are! They are just 4 regular guys who come together to make the records and then live their separate lives. Of course, as the author mentions, this is not an official book endorsed by the band. So maybe it comes across as boring because of the lack of information released by the members.

Anyway, I wouldn’t recommend this book unless you are a diehard Slayer fan. Even in that case I’d recommend buying it mostly because it would look cool on your bookshelf: it’s a nice-looking hardcover book, but the contents are just too mundane. There aren’t many interesting notes I remember from the book, but here’s some trivia that I found enjoyable:

  • Tom Araya’s birthday is on June 6 (Sixth day of the sixth month!)
  • In the early years they stole lights from nearby houses to use them in their shows
  • The producers of the debut album are Brian SLAgel and Bill MetoYER (get it?)
  • Lombardo recorded Show No Mercy without cymbals because of the studio’s inadequacy. Later he overdubbed the cymbals.
  • The Slayer name was already owned by a San Antonio-based band. They even supported the L.A. Slayer one night.
  • Kerry King played a few shows with Dave Mustaine’s Megadeth. Mustaine tried to convince him to join the band permanently (but obviously failed in the end).
  • Dave Lombardo left the band in 1986 because he wasn’t allowed to bring his wife on tour. A few months later he rejoined, only to quit again in 1992.
  • They toured with Megadeth, Anthrax and Suicidal Tendencies in The Big Four of Thrash, and had personal issues with Dave Mustaine. Turns out Mustaine had issues with everybody else in the industry too!
  • The idea for the Wall of Blood (a sprinkler spraying fake blood at the beginning of Raining Blood) came to Hanneman while watching the movie Blade.

Resources

show review

Ed Byrne is one of my favourite comedians. I like his jokes and especially his style. He doesn’t offend people with his jokes. Being offensive doesn’t necessarily make for bad comedy in my opinion; I do like Jim Jefferies, for example, and he is as offensive as it gets. But most comedians don’t have his ability to present such jokes in a funny and not-so-annoying way. Anyway, back to Ed Byrne…

Row X!

Unfortunately, my seat was way back in the stalls, so I can’t say I saw the stage clearly. That was a bummer, but maybe it’s not as important in comedy shows, as they’re more about listening. There is not much action on the stage anyway.

Ed Byrne Ticket

The Show

The show had a very interesting setup. First, Ed Byrne came on stage at 20:10 to warm up the crowd for his own support act! Then Ben Norris came on and warmed us up for another 20 minutes. Then there was an intermission, followed by a 5-minute set by Ben Norris. Finally, the hour-long main attraction started. I liked Ben Norris and will try to keep an eye on his tours. His jokes about the Post Office especially were killer, probably because we could all relate to them! There is not much to say about Ed Byrne. I had seen his DVDs before and this show was very much like them in general. About half of the show was marriage jokes, which were quite funny even though I couldn’t relate.

The Venue

It’s right across from the Hammersmith tube station, so it’s hard to miss. It’s generally a nice place, but the seat was not too comfortable. Seating being an important factor, I’m afraid I have to lower the venue’s grade in my eyes.

Resources

game, minecraft

I recently decided to give Minecraft a shot, as there are a few people playing it already (and by a few I mean 12,995,53 as of now, according to its official site). The problem is I’m a casual gamer and I don’t want to spend too much time on it. Anyway, we’ll see what I can do with the amount of time I can spare.

Enter Minecraft

As it is a game that encourages creativity (well, I hope it does at least), when you first start it looks like just a blank slate. There is just some wood and grass around you. The problem was I didn’t have any goals! After spending a few minutes wandering around pointlessly, I closed it and forgot about it for some time. Until tonight!

YouTube to the rescue

Sometimes I forget about that little thing called the Internet. I found a great tutorial on YouTube called “How to survive your first night in Minecraft”. At first I thought it was referring to the actual night you first play the game, and that by surviving it meant getting the hang of it and keeping at it instead of being overwhelmed and quitting for good, as I had almost done. But it turns out there are days and nights in the game! And at night some creatures emerge that will attack and kill you if you are not prepared. The tutorial I watched was only 15 minutes long, but it gave great insight into the game. As you can see, the player in the tutorial had already gathered a lot of resources and built some tools with them.

Surviving the Minecraft - Inventory view

Didn’t make it through the night :-(

Well, apparently pausing the game doesn’t stop time. While I was watching the tutorial with the game paused, night fell and I was surrounded by all sorts of creatures like skeletons and zombies. Out in the open, without any tools, I didn’t stand a chance!

Conclusion

Even though I couldn’t survive the night, I learned a lot about the game. It looks like a lot of fun, so maybe I should invest some time in it during the weekends. I know that if I play on weeknights too I will get addicted very quickly, so I guess if I limit it to weekends from the get-go I can stay loyal to my rule of not spending too much time on games. Also, I know there is interest in the Raspberry Pi community in running Minecraft servers on RPis. That sounds like a nice use case for my idle Pi.

Resources

Amazon Web Services, Cloud Computing, System Administration

Amazon Web Services (AWS) Auto-Scaling

Auto-scaling has always been a feature of Amazon Web Services (AWS). Until today, it could be done in 2 ways:

  • Using the command line tools (see the resources section for the link)
  • Using Elastic Beanstalk to deploy your application

Yesterday (10/12/2013) they announced that Auto-Scaling support has been added to the AWS console. I was planning to set up auto-scaling for my blog anyway, so I cannot think of a better time to try it.

Auto-scaling using AWS Management Console

Step 01: Launch Configuration

First we tell AWS what we want to launch. This step is a lot like creating a new EC2 instance: you select an AMI, then the instance properties. So before I started, I created an AMI of my current blog and selected it for the launch configuration. In this wizard we also have the option of using spot instances. They are not suitable for Internet-facing applications, so I’ll skip that part.

Step 02: Auto Scaling Group

At the end of the Launch Configuration wizard we can choose to create an auto-scaling group with that launch configuration and jump right into Step 2. First we specify the name and the initial instance count for the group. We also need to choose at least 1 availability zone. I always select all of them; I’m not sure if there is any trade-off in narrowing down the selection.

An important point to pay attention to here is to expand the Advanced Details section, because it contains the load balancer selection. For web applications, auto-scaling makes sense when the instances are behind a load balancer; otherwise the new instances could not be reached anyway. Once you create the auto-scaling group you cannot associate it with an ELB afterwards, so make sure you select your load balancer at this step.

Create Auto Scaling Group

Next comes another important step: specifying scaling policies. Basically, this tells AWS what action to take when it needs to scale up or down, and when to do it. The “when” is defined by CloudWatch alarms. For scaling up, I added an alarm for average CPU utilization over 80% for 5 minutes; for scaling down, CPU utilization under 20% for 5 minutes. When the high-CPU alarm goes off, it takes the action we select, which in my case is adding 1 more instance. Scaling down is just the opposite: remove 1 instance from the existing machine farm.
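
If you prefer the command line to the wizard, the same policy and alarm can be sketched with the AWS CLI roughly like this (group and alarm names are illustrative, and I haven’t run these exact commands):

# Scale-up policy: add 1 instance
aws autoscaling put-scaling-policy --auto-scaling-group-name blog-asg \
  --policy-name scale-up --scaling-adjustment 1 --adjustment-type ChangeInCapacity

# Alarm: average CPU over 80% for 5 minutes fires the policy above
aws cloudwatch put-metric-alarm --alarm-name blog-high-cpu \
  --namespace AWS/EC2 --metric-name CPUUtilization --statistic Average \
  --period 300 --evaluation-periods 1 --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --dimensions Name=AutoScalingGroupName,Value=blog-asg \
  --alarm-actions <policy-arn-from-the-previous-command>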

Create Auto Scaling Group

In the next step we define the notifications we want to receive when an AS event is triggered. I would definitely like to know about everything that happens to my machines, so I requested an email for all events.

Create Auto Scaling Group

That’s all it takes to create an AS group using the wizard.

Testing the scaling

The easiest way to test an auto-scaling group is to terminate the instance it just launched. As you can see below, once I killed the instance it immediately launched another one to match the minimum instance count of the AS group. So the auto-scaling group is working, but how can I be sure that it will launch a new instance when I need it most? Time to make it sweat a little! But first we have to set up an environment to create load on the system.

Installing Siege

The simplest load testing tool I know is a Linux-based one called Siege. To prepare my little load testing environment, I quickly downloaded it:

wget http://www.joedog.org/pub/siege/siege-latest.tar.gz

tar -xzvf siege-latest.tar.gz

It requires a C compiler, which doesn’t come out of the box with an Amazon Linux AMI, so first we need to install that:

yum install gcc*

And configure it by running:

./configure

At the end of the configuration it instructs us to run the following commands:

Siege configuration
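
In case the screenshot is hard to read, the commands it refers to are almost certainly the standard GNU build steps:

make
sudo make install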

So after running make, Siege is ready to go. We can check the configuration with:

/usr/local/bin/siege -C

It should display the current version and other details about the tool.

Siege Configuration

Ready to go

Now, we have a micro instance running Siege and a small instance launched by auto-scaling.

AWS Instances

Auto-scaling is supposed to launch another instance and add it to the load balancer if the CPU usage gets too high on the existing one. Let’s see if it really works.

Under Siege!

I first created a URL file from my sitemap so that the load would be more realistic. I fired up 20 threads and they started to bombard my site:

Siege in Action
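
I didn’t note down the exact command, but with a URL file it would have looked something like this (urls.txt being the file generated from the sitemap):

/usr/local/bin/siege -c 20 -f urls.txt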

When I tried to load my site it was incredibly slow. The CPU usage kept rising on the single instance until the CloudWatch alarm went off and triggered auto-scaling to launch a new instance.

AWS Instances

Now I had 2 instances to share the load, but that could only happen once the new instance was added to the Elastic Load Balancer (ELB) automatically. After a few minutes it passed the health checks and went into service.

Auto-scaling using AWS Management Console - Elastic Load Balancer Overview

At this point I had 2 instances, and when I tried to load posts from my blog I noticed it was quite fast again. The CPU usage graph below tells how it all went down:

Auto-scaling using AWS Management Console - CPU utilization

My first instance (orange) was running silently and peacefully until it was attacked by Siege. After a few minutes of hard times, the cavalry (the blue instance) came to the rescue and started getting its fair share of the load. The ELB distributed the load as evenly as possible, making the system run smoothly again. OK, so the system can withstand a spike and scale itself out, but that costs money. What happens after the storm? I stopped Siege and, sure enough, after a few minutes the low-CPU alarm went off and set the instance count back to 1 by terminating one of the instances.

AWS Instances

Also, I was notified at every step of this process, so I could keep track of my instances at all times.

Auto-scaling using AWS Management Console - Notifications

Architecture of the system

So at this point the architecture of the system looks like this:

Auto-scaling using AWS Management Console - System Architecture

I’m planning to cover some of the basics (EC2, RDS, S3) in more detail in a later post. I’ll also try to add more AWS services and enhance this architecture as I go along.

Final Words

  • If you are planning to use auto-scaling in a production environment, make sure to back up all your stuff externally. Also create snapshots of all the volumes.
  • Even though network traffic is cheap, it still costs money. For extended tests I suggest you keep an eye on your billing statement.
  • In the Amazon Linux AMI, Apache and MySQL don’t start automatically on boot, so you may need to update your configuration like I did. I used the script I found here.

Resources

Cloud Computing, Development, System Administration

DevOps (Development + Operations) has been one of the most popular terms in the IT world recently. From what I’ve read and listened to so far, my understanding is that it is all about continuous deployment (or delivery). Basically, you have to automate everything from development to deployment to practice DevOps.

Current problem

Traditionally, a successful deployment is a huge challenge. It is mostly a manual and cumbersome process, and because of its sensitive nature system admins are not huge fans of deployments. Another challenge is the miscommunication (or, in some cases, complete lack of communication) between the system administration and development teams. They generally report to different high-level executives and their priorities conflict most of the time.

Solution

On the philosophical side, DevOps is about bringing these teams together to work in harmony. Having social events attended by both teams is key to building confidence among team members. As Richard Campbell (of the RunAsRadio and .NET Rocks podcasts) says, “Pizza and beer is a global lubricant”.

Dev…

On the development side, the key requirement is continuous integration. You have to be able to run unit tests and acceptance tests automatically on build servers. This means development has to be done in short sprints, in an agile way, with frequent check-ins. One step further is continuous deployment.

…Ops

This is where the IT team comes into play. When the whole system is automated, deploying to production frequently and without much headache becomes possible. Cloud computing is one of the core technologies that make DevOps possible: the ability to manage virtual machines programmatically (e.g. AWS, OpenStack) opens up a whole bunch of possibilities.

This is a fairly complex topic encompassing many disciplines and technologies. It’s also quite dynamic and open to innovation. Definitely worth keeping an eye on.

Resources

Productivity

I’ve been perpetually trying to improve my time management techniques. This post is a small compilation of notes, tips and tools I came up with. This is an on-going process and I will keep updating these. I strongly recommend reading Getting Results the Agile Way and visiting the sites in the resources section; I learned most of the points below from that book, and there is plenty more good advice in there. So here is my list:

01. Be realistic!

I think this is the most important thing I have learned so far. Yes, we all want to complete more tasks, write more code and develop the killer apps we’ve been thinking about, but the reality is we have limited time to spare. This is not about aiming low; just don’t start everything at the same time because you are excited about it all at once. It’s very likely that you will end up with a bunch of unfinished projects, and over time they will keep haunting you. For one, you will never feel a sense of accomplishment even though you’ve spent a ton of time and effort. They will also linger somewhere in your to-do list, only making everything worse.

02. Rule of 3 rocks!

I think if there is one thing to take away from the book Getting Results the Agile Way, it is the Rule of 3. It is a very simple and effective idea: don’t put too many things on your plate. Every month, pick 3 major tasks to accomplish. Then divide them into smaller weekly tasks and even smaller daily tasks. If you only aim to complete 3 tasks every day, you can steadily move toward accomplishing the greater goals at the end of the month. For me, it’s not always possible to plan things so clearly; monthly and weekly tasks are often left unfulfilled. But it is easy and very achievable to plan the next day every night and pick only 3 things. That one, at least, works pretty well.

03. Determine your categories (hot spots)

I have to admit I tweaked the Rule of 3 a little bit! I wanted a more balanced plan, so I determined the main categories in my life, like health, work, career, chores etc. I pick 1 task per category each day. This way I believe I can make progress in every aspect of life.

04. Keep it simple

Simplify your process as much as possible. When I first started keeping lists of outcomes, I created templates so detailed that they took half an hour to fill out completely. Instead of overcomplicating things and spending too much time on management, try to get rid of as much overhead as possible. In the end, the ultimate goal is to use your time for the things that matter. Even a blank note with 3 items written on it can be a very good way to plan the day: just do the 3 things on the list and don’t think about anything else.

05. Keep your backlog clean and short

Learn how to let things go. This is another crucial point. Let’s be honest: we will never be entirely realistic, and we will put a ton of items on our to-do lists, turning them into bottomless pits. There were times I found items in my list that didn’t even mean anything anymore, or were so outdated that they had no value. The result is that you keep those items in your list all that time; they keep taking time to manage, and they weigh on your shoulders as they pile up. And you end up not doing them anyway: essentially you stall and stall, then delete them once they make no sense anymore. The golden rule to keep in mind is that if it’s important, it will surface again! So don’t worry about forgetting a very important task. You will be reminded of it one way or another.

06. Pen & paper with colour coding helps

Keeping notes in Evernote is great, but sometimes that ease of use also makes it easier to neglect or forget tasks. After all, they become invisible once you minimize the window. I recently started using coloured post-its. They are colour-coded in order of importance, so for example I cannot do green tasks before the purple ones are completed. Also, it is annoying to have a bunch of notes sticking around, so trying to get rid of them is extra motivation!

07. Use Pomodoro on small tasks

I have loved this technique since the day I heard about it. It’s quite simple: just divide tasks into small, unbreakable units of time. The default is 25 minutes of work and 5 minutes of rest. I use Focus Booster on my desktop computer.

Focus Booster

On the iPad I use an app which doesn’t have a specific name, and I cannot find it in the App Store anymore. It looks like this:

iPad Pomodoro App

It works fine and looks stylish, so I’m not planning to find another one. There are a ton of similar apps in the App Store, though.

I cannot use this method for large tasks. Also, I cannot work non-stop in 25-5 periods; I get distracted by something else eventually. So I just use it for short tasks of 2-4 pomodoros. For example, spending 2 pomodoros a day on reading is a great way to set aside time to read.

Resources

Encryption, Security

I used to wonder what the different key sizes meant when dealing with SSL. Also, I noticed that an SSL certificate I had purchased said “128/256 bit encryption” in its feature list, which only made me more confused. What does that actually mean, and why would it use 128-bit if it supports 256 anyway? I checked a website of mine that runs on a Linux machine and saw that it used 256-bit encryption, whereas another website of mine was running with 128-bit encryption. I bought both certificates from the same vendor, so it had to be something to do with the server.

What’s with the naming?

For the uninitiated: TLS is the new name for the protocol. The SSL name was discontinued after version 3, and after that TLS 1.0 was released. As of this writing the latest version is TLS 1.2, which was released in 2008. So technically the name of the protocol is Transport Layer Security (TLS), but many people, including me, still refer to it as SSL.

Key Sizes

SSL Key Sizes

Basically, the key size (2048-bit in the image) is the public/private key pair size. This size is determined when the CSR is created for the certificate, and it determines how vulnerable the key is to brute-force attacks. Currently 2048-bit is considered very strong.
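
For reference, this is the moment the pair size is chosen. Generating a 2048-bit key and CSR with OpenSSL, for example, looks like this (file names are just placeholders):

openssl req -new -newkey rsa:2048 -nodes -keyout example.key -out example.csr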

128/256-bit is the length of the session key. A session key is generated during the handshake: random data (128 or 256 bits long) is generated by the client and encrypted using the server’s public key, and the server decrypts the message with its private key. Afterwards, the server and client use this session key with symmetric encryption. The RSA keys are only used at the beginning of the communication.
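
An easy way to check which cipher (and thus session key length) a server actually negotiates is OpenSSL’s built-in client; the host name is just an example:

openssl s_client -connect www.example.com:443
# Look for the "Cipher" line in the output, e.g. AES128-SHA vs AES256-SHA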

Let’s see it in action

I might have had a better understanding after all this research, but I still had to resolve my issue: I needed to see 256-bit encryption. Since this is a rather sensitive operation, I wanted to test it on a completely expendable machine, so I created two new small instances running Windows 2008 and Windows 2012. I quickly installed IIS on both instances and checked what they looked like. As I suspected, they were using 128-bit out of the box.

SSL_Key_Sizes_Win2008_Before

SSL_Key_Sizes_Win2012_Before

The problem is that the AES-256 option is not high up in the list of cipher suites the server supports. Changing that requires registry updates and group policy changes, and normally all of this has to be done manually. You can find a resource below that explains how to do it (I haven’t tested it myself). Instead, I decided to use a tool that makes the whole process a lot easier and less error-prone: IISCrypto.

IIS Crypto

I just downloaded the tool and ran the Best Practices option. I restarted the server and here are the results:

SSL_Key_Sizes_Win2008_After

SSL_Key_Sizes_Win2012_After

The Windows 2012 version prioritizes TLS 1.2 over TLS 1.0, so it uses the newer version of the protocol even though the browser I used was the same for both tests.

Resources

Development, NOSQL, Programming

I updated my toy project. You can find the source code and live demo for the final version below:

Source Code: https://github.com/volkanx/BeerExplorer

Live demo URL: http://beerexplorer.me

If you don’t want to bother deploying it before seeing what it looks like, here’s a screenshot:

Beer Explorer

It’s just a simple exercise to browse Couchbase repositories. It was helpful for me and I hope you find it helpful too.

Cloud Computing

It’s been a while since I started using Amazon Web Services (AWS) to host my sites. I think it’s a great platform, as you only pay for what you use and there are lots of options. And the best part is that anything you can do via the user interface (and more) can be done programmatically via the API. I’m extremely happy with AWS, but I still wanted to see what its competitors are doing.

Enter RackSpace

RackSpace

So I decided to test RackSpace first. One reason for picking it is that it has a data centre in London (the closest AWS data centre to the UK is in Dublin). It is also based on the OpenStack platform, which I had wanted to play with for some time. I created my free account, but it only gets activated after you receive a call from a staff member. He just asked basic questions, like my username and the reason I created the account. After the call the account was activated and I was ready to explore this new land.

Servers

First Impressions

This is still a work in progress, actually; I cannot say I have fully covered everything about it. Here are just my first impressions and comparisons with AWS.

Pricing & Billing

Maybe I’m cheap, but my first order of business was to compare prices! The cheapest Linux configuration starts from £0.030/hr; you can find the entire list here. As the site I’m planning to migrate didn’t need many resources, I decided to go with the cheapest one: 1GB RAM, 1 vCPU, 20GB SSD. After the migration I’m quite happy with its performance.

One interesting thing I noticed is that, unlike AWS, you pay for the machine even if you stop it. An excerpt from the documentation says: “Shutting down a server will NOT stop billing, since the virtual hard drives are persistent, server resources are always in use whether the servers is powered on or not.” Now that’s not cool! Admittedly, if you are running web-based systems you never stop the machines anyway. But there have been many times I preferred to keep the old machine stopped for a period until the new machine proved to be functioning fully, for example. It’s nice to have the chance to roll back easily if need be. Of course you can do that here too, but you just have to pay twice as much during that period.

Features

While trying to configure the machine, I noticed there isn’t a feature like AWS’s Security Groups. I had to update the iptables configuration on the machine itself, which would make it hard to manage firewall rules in a multi-machine environment. In AWS you just add the new machine to an existing security group and forget about it, because all the existing rules are applied to the new machine automatically.
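
For example, exposing a web server on such a machine means writing rules along these lines (a minimal sketch, not a hardened configuration; adapt before using):

# Keep loopback and established connections working
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow SSH and web traffic
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Drop everything else
iptables -P INPUT DROP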

Programmability and API

OpenStack

Even though I haven’t developed anything for it yet, I wanted to see what the capabilities are and how I would develop something when needed. All I had to do was get the NuGet package, and I was ready to list my machines in a few minutes. Basically, you can manage machines, images and volumes pretty much like in AWS. I’ll put a pin in it for now and develop some tools for myself later.

Program

Conclusion

I think the best thing about RackSpace is that it is built on top of OpenStack. This means that if you move your system to another vendor, your applications using the API can remain intact. Also, as it is open source software, you could even build your own data centre if you wanted to. Of course this sounds good to geek ears, but I guess in the real world it doesn’t have that much value, as such system migrations don’t happen very often. Other than that I didn’t see any advantages over AWS, but I’ll keep the machine running for a while and see how it goes.

Resources

Site news

I decided to switch to FeedBurner to keep better track of my RSS feed. The new address is http://feeds.feedburner.com/PlaygroundForTheMind.

Hopefully the current link will be redirected automatically. (Well, not exactly automatically; I installed the FD FeedBurner plugin to take care of that.)

If it doesn’t work, current subscribers are likely not going to receive this update via RSS, but I thought a notification post wouldn’t hurt anyway.

Big Data, Certification, NOSQL

Online education sites have been around for some time now. One of my favourites, Udacity, recently started a new series of courses: the Data Science and Big Data track. Big Data is a fascinating subject and I’ve been wanting to learn more about it, but so far my introductions have generally been short-lived. This time I intend to finish all of these courses and at least have a guided tour. The first course in this track is Introduction to Hadoop and MapReduce.

Hadoop

Hadoop Logo

Named after the toy of the lead developer’s child, Hadoop is an open-source framework based on MapReduce that can run distributed, data-intensive tasks. It has its own file system called the Hadoop Distributed File System (HDFS), which handles data redundancy by dividing the data into 64MB chunks and storing several copies of them (3 copies by default).
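
If you have a cluster handy, you can query these settings yourself. A small sketch, assuming a reasonably recent Hadoop where the hdfs command and these key names exist:

hdfs getconf -confKey dfs.replication
hdfs getconf -confKey dfs.blocksize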

MapReduce

MapReduce is a programming model first developed at Google. It consists of 2 steps: Map and Reduce. The Map function takes the input data and divides it into smaller intermediate datasets; the Reduce function takes those sub-problems as input and calculates the final output.
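
A nice way to get a feel for the model without a cluster is the classic word count as a Unix pipeline, where splitting lines into words plays the map step, sort plays the shuffle, and uniq -c plays the reduce:

tr -s '[:space:]' '\n' < input.txt | sort | uniq -c | sort -rn | head

Hadoop essentially runs the same idea distributed across many machines, with HDFS shuttling the intermediate data between the steps.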

Udacity Course

The course they are offering is very concise and to the point; it doesn’t take too long to finish. Its instructors are employees of Cloudera, and they do a very good job of explaining the basic concepts in simple terms. The course also provides a downloadable virtual machine fully loaded with Hadoop and tools, including the example datasets and code used throughout the course, which makes it quite easy to practice on your own.

Final Project

The final project was fun to implement. It’s based on the examples, so you can build on top of the code shown in class. I submitted my answers to GitHub Gist; if you’re interested, they’re available here. Files are named with an “_xy” prefix, where x is the project part (there are two parts in the final project) and y is the question number.

Udacity Certification

I’m also curious about their new certification model. I haven’t enrolled in any of their paid programs. Basically, the courses are still free to enroll in, but with the paid program you have a dedicated tutor who reviews your code and gives you feedback. There is also an exit interview, and if you pass you get a verified certificate. I’m not sure how that interview is going to be conducted, though. It’s not cheap ($150/month). You still work at your own pace, but since you’re paying for it you’d probably want to finish as soon as possible.

Resources

Book Review, Review

I’m not an early Twitter adopter, but I have loved using it since my own day one. It is a fast and easy way to skim through the news of the day, pick up a few random tips and be notified of new articles or blog posts. I still use RSS feeds, as they are not ephemeral like the Twitter feed and are therefore a more reliable way to get the latest news, but Twitter is a great companion to that source now.

Hatching Twitter

I heard about this book on a podcast a few days ago and, Twitter being my favourite “social network”, I immediately dug in. One funny thing is that it makes you think about how fast things move in the Internet age: the book reads like the history of a company which, in Earth years, was founded only 7 years ago. Obviously we know the plot and the ending, so maybe there should be nothing exciting about it, but the storytelling is very good and riveting. My intention was to spare 2 pomodoros a day, but most days I found myself extending that to 3 or 4. Long story short, it’s a very well-written book about one of the most fascinating tech companies of the day.

Resources

Development, Gadget, Leap Motion

TicTacToe

Like most people I got my hopes up when ordering this gizmo, and again like most people I was disappointed by it. It’s not quite the mouse replacement I hoped it would be. Anyway, I mostly bought it to develop applications with, and it comes with an SDK and libraries for .NET, so I can’t complain much about that. I wanted to develop something simple just to get a grasp of it. Recently PluralSight published a course on Leap Motion development and I thought it was a great chance to start my own little app: Tic-Tac-Toe. The course was very helpful and I’d recommend it as a starting point for Leap Motion development.

There is still work needed on my TicTacToe, but you can find a sneak preview of the current version below.

Basically, it does what it’s supposed to do at the moment: draw things on screen using your finger! So I think I accomplished what I set out to do. What I want to add is a custom gesture for X. The circle gesture is built into the SDK, so drawing circles is easy, but I implemented the ScreenTap gesture for playing Xs, which is obviously not intuitive. It also requires precision, because it’s not easy to target a cell while tapping; if you watched the video you may have noticed I missed the cell on X’s second move, for example. That would be the biggest improvement I could make, apart from basic things like player info, statistics, undo moves etc. But as those are not directly related to Leap Motion development, they are not very important in this context.

Resources

Book Review, Review

The Power of Habit

I titled this post a book review, but in reality I only read a third of the book, because that was the part pertinent to my needs! Part 1 is about the habits of individuals, part 2 about the habits of organizations and part 3 about the habits of societies. The reason I started reading this book was to pick up a few tips and tricks on managing habits, and it was very helpful for analysing the nature of habits and altering them when needed.

The book covers the nature of habits thoroughly. Basically, a habit is formulated as cue, routine and reward. So I realized that if I want to develop a new habit, I have to stick to this pattern. For example, to exercise daily, the best approach is to always do it at the same time if possible (right after waking up, say) and to reward myself afterwards. A little snack maybe; small enough, of course, not to nullify all that exercise. Anyway, the book has lots of examples of success stories and scientific studies, and I think it’s quite helpful for automating mundane but necessary stuff and getting rid of unwanted habits. I have no interest in the habits of organizations or societies, but judging by the first part I’m sure those sections must be attractive to some. Even just for Part 1, I’d recommend this book.

Resources

Gadget, Game

I used to love my Commodore 64 when I was a child. Now that we have the ability to emulate old machines (and old memories), I decided to give it a go. Apparently building a MAME box is a popular project. I’ve found this C64 emulator: http://www.mascal.it/rpi64_e.html.

It’s pretty straightforward: download the image, write it to an SD card using a tool (I used Win32DiskImager), then upload your ROMs to the RPi and let the good times roll!
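
I used Win32DiskImager on Windows; on Linux or a Mac, something like dd does the same job (image and device names are placeholders; double-check the device, dd is destructive):

sudo dd if=rpi64-image.img of=/dev/sdX bs=4M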

One of my favourite games was Donkey Kong so I decided to start with that.

Donkey Kong

Donkey Kong

It loaded nice and dandy, but I couldn’t play it with the keyboard. So either I’m going to buy an old joystick and figure out a way to connect it to the RPi, or I’ll dig a little deeper to find the key mapping.

Amazon Web Services, Cloud Computing

We all know backups are good, but most of the time you won’t need a backup from a year ago. Just keep enough copies to recover from a possible failure and get rid of the rest. The other day I was working on cleaning up old security camera images, which become meaningless very quickly. The images are uploaded to Amazon S3. My first approach was to delete the older ones with a scheduled script, but then I discovered an easier and more effective way.

Let AWS do the work!

It’s possible to loop through thousands of objects and delete them, but the alternative is to set an expiration date for each object. To activate this, select the folder and make sure the properties panel is visible. Expand the Lifecycle section and click Add rule, then set the number of days until expiration. Make sure “Apply to Entire Bucket” is checked so that newly uploaded files also comply with this rule. It’s as easy as that!

S3 Lifecycle
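
The same rule can also be scripted. A rough sketch with the AWS CLI (bucket name illustrative, and worth verifying against your CLI version):

cat > lifecycle.json <<'EOF'
{"Rules": [{"ID": "expire-old-images", "Prefix": "", "Status": "Enabled",
            "Expiration": {"Days": 7}}]}
EOF
aws s3api put-bucket-lifecycle --bucket my-camera-bucket \
  --lifecycle-configuration file://lifecycle.json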

One thing to note is that this process runs once a day, so don’t expect your bucket to be cleaned up immediately. And don’t forget to check back the next day to ensure everything is working as expected!

Resources