Categories: security, aws, dev. Tags: raspberry_pi, gadget, powershell, s3

I know there are very cheap security cameras that you can set up in a few minutes. They may provide security, but they cannot provide the satisfaction you get from a DIY project! So let’s dig in, just for the fun of it.

Ingredients

Component                               Price   Where to buy?
Raspberry Pi Starter Kit                £36     Amazon
Camera module                           £17     Amazon
Protective case for the camera module   £4      Amazon
Wireless adaptor                        £6      Amazon

Once it’s all put together, this is what you are going to end up with:

Raspberry Pi Security Camera

Bring it to life

  1. Download a standard distro for Raspberry Pi. I used Raspbian.
  2. Write the image to the SD card. I used Win32 Disk Imager on Windows.

Main course: Motion

There is a great tutorial here for converting your Pi into a security camera, which I mostly followed. Basically you enable WiFi, install the Motion software, and tweak the configuration a bit (image size, frame rate, etc.) and it’s (hopefully) good to go.
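For reference, the kind of tweaks I mean live in /etc/motion/motion.conf. The values below are illustrative, not a recommendation:

```
# /etc/motion/motion.conf (illustrative values)
# capture size and rate
width 640
height 480
framerate 2
# number of changed pixels that counts as motion
threshold 1500
# where images and videos are written
target_dir /var/surv
```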

Unfortunately, video didn’t work for me. It was recording something, but only the first frame was visible, so it was no better than a still image. So I decided to ignore video completely.

Instead of using a network share, I decided to upload the footage directly to AWS S3 using Amazon S3 Tools. Also, don’t forget to clear old footage; otherwise you can run out of space very quickly. I added a few cron jobs to carry out these tasks for me:

* * * * * s3cmd sync /var/surv/*.jpg s3://{BUCKET NAME}/
0 */2 * * * sudo rm /var/surv/*.avi
0 */6 * * * find /var/surv/* -mtime +1 -exec rm {} \;

It syncs the local folder with the S3 bucket, deletes all local video files, and removes files older than a day. I delete the video files more frequently as they take up a lot of space.

Monitoring and Notifications

No system is complete without proper monitoring and notifications. This is especially important for a system like this, which is supposed to run quietly in the background.

Unfortunately, in my case it stopped working a few times, which made monitoring even more important. I don’t know what’s causing the issue; maybe it’s because I’m using an older Raspberry Pi that isn’t capable of handling the Motion software, the S3 uploads, and everything else.

To keep an eye on it, I decided to create a small PowerShell script to check S3 for incoming files and send me a notification if it seems to have stopped uploading.

PowerShell as the glue

Built on the .NET Framework, PowerShell is a very powerful (no pun intended) tool for quick-and-dirty solutions. So first, here’s the Send-Mail function:

I created a separate function for it because it’s a general-purpose feature that can be used in many places. To make it even more generic, you could take out the from and to email addresses and add them as parameters to the function.
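A minimal sketch of such a function, assuming Amazon SES SMTP credentials; the addresses and endpoint below are placeholders, not the original script:

```
# Sketch of a general-purpose Send-Mail function (placeholder addresses and
# endpoint; assumes an Amazon SES SMTP user name and password).
function Send-Mail {
    param (
        [string]$subject,
        [string]$body
    )

    $from = "camera@example.com"   # a verified SES sender (placeholder)
    $to = "me@example.com"         # where alerts should go (placeholder)
    $credential = Get-Credential   # prompts for the SES SMTP credentials

    Send-MailMessage -From $from -To $to -Subject $subject -Body $body `
        -SmtpServer "email-smtp.us-east-1.amazonaws.com" -Port 587 `
        -UseSsl -Credential $credential
}
```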

And here’s the actual notification logic:

It finds the latest image by sorting on the LastModified field and compares that date with the current date. If it’s been more than 1 day, it sends an email. Depending on how often you expect images to be uploaded, you can change the alert condition.
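In sketch form, assuming the AWS Tools for PowerShell module (Get-S3Object) and a dot-sourced send-mail.ps1 (the path below is a placeholder), the check boils down to:

```
# Sketch of the notification check; adjust the path to wherever
# send-mail.ps1 actually lives.
. "C:\scripts\send-mail.ps1"

$latest = Get-S3Object -BucketName "{BUCKET NAME}" |
    Sort-Object LastModified |
    Select-Object -Last 1

if ($latest.LastModified -lt (Get-Date).AddDays(-1)) {
    Send-Mail -subject "Security camera is quiet" `
              -body "No new footage has been uploaded for over a day."
}
```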

To use these scripts you’ll need an AWS account with S3 and SES privileges. Also, you have to change the path of send-mail.ps1 in the line where it’s included.


Categories: aws, security. Tags: powershell, ec2

Here’s the scenario:

  • You use AWS
  • You don’t have a static IP
  • You connect to your EC2 instances via SSH and/or RDP only from your IP
  • You are too lazy to update the security groups manually when your IP changes!

You’ve come to the right place: I’ve got the solution.

Let’s have a look at how it’s built, step by step:

Step 1: Create an IAM account to access security groups

As a general rule of thumb, always grant the minimum privileges possible to the accounts you use. Create a new user and go to the user’s details. Select Attach User Policy and then Policy Generator. Select AWS EC2 from the services list. For our script to run we need 3 privileges: listing security groups (DescribeSecurityGroups), revoking old IP permissions (RevokeSecurityGroupIngress), and authorizing new IP permissions (AuthorizeSecurityGroupIngress).

Alternatively you can just attach the following policy to your user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1418339796000",
      "Effect": "Allow",
      "Action": [
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:DescribeSecurityGroups",
        "ec2:RevokeSecurityGroupIngress"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

IAM has a very nice policy simulation feature. Before you proceed, I recommend you run it and verify that only these 3 actions are allowed.

Step 2: Modify the PowerShell script

The script I created is on Gist as seen below. Before you can use it you have to update it with the access and secret keys of the user created above.

Also, since fiddling with security groups can get messy very quickly, I’d strongly recommend you perform a dry run first. By default $dryRun is set to $true; unless you set it to $false, the script will only display what it is going to do and will not take any action. So make sure you know what you’re doing before you give it a go. I don’t think this script will work for anyone out of the box; it will probably need some tweaking here and there to tailor it to your needs. But this version works for me, so here it is:

First it gets a list of security groups that have SSH and RDP permissions in them. Then it loops through these permissions and compares each IP address with the current one. I used my own external IP checker service, which I recently developed and blogged about here. You can use other services as well; just make sure you change the URL in the script. My service returns a JSON object, so if the service you use returns a different format you’ll need to modify the parsing code as well.

If the IP addresses are different, it revokes the old permission and creates a new one with your current IP. Protocol and ports remain intact.
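In outline, the core loop looks something like the following sketch, assuming the AWS Tools for PowerShell cmdlets. The exact property names can vary between module versions, so treat this as an outline rather than the script itself:

```
# Outline of the update logic; $dryRun defaults to $true as described.
$dryRun = $true
$currentIp = (Invoke-RestMethod "http://check-ip.herokuapp.com/").ipAddress
$newCidr = "$currentIp/32"

foreach ($group in Get-EC2SecurityGroup) {
    # only touch SSH (22) and RDP (3389) rules
    $sshRdp = $group.IpPermissions |
        Where-Object { $_.FromPort -eq 22 -or $_.FromPort -eq 3389 }

    foreach ($permission in $sshRdp) {
        foreach ($range in $permission.IpRanges) {
            if ($range -eq $newCidr) {
                Write-Host "Security group is up-to-date"
                continue
            }
            if ($dryRun) {
                Write-Host "Would replace $range with $newCidr on port $($permission.FromPort)"
            }
            else {
                # Revoke the stale rule, then re-create it with the current
                # IP. Protocol and ports are carried over unchanged.
                Revoke-EC2SecurityGroupIngress -GroupId $group.GroupId -IpPermission @{
                    IpProtocol = $permission.IpProtocol; FromPort = $permission.FromPort;
                    ToPort = $permission.ToPort; IpRanges = @($range) }
                Grant-EC2SecurityGroupIngress -GroupId $group.GroupId -IpPermission @{
                    IpProtocol = $permission.IpProtocol; FromPort = $permission.FromPort;
                    ToPort = $permission.ToPort; IpRanges = @($newCidr) }
            }
        }
    }
}
```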

This is the output of the script:

If the IP addresses for ports 22 and 3389 are up-to-date, it just displays “Security group is up-to-date”, so it can be run repeatedly and you can schedule it to run as often as you want.


Categories: dev. Tags: nodejs, javascript

Enter Node.js

Node.js is a popular platform for developing JavaScript applications. Internally it uses Google’s V8 JavaScript engine and lets you develop fast, scalable, event-driven JavaScript applications. I recently developed my first REST API using Node, and in this post I’ll talk about the steps required to develop and deploy a Node application.

Learn

Needless to say there are many resources to learn Node (as with any such popular environment). I found the Microsoft Virtual Academy’s training quite useful. It’s very enjoyable and free. I’ll provide more recommendations as I go deeper.

Setup

I used my main dev machine for this project which is running Windows 8.1. A nice thing about Node.js is that it’s cross-platform so you can use Linux, Mac or Windows.

On Windows, simply download and run the installer from this page. Alternatively, if you like Chocolatey, you can run the following command in a command prompt (you may need to run it as administrator):

choco install nodejs.install

Develop

There is a module that makes life a lot easier if you are planning to develop an API with Node, and it’s called Express.

We’re going to use Express in our API, so first we need to install it using the Node Package Manager (NPM):

npm install express 

Now that we have everything we need, we can write the actual code for the endpoint:

var express = require('express');
var app = express();

app.get('/', function (req, res) {
    var remoteAddress = req.headers['x-forwarded-for'] ||
                        req.connection.remoteAddress;
    res.json({ "ipAddress": remoteAddress });
});

app.listen(process.env.PORT || 80);

Just reading remoteAddress wouldn’t work when your application is not accepting connections from users directly, which is generally the case in large applications where load balancers face the clients.

For example, in the sample output above it’s obviously getting a 10.x.x.x IP from remoteAddress. So we check for the x-forwarded-for HTTP header and use it if present.

Deploy

First I installed Node on a Linux box on AWS by simply running:

curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs

but then I recalled I could use Heroku to host Node.js applications for free. Why would I keep an eye on the service when the nice guys at Heroku are volunteering to do it for me, right? So I created an application on Heroku by following this guide.

Basically you can deploy by just following these simple steps:

  1. Download and install Heroku Toolbelt from here
  2. Clone your git repository and navigate to that folder using a command prompt.
  3. Create an app on heroku by running

     heroku create --http-git
    

    This will add a remote to your git repository

    Heroku remote

  4. Now that you have the remote, you can push your code to it:

     git push heroku master
    

    Heroku deploy app

  5. [Optional] Add a custom domain and change the application name

    If you don’t want to use the Heroku domain and/or the application name it assigns automatically, you can change them. First go to Dashboard –> Personal Apps and click on the application, then click Settings. You can rename the application there directly, but that breaks your git remote, so I suggest doing it via the command line by running

     heroku apps:rename newname --app oldname
    

    On the same page you can add your custom domain. Of course, you have to point your domain/subdomain to Heroku’s URL by adding a CNAME record to your zone file.

    Heroku custom domain
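For reference, the CNAME record mentioned in step 5 would look something like this in a BIND-style zone file (the subdomain is a placeholder):

```
; point a custom subdomain at the Heroku app
ip.example.com.   IN   CNAME   check-ip.herokuapp.com.
```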

Test

So we have an up-and-running REST API; let’s see it in action. One way to test it with several IP addresses is to use the free proxy servers that float around the net (for some reason). I visited HideMyAss to quickly grab a few. It may take a few tries to find a working proxy, but here is what we’re looking for:

My external IP is the same as the proxy address I just set. While testing with a few different proxy servers I came across this rather unexpected result:

I deployed a version with more detailed output to understand what was going on. It looks like the X-Forwarded-For header was set to “127.0.0.1, 177.99.93.214”. My service currently doesn’t support extracting the actual external IP from such comma-separated lists. First I need to look into whether this is a common, standards-conforming practice or just weird behaviour from a random server. But the solution is obviously simple to implement: just split the string on commas and take the last entry.

Enjoy!

So now instead of whatismyip.com I can use the service I built with my own two hands: http://check-ip.herokuapp.com/. It’s open-source so feel free to fiddle at will!

It’s a simple API, but it feels nice to see it running so quickly. Also, no maintenance is required thanks to Heroku. For me it was a nice introduction to the Node.js world. I hope this post comes in handy for other people as well.

UPDATE

As I pointed out in the example above, during my tests I got comma-separated IP addresses in the x-forwarded-for field. It turns out this isn’t weird behaviour at all; it’s perfectly legitimate, as described in the RFC for the header:

The “for” parameter is used to disclose information about the client that initiated the request and subsequent proxies in a chain of proxies. When proxies choose to use the “for” parameter, its default configuration SHOULD contain an obfuscated identifier as described in Section 6.3.

So I updated the application to split the proxy chain on commas and return the last entry in the list.
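In sketch form, the updated extraction looks like this (extractClientIp is my name for it here, not necessarily what the app uses):

```javascript
// Pick the client IP out of a proxy chain. The x-forwarded-for header may
// contain a comma-separated list of addresses; following the approach
// described above, we take the last entry in the list.
function extractClientIp(req) {
    var forwardedFor = req.headers['x-forwarded-for'];
    if (forwardedFor) {
        var chain = forwardedFor.split(',');
        return chain[chain.length - 1].trim();
    }
    return req.connection.remoteAddress;
}
```

With the header from the example above (“127.0.0.1, 177.99.93.214”), this returns 177.99.93.214.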

As shown above there are two entries in the forwarded for header but the result IP address is the one I set in Firefox.
