
I’m not a big fan of New Year’s resolutions. I’d been meaning to start learning F#, and since it’s the second day of the new year it might be a good time to finally give it a shot!

Where to start

It’s always hard to find the best resource when you are starting. Some time ago I heard about a Microsoft Research project called TryFSharp.org. It’s a tutorial website geared towards the absolute beginners. It comes with a REPL editor so no extra tools are needed to start.

From now on I’m planning to spend 2 pomodoros (around 1 hour) every day to learn F#. After my first 2 pomodoros I completed the first 3 sections and below are my notes for today’s training.

Lecture Notes

  • The let keyword binds names to values. These bindings are immutable. If you try to bind a value to the same name twice you get an error. For example, this code
let duplicated = "original value"
let duplicated = "new value"

causes the following error:

stdin(8,5): error FS0037: Duplicate definition of value 'duplicated'
  • Mutable variables can be created by explicitly specifying the mutable keyword, but it should be used cautiously.
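
For example, a mutable binding is declared with the mutable keyword and updated with the <- operator:

let mutable counter = 0
counter <- counter + 1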
  • F# is a statically typed language like C#
  • printfn can be used to display messages. Strings can be formatted using format specifiers such as %d for int and %s for string, for example:
printfn "The answer is %d" 42
  • let can also be used to bind a name to a function. The following code
let square x =
    x * x

square 4

produces this result in the output window:

> let square x =
      x * x
  
  square 4

val square : x:int -> int
val it : int = 16

> 
  • F# is whitespace-sensitive. In the function above the body of the function was denoted by indenting it 4 spaces, and the return value is the last expression of the function.
  • In cases where F# cannot determine the type on its own, it can be specified explicitly by using type annotations. For example:
let toLeetSpeak (phrase:string) =
    phrase.Replace('t', '7').Replace('o', '0')

toLeetSpeak "root"

In the example above it needs to be specified that phrase is of type string before the String.Replace method can be called.

  • Functions can be defined inside other functions:
let quadruple x =    
    let double x =
        x * 2

    double(double(x))
  • A function can be used as an argument to another function to create what’s called a higher order function.
  • Inline functions can be created such as
let square = (fun x -> x * x)

These are called lambda functions or lambdas.

  • Lists can be created from semicolon-separated single values or from a range of values with .. in between, such as
let evens = [2; 4; 6; 8]
let firstHundred = [0..100]
  • Higher-order functions can be combined with other functions such as
let firstHundred = [0..100]
List.map (fun x -> x * 2) 
    (List.filter (fun x -> x % 2 = 0) firstHundred)

which produces the following output

val it : int list =
  [0; 4; 8; 12; 16; 20; 24; 28; 32; 36; 40; 44; 48; 52; 56; 60; 64; 68; 72; 76;
   80; 84; 88; 92; 96; 100; 104; 108; 112; 116; 120; 124; 128; 132; 136; 140;
   144; 148; 152; 156; 160; 164; 168; 172; 176; 180; 184; 188; 192; 196; 200]

It first filters the odd numbers out of the firstHundred list and sends the result to the map function to double all the values.

  • Forward-pipe operator can be used to make the code easier to read when functions are chained:
[0..100]
|> List.filter (fun x -> x % 2 = 0)
|> List.map (fun x -> x * 2)
|> List.sum
  • Array indexing is zero-based.



One of Nmap’s many uses is asset management, as it is very good at discovering devices on a network. I’m going to use it to develop a simple IDS (Intrusion Detection System). Of course real IDS software is much more complex, and I hope to look into installing a proper one, like Snort, when I have the time, but for now I’ll just roll my own. My goals in this project are:

  1. Utilise the idle old Raspberry Pi: I used one to build a media server and another one as a security camera. The 3rd one is one of the first releases. It has 256MB of memory and failed to run various projects I tried in the past. It looks like it’s at least good enough to run Nmap, so it may have a use after all.
  2. Practice more Powershell and discover XML parsing capabilities.
  3. Write a Python version of the same script so that everything can run on the Pi, but I’ll defer that to a later date.

Basics

So like every Raspberry Pi project, the first step is to download a distro suitable for the Raspberry Pi and write it to an SD/microSD card. There is a Pi version of the (in)famous security distro Kali Linux. It’s convenient as it comes with all the security tools pre-installed, but for my purposes I just used plain Raspbian as I only need Nmap.

The Nmap that is installed from the Linux repositories was a bit outdated (v6.00) so I decided to download the latest version (v6.47) and build it from source. Even though all I need is a simple command I like to keep my tools current!

How Does It Work

I placed the Pi near the switch so that it can use Ethernet. It gets results much faster so I recommend wired over wireless. So the initial version will work like this:

  1. A cron job runs on the Pi every n minutes. It executes Nmap and uploads the results in XML format to AWS S3.
  2. A scheduled PowerShell task runs on a Windows server. It gets the latest XML from S3 and extracts the active hosts on the network.
  3. It compares the results to a device list and sends a notification if an intruder is detected.

Of course, in order for this to work I first need to assign static IPs to all my devices and record these addresses along with their MAC addresses in the configuration.

Let’s get cracking!

I covered Nmap basics here. In this project all I need is host discovery so I’m not interested in the services, machine names, operating systems etc. I’ll just run this command:

sudo nmap -sn 172.16.1.0/24 -oX output.xml

Also I need my old friend s3cmd so I ran this

sudo apt-get install s3cmd

Then

s3cmd --configure

and entered the credentials for the new IAM user I created who has access only to a single S3 bucket.

So to put it together in a shell script I created the simple script below.

#!/bin/bash

echo "Running Nmap"
sudo nmap -sn 172.16.1.0/24 -oX /home/pi/output.xml

timestamp=$(date +%Y%m%d_%H%M%S)
s3FileName=nmap_output_$timestamp.xml

echo "Uploading the output to S3"
sudo s3cmd put /home/pi/output.xml s3://{BUCKET}/$s3FileName

sudo rm /home/pi/output.xml

To make the script executable I ran this:

sudo chmod 755 my_script

Analyze and alert

So now I have a list of devices running on the network. The Nmap XML output looks something like this:

<?xml version="1.0"?>
<?xml-stylesheet href="file:///usr/bin/../share/nmap/nmap.xsl" type="text/xsl"?>
<!-- Nmap 6.47 scan initiated Mon Dec 15 13:30:01 2014 as: nmap -sn -oX /home/pi/output.xml 172.16.1.0/24 -->
<nmaprun scanner="nmap" args="nmap -sn -oX /home/pi/output.xml 172.16.1.0/24" start="1418650201" startstr="Mon Dec 15 13:30:01 2014" version="6.47" xmloutputversion="1.04">
  <verbose level="0"/>
  <debugging level="0"/>
  <host>
    <status state="up" reason="arp-response"/>
    <address addr="172.16.1.10" addrtype="ipv4"/>
    <address addr="AA:BB:CC:DD:EE:FF" addrtype="mac"/>
    <hostnames>
    </hostnames>
    <times srtt="1142" rttvar="5000" to="100000"/>
  </host>
  <runstats>
    <finished time="1418650208" timestr="Mon Dec 15 13:30:08 2014" elapsed="6.40" summary="Nmap done at Mon Dec 15 13:30:08 2014; 256 IP addresses (11 hosts up) scanned in 6.40 seconds" exit="success"/>
    <hosts up="11" down="245" total="256"/>
  </runstats>
</nmaprun>

And my configuration file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<assetList>
  <host name="Dev" ip="172.16.1.10" mac="12:12:12:12:12:12" />
  <host name="Raspberry Pi XBMC" ip="172.16.1.11" mac="AA:AA:AA:AA:AA:AA" />
  <host name="Printer" ip="172.16.1.12" mac="BB:BB:BB:BB:BB:BB" />
  <host name="iPad" ip="172.16.1.13" mac="CC:CC:CC:CC:CC:CC" />
</assetList>

A gem I discovered about PowerShell is the PowerShell ISE (Integrated Scripting Environment). It supports IntelliSense-type discovery, which makes it much easier and faster to write and run scripts.

Powershell ISE

Into the Powershell

The script does the following

  1. Load the configuration
  2. Get the latest scan output from S3 and load it
  3. Compare the two and see if there are any unknown devices on the network
  4. If there are, send an email notification

Since Powershell is based on .NET framework, working with XML is nothing new. I just used standard XPath queries to match the MAC and IP addresses of the discovered devices to the ones I entered to the configuration file.

Here’s the script:
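
The version below is a simplified sketch rather than the exact code; the bucket name, local paths and mail settings are placeholders, and it assumes the AWS PowerShell module is installed and configured with credentials.

# Load the list of known devices
$config = [xml](Get-Content "C:\ids\assetList.xml")
$knownMacs = $config.assetList.host | ForEach-Object { $_.mac.ToUpper() }

# Download the most recent Nmap output from S3
$bucket = "my-nmap-bucket"
$latest = Get-S3Object -BucketName $bucket |
          Sort-Object LastModified -Descending |
          Select-Object -First 1
Read-S3Object -BucketName $bucket -Key $latest.Key -File "C:\ids\scan.xml" | Out-Null

# Find hosts that are up but whose MAC is not in the asset list
$scan = [xml](Get-Content "C:\ids\scan.xml")
$intruders = @()
foreach ($h in $scan.SelectNodes("/nmaprun/host[status/@state='up']"))
{
    $ip  = $h.SelectSingleNode("address[@addrtype='ipv4']").addr
    $mac = $h.SelectSingleNode("address[@addrtype='mac']").addr
    if ($mac -and ($knownMacs -notcontains $mac.ToUpper()))
    {
        $intruders += "$ip ($mac)"
    }
}

# Send a notification if anything unknown showed up
if ($intruders.Count -gt 0)
{
    Send-MailMessage -From "ids@example.com" -To "me@example.com" `
        -Subject "Unknown device(s) detected on the network" `
        -Body ($intruders -join "`n") -SmtpServer "smtp.example.com"
}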

Time for some action

OK let’s see how we are doing now. After I recorded all the known devices the output of the script was like below:

Script output

One interesting thing to note is that Nmap cannot discover its own MAC address. I guess that’s because it uses ARP to resolve MAC addresses on the local subnet, and it never has its own MAC in its ARP table, so it cannot find it. I decided to skip that entry, but comparing only the IP address may be a better choice in this case. Anyway, I will leave it as is for now.

To test it I turned on my old phone and connected to the network. Within 10 minutes I received the following notification email:

So far so good!

Conclusion

I would never trust such a thing as the ultimate defence mechanism, but even so I believe it may come in handy in some situations. More importantly, this was a fun little project for me as it involved bash scripting, PowerShell, AWS and XML. I’m glad I finally came up with a use for the idle Raspberry Pi, and happy to have discovered PowerShell ISE.



What is Nmap?

Nmap (Network Mapper) is a powerful network scanner that lets you discover the hosts and services on a network. It sends specific packets to remote hosts and analyses the responses to map the network. These packets can be standard ICMP/TCP/UDP packets as well as deliberately malformed packets to observe the hosts’ behaviour.

I believe it is very important to keep an eye on what’s going on in your network. As Nmap is one of the basic tools for this kind of job, I decided to spend some time to cover it and harness it in my own projects.

Specifying Target

First you need to specify your targets. Nmap is very flexible and accepts different notations for specifying targets:

  • Single IP: i.e. 192.168.1.15
  • Name: i.e: www.google.com
  • List: For example, 192.168.1,2.1,10 will scan 192.168.1.1, 192.168.1.10, 192.168.2.1 and 192.168.2.10. Note that the comma separated values are not ranges but single values
  • Range: i.e: 192.168.1.1-10 For ranges hyphen is used as the separator. The start and end values are inclusive. Also one octet can be omitted such as 192.168.-.1. In this case Nmap will scan all IPs from 192.168.0.1 to 192.168.255.1
  • CIDR (Classless Inter-Domain Routing): For example 192.168.1.240/29 will scan 8 IPs from 192.168.1.240 to 192.168.1.247

You can use any combination of these values as a space-separated list: for example, 192.168.1.42 www.google.com will scan the single LAN IP and Google. You can use the -sL parameter to view the target list without scanning it.
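
For example, to list the combined targets above without scanning them:

nmap -sL 192.168.1.42 www.google.com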

Nmap list hosts

In complex scenarios you can use an input file to load the target list by using the -iL flag and providing the file name.

To exclude specific IP addresses, the --exclude flag is used with the same notations.
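
For example, to scan a subnet while skipping a couple of addresses (the IPs here are just illustrative):

nmap 172.16.1.0/24 --exclude 172.16.1.1,172.16.1.20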

Port Scanning

Ports can be specified in 2 ways:

  • Using the -p flag: Single value, comma-separated list, or hyphen-separated values as a range. If just a hyphen is specified it scans all ports from 1 to 65535. The protocol can also be specified, such as T:80,U:53
  • Using nmap-services file: You can refer to a service by name and this file is used to look it up.

Both methods can be used in combination, such as:

nmap -p http*,25,U:53 192.168.1.15

Output

While a scan is running you can get an updated status by hitting Enter. You can also save the output to a file in different formats with the values below following the -o flag:

* N: Normal
* X: XML
* G: Grepable

such as

nmap -v 172.16.1.0/24 -oG output.gnmap

Also a helpful flag is -v for verbose output

An added bonus of using output files is that you can resume a scan by using the --resume flag and specifying the output file name, such as

nmap --resume output.gnmap

Basic scanning options

  • TCP SYN scan (-sS): This is the default option Nmap uses. It’s very fast and can scan thousands of ports per second. It sends a SYN packet to the target and if the port is open the target sends back a SYN/ACK packet. Thus far it’s just like a normal 3-way TCP handshake, but in the final step, instead of sending an ACK, Nmap sends a RST (Reset) packet and cancels the process. Since it has already acquired the information it’s looking for, it doesn’t need to establish an actual connection. If the port is closed the target sends a RST packet. SYN scan is very powerful because it’s fast and quiet as it doesn’t create a session. On Linux, it requires root privileges.

  • TCP connect() scan (-sT): This one uses a full handshake and opens a session, then sends a RST packet to close the session. The advantage of this method over the SYN scan is that it doesn’t require root privileges. As it opens sessions, they are logged, so it’s noisier than a SYN scan.

  • Ping scan (-sn, formerly known as -sP): This is the quickest scan method. For local subnets it uses ARP (Address Resolution Protocol) to identify active hosts. ARP only works on local subnets, so for remote subnets it uses ICMP echo requests. It also sends a TCP ACK packet to port 80, which is completely unexpected for the host as it’s sent out of the blue. So the host sends a RST packet to end the connection (as it’s the right thing to do!), but that helps Nmap identify that there is a host up at that address. This scan is only helpful for identifying hosts rather than ports and services.

  • UDP scan (-sU): This is the only scan that can identify open UDP ports. Since there’s no handshake the overhead is lower compared to a TCP scan. When a port is closed the target returns an ICMP port-unreachable packet, which may increase the number of packets. Like the SYN scan it requires privileged access.

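For example, a SYN scan and a UDP scan of a few selected ports can be combined in a single run (both need root privileges):

sudo nmap -sS -sU -p T:22,80,443,U:53 172.16.1.10
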
In total there are lots of scanning options. You can find the full list here

OS and Service Version Detection

To detect service versions the -sV flag is used. An intensity level between 0 and 9 can be specified; the default is 7.

	nmap -sV --version-intensity 9 172.16.1.10

Versioning can be useful in some cases but also significantly increases the scan time.

For operating system detection -O flag can be used

	nmap -O -v 172.16.1.10

Nmap OS detection

Timing Categories

If you are concerned about being detected when scanning the network

  1. You might be doing something nasty!
  2. You might consider using timing categories to add some delay between packets to evade IDSs

There are 6 categories that can be specified with the -T flag followed by a number from 0 to 5. Alternatively you can use the templates’ names:

  • paranoid (0)
  • sneaky (1)
  • polite (2)
  • normal (3)
  • aggressive (4)
  • insane (5)

With the “paranoid” template Nmap will wait 5 minutes between each probe, making the total scan time very, very long. “Insane” will speed up the process to the point that the delay is down to 5ms. So be careful which option you use. There are many more flags for tailoring the scans to your performance requirements: http://nmap.org/book/man-performance.html
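
For example, a polite scan of the local subnet:

nmap -T2 172.16.1.0/24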

Scripting engine

Nmap comes with an embedded Lua interpreter which is the core of its scripting engine.

By using the -sC flag all scripts in the default category can be executed, such as

nmap -sC www.google.com

Nmap script output

There are lots of scripts, which can be found on the NSE (Nmap Scripting Engine) documentation page.

For example there is a script to scan for the OpenSSL Heartbleed vulnerability. It can be executed as follows:

nmap -p 443 --script ssl-heartbleed <target>

On my machine this script was blocked by Norton!

Norton attack block

So be careful which scripts you run. Your intentions may be misinterpreted if you run them against systems you are not authorized to scan.

Conclusion

Nmap is one of the core tools that hackers (white or black hat) use. So it has many more options geared towards attacking and being stealthy. You can spoof your IP address, use idle stations to avoid detection, etc. I left out many of those options as my intention for studying Nmap is discovering devices on my network so that I can take action if any unknown devices appear. Based on these notes I will develop a simple script/application to find out if anything fishy is going on in my network. I’ll blog about it when it’s ready. Stay tuned!



I know there are very cheap security cameras that you can setup in a few minutes. They may provide security but they cannot provide the satisfaction you get after a DIY project! So let’s dig in just for the fun of it.

Ingredients

Component                             | Price | Where to buy?
Raspberry Pi Starter Kit              | £36   | Amazon
Camera module                         | £17   | Amazon
Protective case for the camera module | £4    | Amazon
Wireless adaptor                      | £6    | Amazon

Once all put together this is what you are going to end up with:

Raspberry Pi Security Camera

Bring it to life

  1. Download a standard distro for Raspberry Pi. I used Raspbian.
  2. Write the image to the SD card. I use Win32 Disk Imager on Windows.

Main course: Motion

There is a great tutorial here for converting your Pi into a security camera which I mostly followed. Basically you enable WiFi, install Motion software and tweak the configuration a bit (image size, framerate etc) and it’s (hopefully) good to go.

The video didn’t work for me unfortunately. It was recording something but only the first frame was visible so it wasn’t any better than a still image. So I decided to ignore videos completely.

Instead of using a network share I decided to upload footage to AWS S3 directly using Amazon S3 Tools. Also don’t forget to clear old footage. Otherwise you can run out of space very quickly. I added a few cron jobs to carry out these tasks for me:

* * * * * s3cmd sync /var/surv/*.jpg s3://{BUCKET NAME}/
0 */2 * * * sudo rm /var/surv/*.avi
0 */6 * * * find /var/surv/* -mtime +1 -exec rm {} \;

It syncs the local folder with S3 bucket, deletes all local video files and files older than a day. I delete the video files more frequently as they take up a lot of space.

Monitoring and Notifications

No system is complete without proper monitoring and notifications. It’s especially important for systems like this that are supposed to run quietly in the background.

Unfortunately in my case it stopped working a few times which made monitoring even more important. I don’t know what’s causing the issue. Maybe it’s because I’m using an older version of Raspberry Pi and it’s not capable of handling all the motion software and S3 uploads etc.

To keep an eye on it, I decided to create a small PowerShell script to check S3 for incoming files and send me a notification if it seems to have stopped uploading.

PowerShell as the glue

Built on the .NET Framework, PowerShell is a very powerful (no pun intended) tool for writing quick and dirty solutions. So first here’s the Send-Mail function:
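
A rough sketch of it is below. This version goes through the SES SMTP interface with the built-in Send-MailMessage cmdlet; the endpoint, credentials and addresses are placeholders.

function Send-Mail
{
    param(
        [string]$subject,
        [string]$body
    )

    # Placeholder SES SMTP settings - replace with your own endpoint and credentials
    $smtpServer = "email-smtp.us-east-1.amazonaws.com"
    $smtpUser   = "SES_SMTP_USERNAME"
    $smtpPass   = ConvertTo-SecureString "SES_SMTP_PASSWORD" -AsPlainText -Force
    $credential = New-Object System.Management.Automation.PSCredential($smtpUser, $smtpPass)

    Send-MailMessage -From "alerts@example.com" -To "me@example.com" `
        -Subject $subject -Body $body `
        -SmtpServer $smtpServer -Port 587 -UseSsl -Credential $credential
}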

I created a separate function for it as it’s a general-purpose feature which can be used in many places. To make it even more generic you can take out the from and to email addresses and add them as parameters to the function.

And here’s the actual notification logic:
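
Again, a simplified sketch; the bucket name and path are placeholders and it assumes the AWS module and credentials are already configured.

# Dot-source the mail helper - adjust the path to wherever send-mail.ps1 lives
. "C:\scripts\send-mail.ps1"

$bucket = "my-camera-bucket"

# Find the most recently uploaded image
$latest = Get-S3Object -BucketName $bucket |
          Sort-Object LastModified -Descending |
          Select-Object -First 1

# Alert if nothing has been uploaded for more than a day
if (((Get-Date) - $latest.LastModified) -gt (New-TimeSpan -Days 1))
{
    Send-Mail -subject "Security camera stopped uploading" `
              -body ("Last upload was at " + $latest.LastModified)
}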

It finds the latest image by sorting them by LastModified field and compares this date with the current date. If it’s been more than 1 day it sends an email. Depending on how often you expect images to be uploaded you can change the alert condition.

To use these scripts you’ll need an AWS account with S3 and SES privileges. Also you have to change the path of send-mail.ps1 in the line where it’s included.



Here’s the scenario:

  • You use AWS
  • You don’t have a static IP
  • You connect to your EC2 instances via SSH and/or RDP only from your IP
  • You are too lazy to update the security groups manually when your IP changes!

You’ve come to the right place: I’ve got the solution.

Let’s have a look how it’s built step-by-step:

Step 1: Create an IAM account to access security groups

As a general rule of thumb always grant the minimum privileges possible to the accounts you use. Create a new user and go to the user’s details. Select Attach User Policy and then Policy Generator. Select AWS EC2 from the services list. For our script to run we need 3 privileges: List security groups (DescribeSecurityGroups), delete old IP permissions (RevokeSecurityGroupIngress) and add new IP permissions (AuthorizeSecurityGroupIngress).

Alternatively you can just attach the following policy to your user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1418339796000",
      "Effect": "Allow",
      "Action": [
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:DescribeSecurityGroups",
        "ec2:RevokeSecurityGroupIngress"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

IAM has a very nice simulation feature. Before you proceed I recommend you run it and verify only 3 actions are allowed.

Step 2: Modify the PowerShell script

The script I created is on Gist as seen below. Before you can use it you have to update it with the access and secret keys of the user created above.

Also, since fiddling with security groups can become messy very quickly, I’d strongly recommend you perform a dry run first. By default $dryRun is set to true. Unless you set it to $false it will only display what it is going to do but will not take any action. So make sure you know what you’re doing before you give it a go. I don’t think this script will be ready-made for anyone; it would probably need some tweaking here and there to tailor it to your needs. But this version works for me, so here it is:
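
What follows is a trimmed-down sketch of it; the access keys, region and IP-check URL are placeholders, and depending on the AWS module version the CIDR ranges may be exposed as Ipv4Ranges/CidrIp instead of IpRanges.

# Placeholder credentials and region
Set-AWSCredential -AccessKey "ACCESS_KEY" -SecretKey "SECRET_KEY"
Set-DefaultAWSRegion -Region "eu-west-1"

$dryRun = $true

# Ask the external IP checker service for the current address
$currentIp = (Invoke-RestMethod "http://check-ip.example.com/").ipAddress + "/32"

foreach ($group in Get-EC2SecurityGroup)
{
    # Only look at SSH and RDP rules
    foreach ($permission in $group.IpPermissions | Where-Object { $_.FromPort -in (22, 3389) })
    {
        foreach ($cidr in $permission.IpRanges)
        {
            if ($cidr -eq $currentIp)
            {
                Write-Host "Security group is up-to-date"
                continue
            }

            Write-Host "$($group.GroupId): replacing $cidr with $currentIp on port $($permission.FromPort)"
            if (-not $dryRun)
            {
                $oldRule = @{ IpProtocol = "tcp"; FromPort = $permission.FromPort; ToPort = $permission.ToPort; IpRanges = $cidr }
                $newRule = @{ IpProtocol = "tcp"; FromPort = $permission.FromPort; ToPort = $permission.ToPort; IpRanges = $currentIp }
                Revoke-EC2SecurityGroupIngress -GroupId $group.GroupId -IpPermission $oldRule
                Grant-EC2SecurityGroupIngress  -GroupId $group.GroupId -IpPermission $newRule
            }
        }
    }
}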

First it gets a list of security groups that have SSH and RDP permissions in them. Then loops through these permissions and compares the IP address with the current one. I used my own external IP checker service that I’ve recently developed as I blogged here. You can use other services as well. Just make sure you change the URL in the script. My service returns a JSON object so if the method you use returns a different format you need to modify the parsing code as well.

If the IP addresses are different, it revokes the old permission and creates a new one with your current IP. Protocol and ports remain intact.

This is the output of the script:

If the IP addresses for ports 22 and 3389 are up-to-date it just displays “Security group is up-to-date”, so it can be run repeatedly and you can schedule it to run as often as you want.



Enter Node.js

Node.js is a popular platform for developing JavaScript applications. Internally it uses Google’s V8 JavaScript engine and enables you to develop fast, scalable, event-driven JavaScript applications. I’ve recently developed my first REST API using Node and in this post I’ll talk about the steps required to develop and deploy a Node application.

Learn

Needless to say there are many resources to learn Node (as with any such popular environment). I found the Microsoft Virtual Academy’s training quite useful. It’s very enjoyable and free. I’ll provide more recommendations as I go deeper.

Setup

I used my main dev machine for this project which is running Windows 8.1. A nice thing about Node.js is that it’s cross-platform so you can use Linux, Mac or Windows.

On Windows, simply download and run the installer on this page. Alternatively, if you like Chocolatey, you can run the following command in a command prompt (you may need to run it as administrator):

choco install nodejs.install

Develop

There is a module that makes life a lot easier if you are planning to develop an API with Node, and it’s called Express.

We’re going to use express in our API so first we need to install it using Node Package Manager (NPM)

npm install express 

Now that we have everything we need, we can write the actual code for the endpoint:

var express = require('express');
var app = express();

app.get('/', function (req, res) {
    var remoteAddress = req.headers['x-forwarded-for'] || 
    				  req.connection.remoteAddress;
    res.json({ "ipAddress": remoteAddress });
});

app.listen(process.env.PORT || 80);

Just getting remoteAddress wouldn’t work when your application is not accepting connections from users directly, which is generally the case in large applications where load balancers face the clients.

For example, in the sample output above it’s obviously getting a 10.x.x.x IP from remoteAddress. So we check for the x-forwarded-for HTTP header and use it if present.

Deploy

First I installed Node on a Linux box on AWS by simply running

curl -sL https://deb.nodesource.com/setup | sudo bash -
sudo apt-get install -y nodejs

but then I recalled I could use Heroku to host node.js applications for free. Why would I keep an eye on the service when the nice guys at Heroku are volunteering to do it for me for free, right? So I created an application on Heroku by following this guide.

Basically you can deploy by just following these simple steps:

  1. Download and install Heroku Toolbelt from here
  2. Clone your git repository and navigate to that folder using a command prompt.
  3. Create an app on heroku by running

     heroku create --http-git
    

    This will add a remote to your git repository

    Heroku remote

  4. Now that you have the remote you can push your code to it

     git push heroku master
    

    Heroku deploy app

  5. [Optional] Add a custom domain and change the application name

    If you don’t want to use the Heroku domain and/or the application name it assigns automatically, you can change them. First go to Dashboard –> Personal Apps and click on the application. Then click Settings. You can rename the application there directly but it breaks your git remote. So I suggest doing it via the command line by running

     heroku apps:rename newname --app oldname
    

    On the same page you can add your custom domain. Of course you have to redirect your domain/subdomain to heroku’s URL by adding a CNAME record to your zone file.

    Heroku custom domain

Test

So we have an up-and-running REST API and let’s see it in action. One way to test it with several IP addresses is using free proxy servers that float around the net (for some reason). I visited HideMyAss to quickly get a few. It may take a few tries to find a working proxy but here is what we’re looking for:

My external IP is the same as the proxy address I just set. While testing with a few different proxy servers I came across this rather unexpected result:

I deployed a version with more detailed output to understand what was going on. It looks like the X-Forwarded-For header was set to “127.0.0.1, 177.99.93.214”. My service currently doesn’t support extracting the actual external IP from such comma-separated lists. First I need to look into it and see whether it’s a common and acceptable practice conforming with the standards or just weird behaviour from a random server. But the solution is obviously simple to implement: just split the string on commas and get the last entry.

Enjoy!

So now instead of whatismyip.com I can use the service I built with my own two hands: http://check-ip.herokuapp.com/. It’s open-source so feel free to fiddle at will!

It’s a simple API anyway but it feels nice to see it running so quickly. Also no maintenance is required thanks to Heroku. For me it was a nice introduction to Node.js world. I hope this post comes in handy to other people as well.

UPDATE

As I pointed out in one example above, during my tests I got comma-separated IP addresses in the x-forwarded-for field. It turns out it’s not weird behaviour but perfectly legitimate, as described in the RFC for the header:

The “for” parameter is used to disclose information about the client that initiated the request and subsequent proxies in a chain of proxies. When proxies choose to use the “for” parameter, its default configuration SHOULD contain an obfuscated identifier as described in Section 6.3.

So I updated the application to split the proxy chain on commas and return the last entry in the list.
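
In the handler that boils down to something like this (same variable names as in the earlier snippet):

var forwardedFor = req.headers['x-forwarded-for'];
var remoteAddress = forwardedFor
    ? forwardedFor.split(',').pop().trim()   // last entry of the proxy chain
    : req.connection.remoteAddress;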

As shown above there are two entries in the forwarded for header but the result IP address is the one I set in Firefox.



Conclusion and List of Posts

The general consensus is that the new features are just small increments to improve productivity. They will help to clean up existing code. Less code helps you focus on the actual business logic instead of the clutter caused by the language.

For easy navigation I listed the links for all the previous posts:

Table of contents

  1. C# 6.0 New Features - Introduction
  2. Auto-Properties with Initializers
  3. Using statements for static classes
  4. Expression-bodied methods
  5. String interpolation
  6. Index initializers
  7. Null-conditional operators
  8. nameof operator
  9. Exception-handling improvements



There are 2 improvements to exception handling:

  1. Exception filters
  2. Using await in catch and finally blocks

Exception Filters

Visual Basic and F# already have this feature and now C# has it too! The way it works is basically defining a condition for the catch block (example taken from the Channel 9 video):

try
{

}
catch(ConfigurationException e) if (e.IsSevere)
{

}

I think it can make exception handling more modular. Also it’s better than catching and rethrowing in that we don’t lose information about the original exception.

Using await in catch and finally blocks

Like most people I hadn’t noticed we couldn’t do that already! Apparently it was just a flaw in the current implementation and they closed that gap with this version

try
{

}
catch(ConfigurationException e) if (e.IsSevere)
{
	await LogAsync(e);
}
finally
{
	await CloseAsync();
}


Personally I think this one is a bit trivial. So the argument is it eliminates the need for using hard-coded strings in the code.

For instance:

public class NameofOperator
{
    public void Run(SomeClass someClass)
    {
        if (someClass == null)
        {
            throw new ArgumentNullException("someClass");
        }
    }
}

public class SomeClass
{
}

Say you refactored the code and changed the parameter name in this example. It is easy to forget to change the name in the exception-throwing line since the string has no reference to the actual parameter.

By using nameof operator we can avoid such mistakes:

public class NameofOperator
{
    public void Run(SomeClass refactoredName)
    {
        if (refactoredName == null)
        {
            throw new ArgumentNullException(nameof(refactoredName));
        }
    }
}

public class SomeClass
{
}

The results are identical, but this way when we change the parameter name all references to it will be updated automatically.


This is another handy feature. Checking for null values before accessing them can quickly become cumbersome and yields a lot of boilerplate code. With this new operator checking for nulls and coalescing becomes really short and easy to read.

For example:

public class NullConditionalOperators
{
    public void Run()
    {
        Person person = GetPerson();

        // Current C#
        if (person != null && person.Country != null)
        {
            Console.WriteLine(person.Country.Name);
        }
        else
        {
            Console.WriteLine("Undefined");
        }
    }

    private Person GetPerson()
    {
        return new Person() { Firstname = "Volkan", Lastname = "Paksoy" };
    }
}

public class Person
{
    public string Firstname { get; set; } = "Unknown";
    public string Lastname { get; set; } = "Unknown";
    public Country Country { get; set; }
    
}

public class Country
{
    public string Name { get; set; }
    public string IsoCode { get; set; }
}

In the example above, if you need to print the name of the country, first you need to ensure both the Person and Country objects are not null. The if block above can be reduced to a one-liner with 6.0:

	Console.WriteLine(person?.Country?.Name ?? "Undefined");

They both produce the same results. The more complex the object hierarchy becomes the more useful this feature would be.


In current C# a collection initialization can be done like this:

var result = new Dictionary<string, string>();
result.Add("index1", "value1");
result.Add("index2", "value2");

or key-value pairs can be added during initialization:

var result = new Dictionary<string, string>() 
{
	{"index1", "value1"},
	{"index2", "value2"}
};

With C# 6.0 values at specific indices can be initialized like this:

var result = new Dictionary<string, string>() 
{
	["index1"] = "value1",
	["index2"] = "value2"
};

It’s a shorthand but not so much! I don’t see much value in this notation but I’m sure in time it will prove itself. I don’t think the guys in the language team are just adding random features!


One of my favorite features is the new string formatting using string interpolation. In the past I encountered a lot of errors while formatting strings, especially when preparing log messages. You may need lots of small pieces of data, so after a few iterations you may forget to add new parameters.

For example, in the imaginary Log method below only 3 parameters are supplied whereas the string expects 4. It compiles successfully because the string is processed at run-time and the number of placeholders is not checked against the number of parameters supplied.

Argument count mismatch error

Using the new feature such errors can be avoided as we can put the values directly in their places in the string:

public class StringInterpolation
{
    public string Log(string timestamp, string application, string error, string status)
    {
        return string.Format("[Timestamp: \{timestamp}], Application: [\{application}], Error: [\{error}], Status [\{status}]");
    }
}

No more parameter mismatch errors!


It’s a shorthand for writing methods. The body now can be written just like a Lambda expression as shown in Log2 method below:

public string Log(string timestamp, string application, string error, string status)
{
    return string.Format("[Timestamp: \{timestamp}], Application: [\{application}], Error: [\{error}], Status [\{status}]");
}

public string Log2(string timestamp, string application, string error, string status) => string.Format("[Timestamp: \{timestamp}], Application: [\{application}], Error: [\{error}], Status [\{status}]");

It may come in handy for helper methods. The only benefit I can see is getting rid of opening and closing curly braces which generally don’t bother me much. But I know lots of people trying to avoid curly braces as much as possible. I’m sure this feature will be popular among them.


Currently using statements are for namespaces only. With this new feature they can be used for static classes as well, like this:

using System.IO;
using System.IO.File;

namespace CSharp6Features
{
    class UsingStaticClass
    {
        public class StaticUsing
        {
            public StaticUsing()
            {
                File.WriteAllText("C:\test.txt", "test");
                WriteAllText("C:\test.txt", "test");
            }
        }
    }
}

I don’t think I like this new feature. If you see a direct method call it feels like it’s a member of the current class, but now it’s possible the method is defined inside a static class somewhere else. I think it would just cause confusion and doesn’t add any benefit.


Currently, in Visual Studio 2013, if you have a line like this

public int MyProperty { get;  }

you’d get a compilation error like this:

Getter-only auto-property error

But the same code in VS 2015 compiles happily. The reason to add this feature is to not get in the way of immutable data types.

Another new feature about auto-properties is initializers. For example the following code would compile and run with new C#:

public class AutoInit
{
    public string FirstName { get; } = "Unknown";
    public string LastName { get; } = "Unknown";

    public AutoInit()
    {
        Console.WriteLine(string.Format("{0} {1}", FirstName, LastName));
        FirstName = "Volkan";
        LastName = "Paksoy";
        Console.WriteLine(string.Format("{0} {1}", FirstName, LastName));
    }
}		

and the output unsurprisingly looks like this:

Auto-property initializer output

When I first ran this code successfully I was surprised how I managed to set values without a setter. It looks like under the covers it’s generating a read-only backing field for the property and just assigning the value to the field instead of calling the setter method. It can easily be seen using a decompiler:

Just Decompile output

As it’s a read-only value it can only be set inside the constructor. So if you add the following method it wouldn’t compile:

public void SetValue()
{
    FirstName = "another name";
}

Auto-property set error

It’s a small improvement providing an alternative way to write the same code in fewer lines.


A new Microsoft

These are exciting times to work with Microsoft technologies as the company seems to be changing its approach drastically. They are open-sourcing a ton of projects, including the .NET Framework itself. Maybe the best of all is that the next version of ASP.NET will be cross-platform. There are already some proof-of-concept projects that run an ASP.NET vNext application on a Raspberry Pi. I like the direction they are taking, so I think it’s a good time to catch up with these new instalments of my favorite IDE and programming language.

New features in a nutshell

Looking at the new features it feels like they are all about improving productivity and reducing the clutter with shorthands and less code overall. (It’s also confirmed by Mads Torgersen in his video on Channel 9)

If you check out the resources at the end of this post you’ll notice that there is quite a flux in the features mentioned in various sources. I’ll use the Channel 9 video as my primary source. It features a PM on the language team and it’s the most recent source, so it sounds like the most credible of all.

Here’s the list of new features:

  1. Auto-Properties with Initializers
  2. Using statements for static classes
  3. String interpolation
  4. Expression-bodied methods
  5. Index initializers
  6. Null-conditional operators
  7. nameof operator
  8. Exception-handling improvements

I was planning to go over all of these features in this post but with sample code and outputs it quickly became quite lengthy so I decided to create a separate post for each of them. Watch this space for the posts about each feature.


Cloud computing is a relatively new concept, especially when compared to FTP which dates back to 70s (History of FTP server). So not every device supports S3 uploads. If you cannot force a device to upload directly to S3 and have control over the FTP server machine (and assuming it’s running Windows) you can create a simple PowerShell script to upload files to S3.

FTP to S3

First you need to install AWS Tools for Windows. I tested on a couple of machines and the results were dramatically different. My main development machine is running Windows 8.1 and has PowerShell v4 on it. I had no issues with using AWS commandlets in this environment. The VM I tested has PS v2 on it and I had to make some changes first.

v2 vs. v4

The problem is that the AWS module is not loaded by default and you have to do it yourself with this command:

import-module "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.psd1"

After this command you can use AWS commandlets but when you close and open another shell it will be completely oblivious and will deny knowing what you’re talking about! To automate this process you need to add it to your profile. AWS documentation tells you to edit your profile straight away but the profile did not exist in my case. So first check if the profile exists:

Test-Path $profile

If you get “False” like I did then you need to create a new profile first.

To create the profile run the following command:

New-Item -path $profile -type file -force

then you can edit the profile by simply running

notepad $profile

Add the import-module command above and save the file. From now on every time you run PowerShell it will be ready to run AWS commandlets.

Time to upload

Now that we are done with the troubleshooting we can finally upload our files. The commandlet we need is called Write-S3Object. The parameters it requires are the target bucket name, source file, target path, and the credentials.

Write-S3Object -BucketName my-bucket -File file.txt -Key subfolder/remote-file.txt -CannedACLName Private -AccessKey accesskey -SecretKey secretKey

Most likely you would like to upload a bunch of files under a folder. In order to accomplish that you can create a simple PowerShell script like this one:

$results = Get-ChildItem .\path\to\files -Recurse -Include "*.pdf" 
foreach ($path in $results) {
	Write-Host $path
	$filename = [System.IO.Path]::GetFileName($path)
	Write-S3Object -BucketName my-bucket -File $path -Key subfolder/$filename -CannedACLName Private -AccessKey accessKey -SecretKey secretKey
}



As you probably know I’ve migrated to GitHub Pages from WordPress as I blogged here.

It was a fairly easy migration but migrating the actual content proved to be trickier. There are lots of resources on using Jekyll’s importers. I found this one useful. Just export everything to an XML and run the converter to get the posts in markdown. The problem is the YAML Front Matter it generates is a bit messy:

---
layout: post
title: Blind Password Masking
date: 2011-06-14 03:46:18.000000000 +00:00
categories:
- Off the Top of My Head
tags: []
status: publish
type: post
published: true
meta:
  _edit_last: '1'
author:
  login: blogadmin
  email: admin@myvirtualhome.net
  display_name: Volkan
  first_name: ''
  last_name: ''
---

I don’t want or need most of this stuff anyway! Also I had two main issues:

  • Images didn’t work as it didn’t get the full path. I use S3 to host all images but the imported posts were converted to use a local assets folder. There may be a configuration setting for that but in my case I decided to convert all my posts to markdown from HTML anyway (which was a great way to practice Markdown)
  • The main issue was with Disqus. It’s not like people are racing to submit comments to my ramblings, but still I’d like to have Disqus enabled on all my posts. Apparently, to enable comments you need to specify it in the front matter like this:
comments: true

Manual vs. programmatic

At first I resisted the temptation to write a small application to convert the layouts, but manual conversion soon proved to be very time-consuming with 100+ posts. So I developed the simple console application below. It scans the folder you specify (filtering *.markdown files), reads the existing front matter and converts it to the format I wanted:

---
layout: post
title: @TITLE
date: @DATE
categories: [@CATEGORIES]
comments: true

I wanted to keep it simple and clean. Also as all my posts are now in pure markdown I can easily loop through and update the elements (like converting H3 to H2 or adding tags to layout etc)

Source Code
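
A simplified sketch of the converter is below; ROOT_FOLDER is a placeholder path and the front matter parsing is naive string handling rather than a proper YAML library.

using System;
using System.IO;
using System.Linq;

// Simplified sketch of the converter: scans ROOT_FOLDER for *.markdown files,
// rewrites the front matter and writes the result to "<file>.output".
class Program
{
    private const string ROOT_FOLDER = @"C:\blog\_posts"; // placeholder - change to your posts folder

    static void Main()
    {
        foreach (var file in Directory.GetFiles(ROOT_FOLDER, "*.markdown"))
        {
            ConvertFile(file);
        }
    }

    static void ConvertFile(string inputFilePath)
    {
        var lines = File.ReadAllLines(inputFilePath);

        // The front matter sits between the first two "---" lines
        int end = Array.IndexOf(lines, "---", 1);
        var frontMatter = lines.Take(end + 1).ToArray();
        var body = lines.Skip(end + 1);

        string title = GetValue(frontMatter, "title:");
        string date = GetValue(frontMatter, "date:");
        string categories = string.Join(", ",
            frontMatter.SkipWhile(l => l != "categories:").Skip(1)
                       .TakeWhile(l => l.StartsWith("- "))
                       .Select(l => l.Substring(2)));

        var output = new[]
        {
            "---",
            "layout: post",
            "title: " + title,
            "date: " + date,
            "categories: [" + categories + "]",
            "comments: true",
            "---"
        }.Concat(body);

        string outputFilePath = inputFilePath + ".output";
        File.WriteAllLines(outputFilePath, output);
        File.Delete(inputFilePath);
        // File.Move(outputFilePath, inputFilePath);
    }

    static string GetValue(string[] frontMatter, string key)
    {
        var line = frontMatter.FirstOrDefault(l => l.StartsWith(key));
        return line == null ? "" : line.Substring(key.Length).Trim();
    }
}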

Usage

It probably won’t apply to most situations but it helped me out, so why not publish it just in case, right? To use it you have to change the ROOT_FOLDER value in the Program class at the bottom (do NOT forget to back up your posts first!)

As I wanted to revisit all my posts, I wanted to mark them instead of replacing the originals automatically. So when you run the program it deletes the original post and creates the updated one with “.output” appended. That way you can easily find which files were modified by checking the extension. If you want it to replace the original post you can uncomment this line at the end of the ConvertFile method:

 // File.Move(outputFilePath, inputFilePath);



On Wednesday I attended this meetup. It was fairly informative. There is a bit of uncertainty about the new features of Windows 10, Visual Studio 2015 and C# 6.0, but it was still good to cover most of them in a few hours.

It was divided into two main sections: Windows 10 and VS 2015/C# 6.0. I’m planning to review these myself and blog about them separately but here are the highlights of the events:

Windows 10

When they first announced the name I immediately thought about Winamp skipping version 4. We all know how that story ended so I hope Windows 10 fares better than that!

A few items discussed in the event about Windows 10:

  • The first demonstration was sending feedback, which was painfully slow; it took forever to send a simple feedback message.
  • Some enterprise features are coming, like having a store and custom company portals.
  • There was a rather long discussion about what the 4th device from the left was in the “One Windows” image below.

I don’t think it makes any difference though, but the idea is having one code base and one store to deploy apps to various devices with different form factors. Sounds cool, but I’m a bit cautious about it for the time being. If it sounds too good to be true, it generally isn’t!

  • It looks like they are going to use hamburger icons everywhere, which they initially opposed.
  • There seem to be back buttons on a few screens, which is a bit unusual, so they may have to deal with some backlash from people like they did for their brilliant(!) charms invention.

Visual Studio 2015

Preparing for Windows 10 Event

  • There will be a free version of VS 2015 and there is already a free version of VS 2013 called “Community Edition”. It’s said to be equivalent to Professional, so it sounds cool to have it for free.
  • Like all Microsoft products, the versioning is a bit off and confusing here as well. There was a discussion about the version number vs. the release year: VS 2013 is actually version 12 but VS 2015 is version 14, even though there is no other VS in the middle!
  • A research project called Pex, used to generate unit tests, made its way into VS 2015 as Smart Unit Tests. It should help create a lot of boilerplate test code.

C# 6.0

  • Not a C# feature but one of the most impressive developments nowadays is that .NET Framework has become open source.
  • The new version is coming with a much faster 64-bit JIT compiler called RyuJIT
  • .NET Native (Project N) is coming, which supposedly makes applications run faster
  • The compiler has been completely rewritten and its name is now Roslyn. It’s open source and it exposes APIs that can be consumed by any application, so the compiler doesn’t have to be a black box anymore.

Conclusion

A lot of exciting and revolutionary developments are going on in the .NET world these days. Taking a more open-source and multi-platform approach will definitely help the platform in the future. It’s thrilling to be a developer and experience it first hand as the news comes along. I have my virtual machine running Windows 10 and VS 2015, so I’ll play around more and blog about these specific features in detail in the near future.



On the road again…

Using WordPress for my blog had been bugging me for quite some time. So I’ve started using the popular static site generator Jekyll and hosting my blog on GitHub Pages. The main reasons for this were:

  • Security: Over time you install a lot of plugins and any number of them can come with vulnerabilities. Granted they are optional and they are installed because they provide nice functionality but it would be much better to delegate the security of the system to GitHub.
  • Maintenance: I used to host my blog on AWS which is relatively easy to maintain but still I was responsible for keeping that machine up and running at all times.
  • Database: Using a database is overkill when all I’m doing is generating some static content. Database comes with performance impact, maintenance and backup requirements.
  • Scalability: No need to worry about scaling as GitHub takes care of it all for free!
  • Versioning: Just like any project on GitHub you have full control over the content and you can rollback to a previous version anytime.
  • Performance: All content is served in static pages. So compared to retrieving it from the database and generating the page on the fly it’s obviously much faster and scalable.

Beautifying the content

Compared to HTML, markdown is so elegant and concise. No more pesky attributes and ugly tags mingled with text. Granted WordPress has plugins for writing posts in Markdown but in GitHub it’s a first-class citizen and supported from the get-go.

Drawbacks

  • WordPress has its own merits, like having a gazillion plugins and themes. GitHub Pages supports a limited number of plugins for security reasons.
  • SEO must be taken care of manually, whereas WordPress already has lots of plugins for that purpose too.
