dev, aws, route53, angular, dotnet core, dynamic dns

A few years back I developed a project called DynDns53. I was fed up with the dynamic DNS tools available and thought I could easily achieve the same functionality myself since I was already using AWS Route53.

Fast forward a few years: due to some neglect on my part and technology moving so fast, the project started to feel outdated and abandoned. So I decided to revise it.

Key improvements in this version are:

  • Core library is now available in NuGet so anyone can build their own clients around it
  • A new client built with .NET Core so that it runs on all platforms now
  • A Docker version is available that runs the .NET Core client
  • A new client built with Angular 5 to replace the legacy AngularJS
  • CI integration: Travis is running the unit tests of the core library
  • Revised WPF and Windows Service clients and fixed bugs
  • Added more detailed documentation on how to set up the environment for various clients

I also kept the old repository but renamed it to dyndns53-legacy. I might archive it at some point as I’m not planning to support it any longer.

Available on NuGet

NuGet is a great way of installing and updating libraries. I thought it would be a good idea to make use of it in this project so that it can be used without cloning the repository.
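Once it’s published, pulling the library into a project is a single command with the .NET Core CLI:

    dotnet add package DynDns53.CoreLib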

With .NET Core it’s quite easy to create a NuGet package. Just navigate to the project folder (where the .csproj file is located) and run this:

dotnet pack -c Release

The default configuration it uses is Debug, so make sure you’re using the correct build and a matching pack command. You should be able to see a screen similar to this:

Then push it to NuGet:

dotnet nuget push ./bin/Release/DynDns53.CoreLib.1.0.0.nupkg -k {NUGET.ORG API_KEY} -s https://api.nuget.org/v3/index.json

To double-check you can go to your NuGet account page and under Manage Packages you should be able to see your newly published package:

Now we play the waiting game! It may take some time for the package to be processed by NuGet. For example, I saw the warning shown in the screenshot 15 minutes after I pushed the package:

Generally this is a quick process but the first time I published my package, I got my confirmation email about 7 hours later so your mileage may vary.

If you need to update your package after it’s been published, make sure to increment the version number before running dotnet pack. In order to do that, you can simply edit the .csproj file and change the Version value:

    <Authors>Volkan Paksoy</Authors>
    <Version>1.0.1</Version>


  • Regarding the NuGet API key: they recently changed their approach to keys. Now you only have one chance to save your key somewhere else. If you don’t save it, you won’t be able to access it via their UI. You can create a new one of course, so it’s no big deal, but to avoid key pollution you might want to save it in a safe place for future reference.

  • If you are publishing packages frequently, you may not be able to get the updates even after they have been published. The reason is that the packages are cached locally, so make sure to clean your cache before you try to update them. On Mac, Visual Studio doesn’t have a Clean Cache option as of this writing (unlike Windows), so you have to go to your user folder and remove the packages under the {user}/.nuget/packages folder. After this, you can update the packages and you should get the latest validated version from NuGet.
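Alternatively, the .NET Core CLI can clear all local NuGet caches in one go:

    dotnet nuget locals all --clear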

.NET Core Client


First, you’d need an IAM user who has access to Route53. You can use the policy template below to give the minimum possible permissions:

    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": "arn:aws:route53:::hostedzone/{ZONE ID}"

Only 2 actions are performed, so as long as you remember to update the policy with the new zone IDs if you need to manage other domains, this should work fine for you.


Basic usage is very straightforward. Once compiled you can supply the IAM Access and Secret Keys and the domains to update with their Route53 Zone IDs as shown below:

dotnet DynDns53.Client.DotNetCore.dll --AccessKey {ACCESS KEY} --SecretKey {SECRET KEY} --Domains ZoneId1:Domain1 ZoneId2:Domain2 


  • The .NET Core console application uses the NuGet package. One difference between .NET Core and classic .NET applications is that the packages are no longer stored along with the application. Instead they are downloaded to the user’s folder, under the .nuget folder (e.g. on a Mac it’s located at /Users/{USERNAME}/.nuget/packages)

Available on Docker Hub

Even though it’s not a complex application, I think it’s easier and hassle-free to run it in a self-contained Docker container. Currently it only supports Linux containers. I might develop a multi-architecture image in the future if need be, but for now Linux-only is sufficient for my needs.


You can get the image from Docker Hub with the following command:

docker pull volkanx/dyndns53

and running it is very similar to running the .NET Core Client as that’s what’s running inside the container anyway:

docker run -d volkanx/dyndns53 --AccessKey {ACCESS KEY} --SecretKey {SECRET KEY} --Domains ZoneId1:Domain1 ZoneId2:Domain2 --Interval 300

The command above runs the container in detached mode so that it can keep updating the DNS records every 5 minutes (300 seconds).


  • I had an older Visual Studio 2017 for Mac installation and it didn’t have Docker support. The installer is not granular enough to pick specific features, so my solution was to reinstall the whole thing, at which point Docker support was available in my project.

  • After adding Docker support the default build configuration becomes docker-compose. But it doesn’t work straight away as it throws an exception saying

      ERROR: for dyndns53.client.dotnetcore  Cannot start service dyndns53.client.dotnetcore: Mounts denied: 
      The path /usr/local/share/dotnet/sdk/NuGetFallbackFolder
      is not shared from OS X and is not known to Docker.
      You can configure shared paths from Docker -> Preferences... -> File Sharing.
      See for more info.

I added the folder it mentions in the error message to shared folders as shown below and it worked fine afterwards:

  • Currently it only works on Linux containers. There’s a nice article here about creating multi-architecture Docker images. I’ll try to make mine multi-arch as well when I revisit the project or when there is an actual need for it.

Angular 5 Client

I’ve updated the web-based client using Angular 5 and Bootstrap 4 (Currently in Beta) which now looks like this:

I kept a copy of the old version which was developed with AngularJS. It’s available at this address:


  • After I added the AWS SDK package I started getting a nasty error:

      ERROR in node_modules/aws-sdk/lib/http_response.d.ts(1,25): error TS2307: Cannot find module 'stream'.

    Fortunately the solution is easy, as shown in the accepted answer here. Just remove the “types: []” line from the file. Make sure you’re updating the correct file though, as there is a similarly named tsconfig.json in the root; the one we are after is under the src folder.

  • In this project, I use 3 different IP checkers (AWS, DynDns and a custom one I developed myself a while back that runs on Heroku). Calling these from other clients is fine, but in the web application I bumped into CORS issues. There are a couple of possible solutions for this:

    1. Create your own API to return the IP address: In the previous version, I created an API with AWS API Gateway which uses a very simple Lambda function to return the caller’s IP address:

       exports.handler = function(event, context) {
           context.succeed({ "ip": event.ip });
       };

      I created a GET method for my API and used the Lambda function above. Now that I had full control over it, I was able to enable CORS as shown below:

    2. The other solution is “tricking” the browser by injecting CORS headers using a Chrome extension. There are a number of them but I use the one aptly named “Allow-Control-Allow-Origin: *”

      After installing it you just enable it, and getting the external IP works fine.

      It’s good practice to filter it for your specific needs so that it doesn’t affect other sites (I had some issues with Google Docs when it was turned on).

CI Integration

I created a Travis integration, which is free since my project is open-source. It runs the unit tests of the core library automatically. I also added the shiny badge that shows the build status to the project’s readme file.
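For reference, a minimal .travis.yml for a .NET Core test run looks something like this (the SDK version and test project path are illustrative, not the exact values in the repository):

    language: csharp
    mono: none
    dotnet: 2.0.0
    script:
      - dotnet restore
      - dotnet test DynDns53.CoreLib.Tests/DynDns53.CoreLib.Tests.csproj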


google cloud platform, dev, speech-to-text

Just out of curiosity I wanted to play around with Google Cloud Platform. They give $300 free credit for a 12 month trial period so I thought this would be a good chance to try it out.

The APIs I wanted to sample were speech recognition and translation.

Setting Up SDK

I followed the quick start guide which is a step-by-step process so it was quite helpful to get acquainted with the basics.

To be able to follow the instructions I downloaded and installed the GCloud SDK. On Mac it’s quite easy:

curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init

And once it’s complete, it requires you to log in to your account and grant access to the SDK:

Testing the API

After the initial setup I tried the sample request and it worked just fine:

The example worked but also raised a few questions in my mind:

  1. The sample uses the gs protocol. First off, what does it mean?
  2. Can I use good ol’ http instead and point to any publicly accessible audio file?
  3. Can I use MP3 as encoding or does it need to be FLAC?

As I learned from this SO thread, gs is the scheme for Google Cloud Storage URIs: an https://storage.googleapis.com/{bucket}/{path} URL translates to gs://{bucket}/{path}.

I was able to verify that the http version of the test file actually exists, but when I replaced the original gs:// value with it I got this error:

    "error": {
        "code": 400,
        "message": "Request contains an invalid argument.",
        "status": "INVALID_ARGUMENT"

This also answered my second question. According to the documentation it only supports Google Cloud Storage currently:

uri contains a URI pointing to the audio content. Currently, this field must contain a Google Cloud Storage URI (of format gs://bucket-name/path_to_audio_file).

The answer to my 3rd question wasn’t very promising either. Apparently only a handful of encodings (FLAC, LINEAR16 and a few other lossless or telephony formats) are supported; MP3 is not among them.

If the authorization token expires, you can generate a new one by using the following commands:

export GOOGLE_APPLICATION_CREDENTIALS="/Path/To/Credentials/Json/File"

gcloud auth application-default print-access-token

So there's no way of uploading a random MP3 and getting text out of it. But I’ll of course try anyway :-)

Test Case: Get lyrics for a Rammstein song and translate

OK, now that I have a free trial at my disposal and have everything set up, let’s create some storage, upload some files and put it to a real test.

Step 01: Get some media

My goal is to extract the lyrics of a Rammstein song and translate them to English. For that I chose the song Du Hast. Since I couldn’t find a way to download a FLAC version of the song, I decided to download the official video from Rammstein’s YouTube channel.

This was just for experimental purposes and I deleted the video after I was done testing, so it should be fine I guess. To download videos from YouTube you can refer to this TechAdvisor article.

I simply used VLC to open the YouTube video. In Window -> Media Information dialog it shows the full path of the raw video file and I copied that path into a browser and downloaded the video.

Step 02: Prepare the media to process

Since all I need is audio, I extracted it from the video file using VLC. It can probably be done in a number of ways, but it’s quite straightforward with VLC:

Click File –> Convert & Stream, drag and drop the video

In the Choose Profile section, select Audio - FLAC.

The important bit here is that by default VLC converts to stereo audio with 2 channels, but Google doesn’t support that, which is explained in this documentation:

All encodings support only 1 channel (mono) audio

So make sure to customize it and enter 1 as channel count:

Step 03: Call the API

Now I was ready to call the API with my shiny single-channel FLAC file. I uploaded it to the Google Storage bucket I created, gave public access to it and tried the API.

Apparently, the speech:recognize endpoint only supports audio up to a minute. This is the error I got after posting a 03:55 audio clip:

“Sync input too long. For audio longer than 1 min use LongRunningRecognize with a ‘uri’ parameter.”

The solution is using the speech:longrunningrecognize endpoint, which only returns a JSON with a single value: name. This is a unique identifier assigned by Google to the job they created for us.

Once we have this id we can query the result of the process by calling the GET operations endpoint.
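For reference, here is a minimal sketch of the two calls in PowerShell, assuming the v1 REST endpoints; the bucket, file name and audio settings below are placeholders rather than the exact values I used:

    # Get a token from the gcloud SDK and build the request
    $token = gcloud auth application-default print-access-token
    $headers = @{ Authorization = "Bearer $token" }
    $body = @{
        config = @{ encoding = "FLAC"; sampleRateHertz = 44100; languageCode = "de-DE" }
        audio  = @{ uri = "gs://my-bucket/du-hast.flac" }
    } | ConvertTo-Json

    # Start the job; the response only contains the operation "name" (the job id)
    $job = Invoke-RestMethod -Method Post -ContentType "application/json" -Headers $headers `
        -Uri "https://speech.googleapis.com/v1/speech:longrunningrecognize" -Body $body

    # Poll the operations endpoint with that id until it reports done
    Invoke-RestMethod -Headers $headers -Uri "https://speech.googleapis.com/v1/operations/$($job.name)"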

Fantastic! Some results. It’s utterly disappointing of course as we only got a few words out of it, but still something (I guess!).

Step 04: Compare the results

Now, the following are the actual lyrics of the song:

du hast
du hast mich
du hast mich gefragt
du hast mich gefragt, und ich hab nichts gesagt

Willst du bis der Tod euch scheidet
treu ihr sein für alle Tage


Willst du bis zum Tod, der scheide
sie lieben auch in schlechten Tagen


and this is what I got back from Google:

du hast 
du hast recht 
du hast 
du hast mich 
du hast mich 
du du hast 
du hast mich

du hast mich belogen

du hast 
du hast mich blockiert

It missed most of the lyrics. Maybe it was headbanging too hard that it couldn’t catch those parts!

Test Case: Slow German Podcast

Since my idea of translating German industrial metal lyrics on the fly failed miserably, I decided to try with cleaner audio with no music. I found a nice-looking podcast called Slow German. The nice thing about it is that it provides transcripts as well, so I can compare the Speech API results with them.

I obtained a random episode from their site and followed the steps above.

The first 4 paragraphs of the actual transcript of the podcast are as follows (the full transcript can be found here):

Denk ich an Deutschland in der Nacht, dann bin ich um den Schlaf gebracht.“ Habt Ihr diesen Satz schon einmal gehört? Er wird immer dann zitiert, wenn es Probleme in Deutschland gibt. Der Satz stammt von Heinrich Heine. Er war einer der wichtigsten deutschen Dichter. Aber keine Angst: Auch wenn er am 13. Dezember 1797 geboren wurde, sind seine Texte sehr aktuell und relativ leicht zu lesen. Ihr werdet ihn mögen!

Harry Heine wuchs in einem jüdischen Haushalt auf. Er war 13 Jahre alt, als Napoleon in Düsseldorf einzog. Schon als Schüler begann er, Gedichte zu schreiben. Beruflich sollte er eigentlich im Bankgeschäft arbeiten, aber dafür hatte er kein Talent. Also versuchte er es erst mit einem eigenen Geschäft für Stoffe, das aber bald pleite war. Dann begann er zu studieren. Er probierte es mit Jura und mit Geschichte, besuchte verschiedene Vorlesungen.

Mit 25 Jahren veröffentlichte er erste Gedichte. Es war eine aufregende Zeit für ihn. Er wechselte die Städte und die Universitäten, er beendete sein Jura- Studium und wurde promoviert. Um seine Chancen als Anwalt zu verbessern, ließ er sich protestantisch taufen, er kehrte also dem Judentum den Rücken und wurde Christ. Daher auch der neue Name: Christian Johann Heinrich Heine. Später hat er die Taufe oft bereut.

Wenn Ihr Heines Werke lest werdet Ihr merken, dass sie etwas Besonderes sind. Sie sind oft kritisch, sehr oft aber auch ironisch und humorvoll. Er spielt mit der Sprache. Er kann aber auch sehr böse sein und herablassend über Menschen schreiben. Seine Kritik auch an politischen Ereignissen und die Zensur, mit der er in Deutschland leben musste, führten Heinrich Heine nach Paris. Er wanderte nach Frankreich aus.

And this is the result I got from Google (Trimmed to match the above):

denk ich an Deutschland in der Nacht dann bin ich um den Schlaf gebracht habt ihr diesen Satz schon einmal gehört er wird immer dann zitiert wenn es Probleme in Deutschland gibt der Satz stammt von Heinrich Heine er war einer der wichtigsten deutschen Dichter aber keine Angst auch wenn er am 13. Dezember 1797 geboren wurde sind seine Texte sehr aktuell und relativ leicht zu lesen ihr werdet ihn mögen Harry Heine wuchs in einem jüdischen Haushalt auf er war 13 Jahre alt als Nappo

hier in Düsseldorf einen Zoo schon als Schüler begann er Gedichte zu schreiben beruflich sollte er eigentlich im Bankgeschäft arbeiten aber dafür hatte er kein Talent also versuchte er es erst mit einem eigenen Geschäft für Stoffe das aber bald pleite war dann begann er zu studieren er probierte es mit Jura und mit Geschichte besuchte verschiedene Vorlesungen mit 25 Jahren veröffentlichte er erste Gedichte es war eine aufregende Zeit für ihn er wechselte die Städte und die Universitäten er beendete sein Jurastudium und wurde Promo

auch an politischen Ereignissen und die Zensur mit der er in Deutschland leben musste führten Heinrich Heine nach Paris er wanderte nach Frankreich aus 

Comparing the translations

Since I don’t speak German I cannot judge how well it did. Clearly it didn’t capture all the words but I wanted to see if what it returned makes any sense anyway. So I put both in Google Translate and this is how they compare:

Translation of the original transcript:

When I think of Germany at night, I'm about to go to sleep. "Have you ever heard that phrase before? He is always quoted when there are problems in Germany. The sentence is by Heinrich Heine. He was one of the most important German poets. But do not worry: even if he was born on December 13, 1797, his lyrics are very up to date and relatively easy to read. You will like him!

Harry Heine grew up in a Jewish household. He was 13 years old when Napoleon moved in Dusseldorf. Even as a student, he began writing poetry. Professionally, he was supposed to work in banking, but he had no talent for that. So he first tried his own business for fabrics, which was soon broke. Then he began to study. He tried law and history, attended various lectures.

At the age of 25 he published his first poems. It was an exciting time for him. He changed cities and universities, he completed his law studies and received his doctorate. To improve his chances as a lawyer, he was baptized Protestant, so he turned his back on Judaism and became a Christian. Hence the new name: Christian Johann Heinrich Heine. Later he often regretted baptism.

When you read Heine's works, you will find that they are special. They are often critical, but often also ironic and humorous. He plays with the language. But he can also be very angry and condescending to write about people. His criticism also of political events and the censorship with which he had to live in Germany led Heinrich Heine to Paris. He emigrated to France.	

Translation of Google’s results:

I think of Germany in the night then I'm about to sleep Did you ever hear this sentence He is always quoted when there are problems in Germany The sentence comes from Heinrich Heine He was one of the most important German poets but do not be afraid he was born on December 13, 1797 his lyrics are very up to date and relatively easy to read you will like him Harry Heine grew up in a Jewish household he was 13 years old as Nappo

Here in Dusseldorf a zoo as a student he began to write poetry professionally he should actually work in the banking business but for that he had no talent so he first tried his own business for fabrics but soon broke and then began to study he tried it with Jura and with history attended various lectures at age 25 he published his first poems it was an exciting time for him he changed the cities and the universities he finished his law studies and became promo

also in political events and the censorship with which he had to live in Germany led Heinrich Heine to Paris he emigrated to France

The translations of the podcast are very close, especially the first part. It missed some sentences, but when you read the API output you can at least get a general understanding of what the text is about. It’s not a good read maybe, and it’s not good if you’re interested in details, but it’s probably good enough.


Speech-to-text can be very useful backed by automated real-time translations. The Google Speech API supports real-time speech recognition as well, so it may be interesting to put the Translation API to use too and develop a tool for real-time translations, but that’s for another blog post.


git, devops, powershell, docker

Having lots of projects and assets stored on GitHub, I thought it might be a good idea to create periodical backups of my entire GitHub account (all repositories, both public and private). The beauty of it is that, since Git is open source, this way I can migrate my account anywhere and even host it on my own server on AWS.


With the above goal in mind, I started to outline what’s necessary to achieve this task:

  1. Automate calling the GitHub API to get all repos, including private ones. (Of course, one should be aware of the GitHub API rate limit, which is currently 5000 requests per hour. If you use up all your allowance with your scripts, you may not be able to use the API yourself. The good thing is they return how many calls you have left before you exceed your quota in the x-ratelimit-remaining HTTP header of every response; there’s also a dedicated endpoint for it, shown after this list.)
  2. Pull all the latest versions for all branches. Overwrite local versions in cases of conflict.
  3. Find a way to easily transfer a git repository (A compressed single file version rather than individual files) if transferring to another medium is required (such as an S3 bucket)
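For instance, the remaining allowance can be checked with a single call to the dedicated rate-limit endpoint, which itself doesn’t count against the quota (the $headers variable is the one built in the next section):

    $rateInfo = Invoke-RestMethod -Headers $headers -Uri "https://api.github.com/rate_limit"
    $rateInfo.rate.remaining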

With these challenges ahead, I first started looking into getting the repos from GitHub:

Consuming GitHub API via PowerShell

First, I shopped around for existing libraries for this task (such as PowerShellForGitHub by Microsoft, but it didn’t work for me. Basically I couldn’t even get the samples on their Wiki to work; it kept giving a cmdlet-not-found error, so I gave up.)

I found a nice video on Channel 9 about consuming REST APIs via PowerShell, which uses the GitHub API as a case study. It was perfect for me as my goal was to use the GitHub API anyway. And since this is a generic approach to consuming APIs, it can come in handy in the future as well. It’s quite easy using basic authentication.


The first step is to create a Personal Access Token with repo scope. (Make sure to copy the value before you close the page; there is no way to retrieve it afterwards.)

After the access token has been obtained, I had to generate the authorization header as shown in the Channel 9 video:

$token = "{USERNAME}:{PERSONAL ACCESS TOKEN}"
$base64Token = [System.Convert]::ToBase64String([char[]]$token)
$headers = @{
    Authorization = 'Basic {0}' -f $base64Token
}

$response = Invoke-RestMethod -Headers $headers -Uri "https://api.github.com/user/repos"

This way I was able to get the repositories including the private ones, but by default it returns 30 records per page, so I had to traverse the pages.

Handling pagination

GitHub sends the next and the last page URLs in the link header, which looks something like this:

<https://api.github.com/user/repos?page=2>; rel="next", <https://api.github.com/user/repos?page=4>; rel="last"

The challenge here is that the Invoke-RestMethod response doesn’t allow access to the headers, which is a huge bummer as there is useful info in them, as shown in the screenshot:

GitHub response headers in Postman

At this point, I wanted to use PSGitHub mentioned in the video, but as of this writing it doesn’t support getting all repositories. In fact, a note in it says “We need to figure out how to handle data pagination”, which made me think we are on the same page here (no pun intended!)

GitHub supports a page size parameter (e.g. per_page=50) but the documentation says the maximum value is 100. Although it is tempting to use that, as it would bring all my repos and leave some room for future ones as well, I wanted to go with a more permanent solution. So I decided to keep requesting pages as long as there are objects returned, like this:

$page = 1

do {
    $response = Invoke-RestMethod -Headers $headers -Uri "https://api.github.com/user/repos?page=$page"
    foreach ($obj in $response) {
        Write-Host ($obj.id)
    }
    $page = $page + 1
} while ($response.Count -gt 0)

Now in the foreach loop of course I have to do something with the repo information instead of just printing the id.

Cloning / pulling repositories

At this point I was able to get all my repositories. The GitHub API only handles account information, so now I needed to be able to run actual git commands to get my code.

First, I installed PowerShell on the Mac, which is quite simple, as specified in the documentation:

brew tap caskroom/cask
brew cask install powershell

With Git already installed on my machine, all that was left was using Git commands to clone or update each repo in the PowerShell terminal, such as:

git fetch --all
git reset --hard origin/master

Since this is just going to be a backup copy, I don’t want to deal with merge conflicts; I just want to overwrite everything local.

Another approach could be deleting the old repo and cloning it from scratch, but I think this would be a bit wasteful to do every time for each and every repository.

Putting it all together

Now that I have all the bits and pieces, all that’s left is to glue them together in a meaningful script that can be scheduled.
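Roughly, it boils down to something like this (the backup root is a placeholder and token handling is simplified; private repos need credentials for HTTPS clones, or ssh_url can be used instead):

    $token = "{USERNAME}:{PERSONAL ACCESS TOKEN}"
    $base64Token = [System.Convert]::ToBase64String([char[]]$token)
    $headers = @{ Authorization = 'Basic {0}' -f $base64Token }
    $backupRoot = "/backups/github"

    $page = 1
    do {
        $response = Invoke-RestMethod -Headers $headers -Uri "https://api.github.com/user/repos?page=$page"
        foreach ($repo in $response) {
            $localPath = Join-Path $backupRoot $repo.name
            if (Test-Path $localPath) {
                # Existing local copy: fetch everything and overwrite local state
                git -C $localPath fetch --all
                git -C $localPath reset --hard origin/master
            } else {
                # First run for this repo: clone it
                git clone $repo.clone_url $localPath
            }
        }
        $page = $page + 1
    } while ($response.Count -gt 0)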

Conclusion and Future Improvements

This version accomplishes the basic task of backing up an entire GitHub account, but it can be improved in a few ways. Maybe I can post a follow-up article including those improvements. A few ideas that come to mind are:

  • Get Gists (private and public) as well.
  • Add an option to exclude repos by name or by type (i.e. get only private ones or get all except repo123)
  • Add an option to export them to a “non-git” medium such as an S3 bucket using git bundle, which turns out to be a great tool to pack everything in a repository into a single file (see the example below)
  • Create a Docker image that contains all the necessary software (Git, PowerShell, backup script etc) so that it can be distributed without any setup requirements.
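As a taster for the git bundle idea above, packing a repository into a single file and restoring from it looks like this (file and directory names are illustrative):

    git bundle create dyndns53.bundle --all
    git clone -b master dyndns53.bundle dyndns53-restored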


aws, s3, devops, mysql, powershell

I have an application that uses a MySQL database. Because of cost concerns it’s running on an EC2 instance instead of RDS. As it’s not a managed environment, the burden of backing up my data falls on me. This is a small step-by-step guide that details how I’m backing up my MySQL database to AWS S3 with PowerShell.


  1. Create a bucket (e.g. “application-backups”) on AWS S3 using the AWS Management Console.
  2. Create a new IAM user (e.g “upload-backup-to-s3”)
  3. Create a new policy using the management console. The policy will only give enough permissions to put objects into a single S3 bucket:

    {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::xxxxxxxxxxxxx"]
        }, {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": ["arn:aws:s3:::xxxxxxxxxxxxx/*"]
        }]
    }

In the S3 bucket properties, copy the ARN and replace the x’s above with it.

  4. Customize the S3 bucket lifecycle settings to determine how long you want the old backups retained in your bucket. In my case I set them to expire after 21 days, which I think is a large enough window for relevant database backups. I probably wouldn’t restore anything older than 21 days anyway.

  5. [Optional - For email notifications] Create an SES user by clicking SES -> SMTP Settings -> Create My SMTP Credentials

Make sure you don’t make the mistake I made which was creating an IAM user with a policy that can send emails. In the SMTP settings page there’s a note right below the button:

Your SMTP user name and password are not the same as your AWS access key ID and secret access key. Do not attempt to use your AWS credentials to authenticate yourself against the SMTP endpoint.

When you click the button it creates an IAM user, but the credentials are not interchangeable: the SMTP password it gives you is 44 characters, whereas the IAM user I created had a secret key of 40 characters. Anyway, the bottom line is that in order to be able to send emails via SES, create the user as described above and all should be fine.


  1. Download and install AWS Tools for Windows PowerShell.

  2. Create a script as shown below. In a nutshell what the script does is:

a. Execute mysqldump command (Comes with MySQL Server)

b. Zip the backup file (which reduces the size significantly)

c. Upload the zip file to S3 bucket

d. Send a notification email using SES

e. Delete the local files

The full script is essentially the five steps above strung together.
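Here's a sketch of it (paths, database name, bucket and addresses are placeholders, and a scheduled task would load stored SMTP credentials instead of prompting with Get-Credential):

    $date = Get-Date -Format "yyyyMMdd-HHmmss"
    $backupFile = "C:\Backups\mydb-$date.sql"
    $zipFile = "$backupFile.zip"

    # a. Dump the database (mysqldump comes with MySQL Server)
    & "C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe" -u backup_user -p{PASSWORD} mydb --result-file=$backupFile

    # b. Zip the backup file
    Compress-Archive -Path $backupFile -DestinationPath $zipFile

    # c. Upload the zip file to the S3 bucket
    Write-S3Object -BucketName "application-backups" -File $zipFile

    # d. Send a notification email using the SES SMTP endpoint
    Send-MailMessage -SmtpServer "email-smtp.us-east-1.amazonaws.com" -Port 587 -UseSsl `
        -Credential (Get-Credential) -From "backup@example.com" -To "me@example.com" `
        -Subject "Database backup uploaded" -Body "Uploaded $zipFile to S3"

    # e. Delete the local files
    Remove-Item $backupFile, $zipFile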

  • For SQL Server databases there’s a PowerShell cmdlet called Backup-SqlDatabase, but for MySQL I think the most straightforward way is using mysqldump, which comes with MySQL Server.

  • For password-protected zip files, you can take a look at this article (I haven’t tried it myself)

Final step: Schedule the script by using Windows Task Scheduler

This is quite straightforward. Just create a task and schedule it for how often you want to back up your database.

In the actions section, enter “powershell” as “program/script” and the path of your PowerShell script as “argument” and that’s it.
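If you prefer the command line over the GUI, an equivalent task can be created with schtasks (task name, schedule and script path are illustrative):

    schtasks /create /tn "MySQL Backup" /sc daily /st 02:00 /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Backup-MySql.ps1"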


synology, music, streaming

I’ve been a Spotify customer for quite a long time, but recently realized that I wasn’t using it enough to justify 10 quid per month. Amazon made a great offer of a 4-month subscription for only £0.99 and I’m trying that out now, but the quality of the service hasn’t impressed me so far. Then it dawned on me: I already have lots of MP3s from my old archives, I have a fast internet connection and I have a Synology. Why not just build my own streaming service?

One device to rule them all: Synology

Every day I grow more fond of my Synology and regret all that time I didn’t utilize it fully.

For streaming audio, we need the server and client software. The server side comes with Synology: Audio Station

The Server

Using Synology Audio Station is a breeze. You simply connect to the Synology over the network and copy your albums into the music folder. Try to have cover art named “cover.jpg” so that your albums show up nicely in the user interface.

The Client

Synology has a suite of iOS applications which are available in the Apple App Store. The one I’m using for audio streaming is called DS Audio.

Using Synology’s Control Panel you can create a specific user for listening to music only. This way, even if that account is compromised, the attacker will only have read-only access to your music library.

Connecting to the server

There are two ways of connecting to your server:

  1. Dynamic DNS
  2. Quick Connect (QC)

Dynamic DNS is built-in functionality but you’d need a Synology account. Basically your Synology pings their server so that it can detect IP changes.

QC is the way I chose to go with. It’s a proprietary technology by Synology. The nice thing about QC is when you are connected to your local network it uses the internal IP so it doesn’t use mobile data. When you’re outside it uses the external IP and connects over the Internet.


  • You can download all the music you want from your own library without any limitations. There’s no limit set for manual downloads. For automatic downloads you can choose from no caching to caching everything or choose a fixed size from 250MB to 20GB.
  • When you’re offline you don’t need to log in. On the login form there’s a link to Downloaded Songs so you can skip logging in and go straight to your local cache.
  • You can pin your favourite albums to home screen.
  • Creating a playlist or adding songs to playlists is cumbersome (on iPhone at least):
    • Select a song and tap on … next to the song
    • Tap Add. This will add your song to the play queue.
    • Tap on Play button on top right corner.
    • Tap playlist icon on top right corner.
    • Tap the same icon again which is now on top left corner to go into edit mode
    • Now tap on the radio buttons on the left of the songs to select.
    • When done, tap on the icon on the bottom left corner. This will open the Add to Playlist screen (finally!)
    • Here you can choose an existing playlist or create a new one by clicking + icon.

Considering how easy this can be done on Spotify client this really needs to be improved.

  • In the library or Downloaded Songs sections, you can organise your music by Album, Artist, Composer, Genre and Folder. Of course in order for Artist/Composer/Genre classification to work you have to have your music properly tagged.
  • The client has a Radio feature which has built-in support for SHOUTcast


  • You can rate songs. There’s a built-in Top Rated playlist. By rating them you can play your favourite songs without needing them to be added to playlists which is a neat feature.


I think having full control over my own music is great and even though DS Audio client has some drawbacks it’s worth it as it’s completely free. Also you can just set it up as a secondary streaming service in addition to your favourite paid one just in case so that you have a backup solution.


dev

I have been a long-time Windows user. About 2 years ago I bought a MacBook but it never became my primary machine. Until now! Finally I decided to steer away from Windows and use the MacBook for development and day-to-day tasks.

Tipping Point

One morning I woke up and found out that Windows had restarted itself again, without asking me. At the time I had a ton of open windows and a VMware virtual machine running, but none of that stopped Windows. It just abruptly shut down the VM, which was very annoying, and this wasn’t even the first time it had happened. So I decided to migrate completely to Mac. To give myself a better understanding of what it took and what was missing, I decided to compile this post.


I thought it would be a painful process but it turned out to be quite straightforward. Here’s a comparison of some key applications I use:

Email: Mailbird vs. Mail

On Windows I used to use Mailbird as my email client. It allows managing multiple accounts and has a nice GUI and works fine. I was wondering if there would be an equivalent in Mac for that and how much it would cost me (I paid about £25 for Mailbird for a lifetime license but apparently it’s now free). I didn’t have to look far: The built-in Mail application does the job very well. Adding a new Google account is a breeze.

MarkdownPad 2 vs. MacDown

I like MarkdownPad 2 on Windows but it has its flaws: the live preview constantly crashes and it only allows 4 open files in the free version. On Mac, I’m using MacDown now, which has a beautiful interface and is completely free.

Git Extensions vs. SourceTree

I do like Git Extensions and it’s one of the programs I wish I had on Mac but SourceTree by Atlassian seems to do the job.

Storage: Google Drive and Synology

Both have web interfaces and Google Drive has desktop clients for both Mac and Windows so no issues in migrating there.


On Windows, I like Sumatra PDF which is very clean and bloatware-free. On Mac, there is no need to install anything. The default PDF viewer is perfect. It even handles PDF merge and editing operations.

Virtual Desktops

I love using virtual desktops on Mac. Switching desktops is so easy and intuitive with a three-finger swipe. Windows 10 has support for virtual desktops now, but switching is not as fluent, so using them never became a habit.

Visual Studio

Now this is the only application I cannot run on Mac. Microsoft has recently released Visual Studio for Mac, and they also have Visual Studio Code which is a nice code editor, but they are both stripped-down versions. I don’t know if .NET Core will take off, but currently I use the full-blown .NET Framework which only runs under Windows, so for development purposes I need to keep the Windows machine alive.

After the migration

I have absolutely no regrets about switching over. I love the MacBook: the keyboard is much better than my Asus’s and the OS is great. The Mac has 16GB of RAM but outperforms the Asus with 24GB (both have Core i7 processors and SSD drives).

Here are some more annoying things that used to bug me in the past about Windows:

  • Quite often I cannot delete a folder that used to have a video in it because the Thumbs.db file is still in use.
  • I couldn’t change settings to disable Thumbs.db completely because Windows 10 Home edition didn’t allow me to do that.
  • I couldn’t upgrade to Windows 10 Pro even though I had a license for Windows 8.1 Pro. Trying to resolve the licensing issue I found myself going in circles and nothing worked.

Mac cons

There are a few things that I don’t like about Mac or miss from Windows:

  • On Windows, quite often I need to create a blank text file, then double-click and edit it. In Finder, you can only create a new folder. Apparently some scripting is required to overcome this as shown in the resources section below.
  • iCloud seems to be forced on me. I don’t want to use it, I don’t want to see it, but I cannot get rid of it. Trying to disable it is just confusing. I’ve now moved everything to a different folder that it’s not watching by default and am trying to ignore it completely.
  • Moving windows from display to display is hard, especially in my case as I have a 15.4” laptop screen and two external monitors at 27” and 40”. Since the size difference between these is huge, dragging a large window from the 40” monitor to the 15.4” screen messes up because it doesn’t auto-resize and I cannot even get to the top of the window to resize it. But now that I’m using virtual desktops more frequently and using the 40” for multiple applications side by side, this is not as big of a problem these days.

Going back?

There’s a lot to learn on Mac but I don’t think I’ll be going back anytime soon. I’m looking into virtualizing the Windows machine now so that I can decommission the laptop. I already converted my old Windows desktop into a Linux server so would have no problem with using the laptop for other purposes.

Microsoft made flop after flop starting with Windows 8 and finally they lost another user but they don’t seem to care. If they did, they wouldn’t disrespectfully keep restarting my machine, killing all my applications and VMs!


dev

Nowadays many people use their phones as their primary web browsing device. As mobile usage is ubiquitous and increasing even more, testing the web applications on mobile platforms is becoming more important.

Chrome has a great emulator for mobile devices but sometimes it’s best to test your application on an actual phone.

If your application is the default application that you can access via the IP address, you’re fine. The problem is when you have multiple domains to test; at some point you’d need to enter the domain name in your phone’s browser.

Today I bumped into such an issue and my solution involved one of my favourite devices in my household: Synology DS214Play

Local DNS Server on Synology

Step 01: First, I installed DNS Server package by simply searching DNS and clicking install on Package Center.

Step 02: Then, I opened the DNS Server settings and created a new Master Zone. I simply entered the domain name of my site which is hosted on IIS on my development machine and the local network IP address of the Synology as the Master DNS Server.

Step 03: Next, I needed to point to the actual web server. In order to do that I created an A record with the IP address of the local server a.k.a. my development machine.
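Synology’s DNS Server package is based on BIND, so conceptually this step just adds a plain A record to the zone, something like this (domain and IP are illustrative):

    www.mysite.dev.    86400    IN    A    192.168.1.25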

Step 04: For all the domains that my DNS server didn’t know about (which is basically everything else!) I needed to forward the requests to “actual” DNS servers. In my case I use Google’s DNS servers so I entered those IPs as forwarders.

Step 05: At this point the Synology DNS server is pointing to the web server and the web server is hosting the website. All that’s left is pointing the client’s (phone or laptop) DNS setting to the local DNS server.

Step 06: Now that it’s all setup I could access to my development machine using a locally-defined domain name from my phone:


Another simple alternative on Windows laptops is to edit the hosts file under the C:\Windows\System32\drivers\etc folder, but when you have multiple clients in the network (e.g. MacBooks and phones) it’s simpler to point them all to the DNS server rather than editing each and every device. And also it’s more fun this way!
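For comparison, the hosts file approach boils down to a single line per client (IP and domain are illustrative):

    192.168.1.25    www.mysite.dev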


dev

I like playing around with PDFs especially when planning my week. I have my daily plans and often need to merge them into a single PDF to print easily. As I decided to migrate to Mac for daily use now I can merge them very easily from command line as I described in my TIL site here.

Mac’s PDF viewer is great as it also allows you to simply drag and drop a PDF into another one to merge them. Windows doesn’t have this kind of nicety, so I had to develop my own application to achieve it. I was planning to add more PDF operations, but since I’m not using it anymore I don’t think that will happen anytime soon, so I decided to open the source.

Very simple application anyway but hope it helps someone save some time.


It uses the iTextSharp NuGet package to handle the merge operation:

public class PdfMerger
{
    public string MergePdfs(List<string> sourceFileList, string outputFilePath)
    {
        using (var stream = new FileStream(outputFilePath, FileMode.Create))
        using (var pdfDoc = new Document())
        {
            var pdf = new PdfCopy(pdfDoc, stream);
            pdfDoc.Open();
            foreach (string file in sourceFileList)
                pdf.AddDocument(new PdfReader(file));
        }
        return outputFilePath;
    }
}

It also uses Fluent Command Line Parser, another of my favourite NuGet packages, to take care of the input parameters:

var parser = new FluentCommandLineParser<Settings>();
parser.Setup(arg => arg.RootFolder).As('d', "directory");
parser.Setup(arg => arg.FileList).As('f', "files");
parser.Setup(arg => arg.OutputPath).As('o', "output").Required();
parser.Setup(arg => arg.AllInFolder).As('a', "all");

var result = parser.Parse(args);
if (result.HasErrors)
{
    return;
}

var p = new Program();

The full source code can be found in the GitHub repository (link down below).


dev

Slack is a great messaging platform and it can integrate very easily with C# applications.

Step 01: Enable incoming webhooks

First go to Incoming Webhooks page and turn on the webhooks if it’s not already turned on.

Step 02. Create a new configuration

You can select an existing channel or user to post messages to. Or you can create a new channel. (May need a refresh for the new one to appear in the list)

Step 03. Install Slack.Webhooks Nuget package

In the package manager console, run

Install-Package Slack.Webhooks

Step 04. Write some code!

var url = "{Webhook URL created in Step 2}";

var slackClient = new SlackClient(url);

var slackMessage = new SlackMessage
{
    Channel = "#general",
    Text = "New message coming in!",
    IconEmoji = Emoji.CreditCard,
    Username = "any-name-would-do"
};

slackClient.Post(slackMessage);


That’s it! Very easy and painless integration to get real-time desktop notifications.

Some notes

  • Even though you choose a channel while creating the webhook, in my experience you can use the same one to post to different channels. You don’t need to create a new webhook for each channel.
  • Username can be any text basically. It doesn’t need to correspond to a Slack account.
  • The first time you send a message with a username, it uses the emoji you specify in the message. You can leave it null, in which case it uses the default. On subsequent posts, it uses the same emoji for that user even if you set a different one.


dev

HTTP/2 is a major update to the HTTP 1.x protocol and I decided to spare some time to have a general idea what it is all about:

Here are my findings:

  • It’s based on SPDY (a protocol developed by Google, currently deprecated)
  • It uses the same methods, status codes etc., so it is backwards-compatible; the main focus is on performance
  • The problem it addresses is HTTP 1.x requiring a TCP connection per request
  • Key differences:
    • It is binary rather than text.
    • It can use one connection for multiple requests
    • Allows servers to push responses to browser caches. This way the server can automatically start sending assets before the browser parses the HTML and sends a request for each of them (images, JavaScript, CSS etc.)
  • The protocol doesn’t have built-in encryption but currently Firefox, Internet Explorer, Safari, and Chrome agree that HTTPS is required.
  • There will be a negotiation process between the client and server to select which version to use
  • WireShark has support for it but Fiddler doesn’t.
  • As speed is the main focus, it’s especially important for CDNs to support it. In September 2016, AWS announced that they now support HTTP/2. For existing distributions it needs to be enabled explicitly by updating the settings.

    AWS CloudFront HTTP/2 Support

  • On the client side it looks like it’s been widely adopted and supported. The browser support tables also confirm that it’s only allowed over HTTPS on all browsers that support it.

    HTTP/2 Browser Support

What Does It Look Like on the Wire

As it’s binary I was curious, as a developer, to see what the actual bits looked like. Normally it’s easy to inspect HTTP requests/responses because it’s just text.

Apparently the easiest way to do it is WireShark. First, I had to enable TLS session key logging by creating a user environment variable (SSLKEYLOGFILE) in Windows:

Windows environment variable to capture TLS session keys

and pointing the WireShark to use that log (Edit -> Preferences -> Protocols -> SSL)

This is a very neat trick and it can be used to analyse all encrypted traffic so it serves a broader purpose. After restarting the browser and WireShark I was able to see the captured session keys and by starting a new capture with WireShark I could see the decrypted HTTP/2 traffic.

WireShark HTTP/2 capture

It’s hard to make sense of everything in the packets but I guess it’s a good start to be able to inspect the wire format of the new protocol.


ios, swift

I have a drawer full of gadgets that I bought at one point in time with hopes and dreams of magnificent projects and never even touched!

Some time ago I started a simple spreadsheet to help myself with impulse buys. The idea was that before I bought something I had to put it on that spreadsheet, and it had to wait at least 7 days before I allowed myself to buy it.

After 7 days strange things started to happen: In most cases I realised I had lost appetite for that shiny new thing that I once thought was a definite must-have!

I kept listing all the stuff, but it quickly became hard to manage with just a spreadsheet.

Sleep On It

The idea behind the app is to automate and “beautify” that process a little bit. It has one Shopping Cart in which the items have waiting periods.

It seemed wasteful to me doing nothing during the waiting period. After all it’s not just about dissuading myself from buying new items. I should use that time to make informed decisions about the stuff I’m planning to buy. That’s why I added the product comparison feature.

The shopping cart has a limited size. Otherwise you would be able to add anything the moment you think of it to game the system, so that its waiting period would start (well, at least that’s how my mind works!). If your cart is full you can still add items to the wish list and start reviewing products. It’s basically a backlog of items; this way at least you wouldn’t forget about that thing you saw in your favourite online marketplace. Once you clear up some space in your cart, either by waiting out items or deleting them permanently, you can transfer items from the wish list to the cart and officially kick off the waiting period.

I have a lot of ideas to improve it but you gotta release at some point and I think it has enough to get me started. Hope anyone else finds it useful too.

If you’re interested in the app please contact me. I might be able to hook you up with a promo code.


ios, swift

When I started learning Swift for iOS development I also started to compile some notes along the way. This post is the first instalment of my notes. First some basic concepts (in no particular order):


  • Swift supports REPL (Read Evaluate Print Loop) and you can write code and get feedback very quickly this way by using XCode Playground or command line.

As seen in the screenshot there is no need to explicitly print the values, they are automatically displayed on the right-hand side of the screen.

It can also be executed without specifying the interpreter by adding #!/usr/bin/swift at the top of the file.


Swift supports C-style comments like // for single-line comments and /* */ for multi-line comments.

The great thing about the multi-line comments is that you can nest them. For example the following is a valid comment:

/* This is a 
    /* valid multi-line */
    comment that is not available in C#
*/

Considering how many times Visual Studio punished me while trying to comment out a block of code that had multi-line comments in it, this feature looks fantastic!

It also supports doc comments (///) and supports markdown. It even supports emojis (Ctrl + Cmd + Space for the emoji keyboard)


Standard libraries are imported automatically but the main frameworks such as Foundation, UIKit need to be imported explicitly.

Swift 2 supports a new type of import which is preceded by @testable keyword.

@testable import CustomFramework

It allows access to non-public members of a module, so that you can use them from a unit test project. Before this, they all needed to be public in order to be testable.


The built-in string type is String. There is also NSString in the Foundation framework. They can sometimes be used interchangeably; for example you can assign a String to an NSString, but the opposite is not valid. You have to cast it explicitly to String first:

import Foundation

var string : String = "swiftString"
var nsString : NSString = "foundationString"

nsString = string // Works fine
string = nsString as String // Wouldn't work without the cast
  • startIndex is not an Int but an object. To get the next character:

      s[s.startIndex.successor()]

    To get the last character:

      s[s.endIndex.predecessor()]

    For a specific position:

      s[advance(s.startIndex, 1)]

let vs. var

Values created with the let keyword are immutable, so let is used to create constants. Variables are created with the var keyword. Trying to reassign a constant won’t compile:

let x1 = 7
x1 = 8 // won't compile

var x2 = 10
x2 = 11 // this works

The same principle applies to arrays:

let x3 = [1, 2, 3]
x3.append(4) // no go!

Type conversion

Types are inferred, so there is no need to declare them when declaring a variable. But there are no implicit conversions between numeric types; mixing them requires an explicit conversion:

let someInt = 10
let someDouble = 10.0
let x = someDouble + Double(someInt)

Structs and Classes

  • Structs are value types and a copy of the value is passed around. Classes are reference types.

  • Constructors are called initializers and they are special methods named init. Must specify an init method or default values when declaring the class.

class Person {
  var name: String = ""
  var age: Int = 0

  init (name: String, age: Int) {
    self.name = name
    self.age = age
  }
}

  • There is no new operator. So declaring a new object simply looks like this:

let p = Person(name: "John", age: 33)
  • The equivalent of destructor is deinit method. Only classes can have deinitializers.


Array: An ordered list of items

  • An empty array can be declared in a verbose way such as

      var n = Array<Int>()

    or with the shorthand notation

      var n = [Int]()
  • An array with items can be intialized with

      var n = [1, 2, 3]
  • Arrays can be concatenated with +=

      n += [4, 5, 6]
  • Items can be added with the append method

      n.append(7)
  • Items can be inserted to a specific index

      n.insert(8, atIndex: 3)
      print(n) // -> "[1, 2, 3, 8, 4, 5, 6, 7]"
  • Items can be deleted by removeAtIndex

      n.removeAtIndex(6)
      print(n) // -> "[1, 2, 3, 8, 4, 5, 7]"
  • Items can be accessed by their index

      let aNumber = n[2]
  • A range of items can be replaced at once

      var n = [1 ,2, 3, 4]
      n[1...2] = [5, 6, 7]
      print(n) // prints [1, 5, 6, 7, 4]"
  • 2-dimensional arrays can be declared as elements as arrays and multiple subscripts can be used to access sub items

      var n = [ [1, 2, 3], [4, 5, 6] ]
      n[0][1] // value 2

Dictionary: A collection of key-value pairs

  • Can be initialized without items

      var dict = [String:Int]()

    or with items

      var dict = ["key1": 5, "key2": 3, "key3": 4]
  • To add items, assign a value to a key using subscript syntax

      dict["key4"] = 666
  • To remove an item, assign nil

      dict["key2"] = nil
      print(dict) // prints ["key1": 5, "key4": 666, "key3": 4]"
  • To update a value, subscript can be used as adding the item or updateValue method can be called.

    updateValue returns an optional. If it didn’t update anything the optional has nil in it. So it can be used to check the value was actually updated or not.

      var result = dict.updateValue(45, forKey: "key2")
      if let r = result {
          print (dict["key2"])
      } else {
          print ("could not update") // --> This line would be printed
      }

    The interesting behaviour is that if it can’t update it, it will add the new value.

      var dict = ["key1":5, "key2":3, "key3":4]
      var result = dict.updateValue(45, forKey: "key4")
      if let r = result {
          print (dict["key4"])
      } else {
          print ("could not update")
      }
      print(dict) // prints "["key1": 5, "key4": 45, "key2": 3, "key3": 4]"
                  // key4 has been added after calling updateValue

    After a successful update it would return the old value

      result = dict.updateValue(45, forKey: "key1")
      if let r = result {
          print (r) // --> This would run and print "5"
      } else {
          print ("could not update")
      }

    This is consistent with the unsuccessful update returning nil. It always returns the former value.

  • To get a value subscript syntax is used

      var i = dict["key1"] // 45

Set: An unordered list of distinct values

  • Initialization notation is similar to the others

      var emo : Set<Character> = [ "😡", "😎", "😬" ]
  • If duplicate items are added it doesn’t throw an error but prunes the list automatically

      var emo : Set<Character> = [ "😡", "😎", "😬", "😬" ]
      emo.count // prints 3
  • New items can be added with the insert method

      var emo : Set<Character> = [ "😡", "😎", "😬", "😬" ]
      emo.insert("😱")
      emo.insert("🤔")
      print(emo) // prints "["😱", "😎", "🤔", "😬", "😡"]"

    There is no atIndex parameter like arrays have, and the index is unpredictable as shown above

Among the three, only arrays are ordered and can have repeated values.


  • Semi-colons are not required at the end of each line

  • Supports string interpolation

  • Swift uses automatic reference counting (ARC); there is no garbage collector.

  • Curly braces are required even if there is only one statement inside the body. For instance the following block wouldn’t compile:

      let x = 10
      if x == 10
          print("x is 10")
  • The println function has been renamed to print, which adds a new line to the end automatically. This behaviour can be overridden by explicitly specifying the appendNewline parameter

      print ("Hello, world without a new line", appendNewline: false)
  • #available can be used to check compatibility

      if #available(iOS 9, *) {
          // use NSDataAsset
      } else {
          // Panic!
      }
  • Range can be checked with … and ~= operators. For example:

      let x = 10
      if 1...100 ~= x {
          print("x is in range")
      }
    The variable is on the right in this expression. It wouldn’t compile the other way around.

  • There is a Range object that can be used to define, well, ranges!

      var ageRange = 18...45
      print(ageRange) // prints "18..<46"
      print(ageRange.count) // prints "28"

    The other range operator is ..< which doesn’t include the end value

      var ageRange = 18..<45
      ageRange.contains(45) // prints "false"


personal, leisure, travel

When April 6th, 2016 marked my 5th anniversary in the UK I thought I should do something special.

I don’t know if you have seen the movie The World’s End, but I liked it a lot. Inspired by the Golden Mile concept I saw in that movie, my decision was to have 5 pints in 5 pubs in the neighbourhood: 1 pint for each year. It may sound unsustainable in the long run but I’ll cross that bridge when I come to it. Without further ado, here are the pubs I picked for my first annual celebration:

…and the winners are

The Dacre Arms

This one is definitely my favourite pub in the area. It has a very cosy and warm environment. It’s not on the main road so almost felt like discovering a hidden gem when I first noticed it. In the past it served as a nice harbour to get away from my loud and obnoxious neighbour and collect my thoughts and maintain my sanity.

Duke of Edinburgh

I don’t watch football games in pubs a lot but when I do this is my go-to pub. Nice big screen TVs all around. The last game I saw didn’t bring much joy though.

I remember Arsenal smashing Fenerbahce 3-0 in 2013 without even breaking a sweat. Part of the reason why I don’t watch football in pubs: no need for public humiliation!

The Old Tiger’s Head

The way I remember this one, it was quite spacious with a pool table and a lot of tables to sit at. I used to come here a lot to work on my blog. Although I can’t tell the difference, I’ve been informed that this was an Irish pub. I stopped by on one St. Paddy’s Day a few years ago and there was quite a colourful celebration going on. I guess that’s one way of identifying whether a pub is Irish or English!

This visit was a bit disappointing though as the whole layout has changed. Pool table was gone and my favourite booth was removed. Oddly enough there were a few bookshelves and there was even a family with two toddlers inside. It felt more like a cafe than a pub.

The Swan / The Rambles

My 4th stop was The Rambles. Actually it used to be a pub called The Swan, which was the first pub I ever visited in the UK. It was run by two kind ladies. When you move to a different country and try to settle in, there are a lot of challenges you need to tackle and it can be overwhelming at first. The Swan was a nice refuge for me in those times to wind down and relax.

Unfortunately it closed down a few years ago and now there is a bar / comedy club named The Rambles. The club doesn’t mean much to me except that I’ve been there once before to see a comedy show, but in remembrance of The Swan I went there anyway. As it’s a bar now, they open and close much later than a pub, so I was the first customer there! It might be a good place to work in quiet and have a cold one if I’m looking to change venues.

Princess of Wales

I love Blackheath! It’s on a hill and a bit windy but it has a nice little lake and a great view.

I remember the Bonfire Night a few years ago which had a great fireworks show. Princess of Wales is a nice pub by the lake with a great view and I enjoyed my pint there quite a bit.

Same time, next year!

I don’t know if I should try the same path and add a 6th pub to the end of the chain, start with a brand new set, or whether I’ll even be in the mood to do a pub crawl, but I’ll decide that later. After all it’s just one of those nice problems to have!

development, aws, route53, angularjs comments edit

Previously in this series:

So far I was using the console client but I thought I could use a prettier web-based UI and came up with this:

DomChk53 AngularJS Client

It’s using AngularJS and Bootstrap which significantly improved the development process.

The API in the backend is AWS API Gateway on a custom domain, using Lambda functions to do the actual work. One great thing about API Gateway is that it’s very easy to set request rates:

Currently I set it to a maximum of 5 requests per second. I chose this value because of the limitation on the AWS API, as stated here:

All requests – Five requests per second per AWS account. If you submit more than five requests per second, Amazon Route 53 returns an HTTP 400 error (Bad request). The response header also includes a Code element with a value of Throttling and a Message element with a value of Rate exceeded.

Of course limiting this on the client side assumes a single client, so you may still get “Rate exceeded” errors even if you run a single query at a time. I’m planning to implement a Node server using SQS to move the queue to the server side but that’s not one of my priorities right now.

The Lambda function is straightforward enough. Just calls the checkDomainAvailability API method with the supplied parameters:

exports.handler = function(event, context) {
    var AWS = require('aws-sdk');
    var options = {
        region: "us-east-1"
    };
    var route53domains = new AWS.Route53Domains(options);
    var params = {
        DomainName: event.domain + '.' + event.tld
    };

    route53domains.checkDomainAvailability(params, function (err, data) {
        if (err) {
            // hand the error back to the caller
            context.fail(err);
        } else {
            var result = {
                Domain: event.domain,
                Tld: event.tld,
                CheckDate: new Date().toISOString(),
                RequestResult: "OK",
                Availability: data.Availability
            };
            // hand the result back to the caller
            context.succeed(result);
        }
    });
};

I wanted this tool to be an improvement over what AWS already provides. What you can do with the Management Console is search a single domain, and it is checked against the 13 most popular TLDs. If you need anything outside these 13 you have to pick them manually.

In DomChk53 you can search multiple domain names at once against all supported TLDs (293 as of this writing).

Also you can group TLDs into lists so you can, for example, search the most common ones (com, net, etc.) or, say, finance-related ones (money, cash, finance, etc.). Depending on the domain name, one group may be more relevant than another.

You can cancel a query at any time to avoid wasting precious requests if you change your mind about the domain.

What’s missing

For a while I’m planning to leave it as is, but when I have it in me to revisit the project I will implement:

  • Server-side queueing of requests
  • The option to export/email the results in PDF format

I’m also open to other suggestions…


aws, route53, angularjs, development comments edit

I’ve been using my own dynamic DNS application (which I named DynDns53 and blogged about here). So far it had a WPF application and I was happy with it, but I thought if I could develop a web-based application I wouldn’t have to install anything (which is what I’m shooting for these days) and achieve the same results.

So I built a JavaScript client with AngularJS framework. The idea is exactly the same, the only difference is it’s all happening inside the browser.

DynDns53 web client


To have a dynamic DNS client you need the following:

  1. A way to get your external IP address
  2. A way to update your DNS record
  3. An application that performs Step 1 & 2 perpetually

Step 1: Getting the IP Address

I have done and blogged about this several times now. (Feels like I’m repeating myself a bit, I guess I have to find something original to work with. But first I have to finish this project and have closure!)

Since it’s a simple GET request it sounds easy but I quickly hit the CORS wall when I tried the following bit:

app.factory('ExternalIP', function ($http) {
    // The original external "what is my IP" URL is omitted here; any public
    // endpoint that echoes the caller's IP (e.g. checkip.amazonaws.com) would do
    return $http.get('https://checkip.amazonaws.com', { cache: false });
});

In my WPF client I can call whatever service I want whenever I want but when running inside the browser things are a bit different. So I decided to take a detour and create my own service that allowed cross-origin resource sharing.

AWS Lambda & API Gateway

First I thought I could do it even without a Lambda function by using the HTTP proxy integration, so that I could simply return what the external site returns:

Unfortunately this didn’t work because it was returning the IP of the AWS machine that’s actually running the API gateway. So I had to get the client’s IP from the request and send it back in my own Lambda function.

Turns out in order to get HTTP headers you need to fiddle with some template mapping and assign the client’s IP address to a variable:
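
API Gateway exposes the caller’s address as $context.identity.sourceIp, so a minimal body mapping template on the integration request (assuming the application/json content type) looks something like this:

    {
        "ip": "$context.identity.sourceIp"
    }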

This can be later referred to in the Lambda function through event parameter:

exports.handler = function(event, context) {
    // event.ip is populated by the mapping template above
    context.succeed({
        "ip": event.ip
    });
};

And now that we have our own service we can allow CORS and are able to call it from our client inside the browser:
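
Enabling CORS in API Gateway essentially boils down to the API returning the right response headers to the browser; in the simplest case something like:

    Access-Control-Allow-Origin: *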

Step 2: Updating DNS

This bit is very similar to the WPF version. Instead of using the AWS .NET SDK I just used the JavaScript SDK. AWS has a great SDK builder which lets you select the pieces you need:

It also shows whether the service supports CORS. It’s a relief that Route53 does, so we can keep going.

The whole source code is on GitHub but here’s the gist of it: Loop through all the subdomains, get all the resource records in the zone, find the matching record and update it with the new IP:

  $scope.updateAllDomains = function() {
      // $scope.domains holds the {name, zoneId} entries the user configured
      angular.forEach($scope.domains, function(value, key) {
          $scope.updateDomainInfo(, value.zoneId);
      });
  };

  $scope.updateDomainInfo = function(domainName, zoneId) {
    var options = {
      'accessKeyId': $rootScope.accessKey,
      'secretAccessKey': $rootScope.secretKey
    };
    var route53 = new AWS.Route53(options);
    var params = {
      HostedZoneId: zoneId
    };

    route53.listResourceRecordSets(params, function(err, data) {
        if (err) {
          $rootScope.$emit('rootScope:log', err.message);
        } else {
          angular.forEach(data.ResourceRecordSets, function(value, key) {
              if (value.Name.slice(0, -1) == domainName) {
                  // ExternalIP is the $http-based factory defined earlier
                  ExternalIP.then(function(response) {
                      var externalIPAddress =;
                      $scope.changeIP(domainName, zoneId, externalIPAddress);
                  });
              }
          });
        }
    });
  };

  $scope.changeIP = function(domainName, zoneId, newIPAddress) {
    var options = {
      'accessKeyId': $rootScope.accessKey,
      'secretAccessKey': $rootScope.secretKey
    };

    var route53 = new AWS.Route53(options);
    var params = {
      ChangeBatch: {
        Changes: [
          {
            Action: 'UPSERT',
            ResourceRecordSet: {
              Name: domainName,
              Type: 'A',
              TTL: 300,
              ResourceRecords: [
                { Value: newIPAddress }
              ]
            }
          }
        ]
      },
      HostedZoneId: zoneId
    };

    route53.changeResourceRecordSets(params, function(err, data) {
      if (err) {
        $rootScope.$emit('rootScope:log', err.message);
      } else {
        var logMessage = "Updated domain: " + domainName + " ZoneID: " + zoneId + " with IP Address: " + newIPAddress;
        $rootScope.$emit('rootScope:log', logMessage);
      }
    });
  };
The only part that tripped me up was that I wasn’t setting the TTL in the changeResourceRecordSets parameters and I was getting an error, but I found a StackOverflow question that helped me get past the issue.

Step 3: A tool to bind them

Now the fun part: an AngularJS client to call these services. The UI is straightforward; basically it just requires the user to enter AWS IAM keys and the domains to update.

I didn’t want to deal with the hassle of sending the keys to a remote server and host them securely. Instead I thought it would be simpler just to use browser’s local storage with HTML5. This way the keys never leave the browser.
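
A minimal sketch of the idea (the storage key names here are made up for illustration):

    // Persist the IAM keys in the browser only; they never leave the machine
    localStorage.setItem('dyndns53.accessKey', $rootScope.accessKey);
    localStorage.setItem('dyndns53.secretKey', $rootScope.secretKey);

    // Restore them on the next visit
    $rootScope.accessKey = localStorage.getItem('dyndns53.accessKey');
    $rootScope.secretKey = localStorage.getItem('dyndns53.secretKey');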

It also only updates the IP address if it has changed, which saves unnecessary API calls.
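
The check-and-update loop can be as simple as an $interval that compares the current external IP with the last one seen. A sketch, where updateAllDomains is the function shown earlier and lastKnownIP is a hypothetical variable:

    var lastKnownIP = null;

    $interval(function () {
        ExternalIP.then(function (response) {
            if ( !== lastKnownIP) {
                lastKnownIP =;
                $scope.updateAllDomains();
            }
        });
    }, 5 * 60 * 1000); // check every 5 minutes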

Also it’s possible to view what’s going on in the event log area.

I guess I can have my closure now and move on!


ios, swift comments edit

I’ve been working on iOS development with Swift for some time and finally I managed to publish my first app on the Apple iOS app store: NoteMap.

NoteMap on iTunes

It’s a simple app that allows you to take notes on the map. You can attach photos to a note as well as text. I thought this might be helpful if you take ad hoc pictures and then forget when and why you took them.

The main challenge was working with the location manager, as you need the user’s permission to get their location. You have to take into account all combinations of permissions as they may be changed in the phone’s settings later on.

I have a long list of features to add but wanted to keep this one simple enough just to do the bare essentials. As I hadn’t submitted an app before I wasn’t even sure it would make it to the store. But after waiting 6 days for the app to be reviewed, now it’s out there, which is a huge relief and motivation. I’ll make sure to keep at it and add more features to NoteMap and submit more!


aws, ssl, aws certificate manager, acm comments edit

Paying a ton of money for a digital certificate, which costs nothing to generate, has always bugged me. Fortunately it isn’t just me, and recently I heard about Let’s Encrypt in this blog post.

I was just planning to give it a go but I noticed a new service on AWS Management Console:

Apparently AWS is now issuing free SSL certificates, which was too tempting to pass on so I decided to dive in.

Enter AWS Certificate Manager

Requesting a certificate just takes seconds as it’s a 3-step process:

First, enter the list of domains you want the certificates for:

Wildcard SSL certificates don’t cover the zone apex (*.example.com doesn’t cover example.com itself) so I had to enter both. (Hey, it’s free so no complaints here!)

Then review, confirm, and the request is made:

A verification email is sent to the email addresses listed in the confirmation step.

At this point I could define MX records and use Google Apps to create a new user and receive the verification email. The problem is I don’t want all this hassle and certainly don’t need another email account to monitor.

SES to the rescue

I always considered SES as a simple SMTP service to send emails but while dabbling with alternatives I realized that now we can receive emails too!

To receive emails you need to verify your domain first. Also an MX record pointing to the AWS SMTP server must be added. Fortunately, since everything here is AWS, it can be done automatically using Route53:
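
For reference, the record SES asks for looks roughly like this (the region is assumed to be us-east-1 here; yours may differ):

    example.com.    MX    10 inbound-smtp.us-east-1.amazonaws.com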

After this we can move on; we’ll receive a confirmation email once the domain has been verified:

In the next step we decide what to do with the incoming mails. We can bounce them, call a Lambda function, create an SNS notification etc. These all sound fun to experiment with, but in this case I’ll opt for simplicity and just drop them into an S3 bucket.

The great thing is I can even assign a prefix, so I can use a single bucket to collect emails from a bunch of different addresses, all separated into their own folders.

In step 3, we can specify more options. Another pleasant surprise was to see spam and virus protection:

After reviewing everything and confirming, we are ready to receive emails in our bucket. In fact, the nice folks at AWS are so considerate that they even sent us a test email already:

Back to certificates

OK, after a short detour we are back to getting our SSL certificate. As I didn’t have my mailbox set up during the validation step, I had to go to the Actions menu and select Resend validation email.

And after requesting it I immediately received the email containing a link to verify ownership of the domain.

After the approval process we get ourselves a nice free wildcard SSL certificate:

Test drive

To leverage the new certificate we need to use CloudFront to create a distribution. Here again we benefit from the integrated services. The certificate we have been issued can be selected from the dropdown list:

So after entering simple basics like the domain name and default page I created the distribution and pointed the Route53 records to this distribution instead of the S3 bucket.

And finally, after waiting (quite a bit) for the CloudFront distribution to be deployed, we can see that little green padlock we’ve been looking forward to!

UPDATE 1 [03/03/2016]

Yesterday I was so excited about discovering this that I didn’t look any further into things like downloading the certificate and using it on my own servers.

Today, unfortunately, I realized that its usability is quite limited: it only works with AWS Elastic Load Balancer and CloudFront. I was hoping to use it with API Gateway, but even though that’s another AWS service it’s not integrated with ACM yet.

I do hope they make the certificate bits available so we can have full control over them and deploy them wherever we want. So I guess Let’s Encrypt is a better option for now considering this limitation.


c#, development, gadget, fitbit, aria comments edit
"What gets measured, gets managed." - Peter Drucker

It’s important to have goals, especially SMART goals. The “M” in S.M.A.R.T. stands for Measurable: having enough data about a process helps tremendously in improving it. To that end, I started to collect exercise data from my Microsoft Band, which I blogged about here.

Weight tracking is also crucial for me. I used to record my weight manually on a piece of paper but, for the obvious reasons, I abandoned it quickly and decided to give Fitbit Aria a shot.

Fitbit Aria Wi-Fi Smart Scale

Aria is basically a scale that can connect to your Wi-Fi network and send your weight readings to Fitbit automatically, which can then be viewed via the Fitbit web application.


Since it doesn’t have a keyboard or any other way to interact with it directly, setup is carried out by running a program on your computer.

It’s mostly just following the steps on the setup tool. You basically let it connect to your Wi-Fi network so that it can synchronize with Fitbit servers.

Putting the scale into setup mode proved to be tricky in the past though. Also it was not easy to change the Wi-Fi network, so I had to reset the scale back to factory settings and run the setup tool again.

Getting the data via API

Here comes the fun part! Similar to my MS Band workout demo, I developed a WPF program to get my data from Fitbit’s API. Ultimately the goal is to combine all this data in one application and make sense of it.

Like the MS Health API, Fitbit uses OAuth 2.0 authorization and requires a registered application.

The endpoint that returns weight data accepts a few different formats depending on your needs. As I wanted a range instead of a single day I used the following format: https://api.fitbit.com/1/user/{user ID}/body/log/weight/date/{startDate}/{endDate}.json

This call returns an array of the following JSON objects:

	"bmi": xx.xx,
	"date": "yyyy-mm-dd",
	"fat": xx.xxxxxxxxxxxxxxx,
	"logId": xxxxxxxxxxxxx,
	"source": "Aria",
	"time": "hh:mm:ss",
	"weight": xx.xx

Sample application

The bulk of the application is very similar to the MS Band sample: it first opens an authorization window and, once the user consents to the app being granted some privileges, it uses the access token to retrieve the actual data.

There are a few minor differences though:

  • Unlike the MS Health API, it requires an Authorization header in the authorization code request calls, which is basically the Base64-encoded client ID and client secret:
string base64String = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{Settings.Default.ClientID}:{Settings.Default.ClientSecret}"));
request.AddHeader("Authorization", $"Basic {base64String}");
  • It requires a POST request to the redeem URL. Apparently RestSharp has a weird behaviour here: you’d think a method called AddBody could be used to send the request body, right? Not quite! It doesn’t transmit the header, so I kept getting a missing field error. Instead I used AddParameter:
string requestBody = $"client_id={Settings.Default.ClientID}&grant_type=authorization_code&redirect_uri={_redirectUri}&code={code}";
request.AddParameter("application/x-www-form-urlencoded", requestBody, ParameterType.RequestBody);

I found a lot of SO questions and a hilarious blog post addressing the issue. It’s good to know I wasn’t alone in this!

The rest is very straightforward. Make the request, parse JSON and assign the list to the chart:

public void GetWeightData()
{
    var endDate = DateTime.Today.ToString("yyyy-MM-dd");
    var startDate = DateTime.Today.AddDays(-30).ToString("yyyy-MM-dd");

    // "-" in the URL refers to the currently authorized user
    var url = $"https://api.fitbit.com/1/user/-/body/log/weight/date/{startDate}/{endDate}.json";
    var response = SendRequest(url); // SendRequest wraps the RestSharp call
    ParseWeightData(response.Content);
}

public void ParseWeightData(string rawContent)
{
    var weightJsonArray = JObject.Parse(rawContent)["weight"].ToArray();
    foreach (var weightJson in weightJsonArray)
    {
        var weight = new FitbitWeightResult();
        weight.Weight = weightJson["weight"]?.Value<decimal>() ?? 0;
        weight.Date = weightJson["date"].Value<DateTime>();
        WeightResults.Add(weight); // hypothetical collection bound to the chart
    }
}

And the output is:


So far I’ve managed to collect walking data from MS Band and weight data from Fitbit Aria. In this demo I limited the scope to weight data only, but the Fitbit API can also be used to track sleep, exercise and nutrition.

I currently use My Fitness Pal to log what I eat. They too have an API, but even though I requested a key twice they haven’t given me one yet! The good news is Fitbit has one, and I can get my MFP logs through the Fitbit API. I also log my sleep on Fitbit manually, so the next step is to combine all these in one application to have a nice overview.


c#, development, gadget, band comments edit

I bought this about 6 months ago and in this post I’ll talk about my experiences so far. They released version 2 of it last November so I thought I should write about it before it gets terribly outdated!

Choosing the correct size

It comes in 3 sizes: Small, Medium and Large, and finding the correct size is the first challenge. They seem to have improved the sizing guide for version 2; in the original one they didn’t mention the appropriate size for your wrist’s circumference.

To work around that I followed someone’s advice on a forum regarding the circumferences and downloaded a printable ruler to measure mine. It was at the border of medium and large and I decided to go with medium, but even at the largest setting it’s not comfortable and irritates my skin. Most of the time I have to wear it on top of a large plaster.

Wearing notes

I hope they fixed it in v2, but the first generation Band is quite bulky and uncomfortable. To be honest, most of the time I just kept wearing it because I had spent £170 and hadn’t come to terms with having made a terrible investment. I wear it when I’m walking, but as soon as I arrive at home or work I take it off because it’s almost impossible to type with it on.

Band in action

If you only want the fitness data you can use it without pairing it with your phone, but pairing is helpful as you can read your texts on it, see emails and answer calls.

I also installed the Microsoft Health app and started using Microsoft Health dashboard:


As soon as I started using it I noticed a discrepancy with the step count on the Microsoft Health dashboard. Turns out by default it was using the phone’s motion tracker as well, so it was doubling my steps. After I turned that off I started getting the exact same results as on the Band.

Turn off motion tracking to get accurate results

Developing with Band and Cloud API

Recording data about something helps tremendously to make it manageable. That’s why I like using these health & fitness gadgets. But of course it doesn’t mean much if you don’t make sense of that data.

In my sample application I used the Microsoft Health Cloud API to get the Band’s data. In order for this to work, the Band needs to sync with the Microsoft Health app on my phone, and the app syncs with my MS account.

The API has a great guide here that can be downloaded as a PDF. It outlines all the necessary steps very clearly and in detail.

Long story short, firstly you need to go to Microsoft Account Developer Center and register an application. This will give you a client ID and client secret that will be used for OAuth 2.0 authentication.

The API uses OAuth 2.0 authentication. After the token has been acquired, using the actual API is quite simple; in my example app I used the /Summaries endpoint to get the daily step counts.


The sample application is a simple WPF desktop application. Upon launch it checks if the user has an access token stored; if not, it shows the OAuth window and the user needs to log in to their account.

To let the user login to their Microsoft account I added a web browser control to a window and navigated to authorization page:

string authUri = $"{baseUrl}/oauth20_authorize.srf?client_id={Settings.Default.ClientID}&scope={_scope}&response_type=code&redirect_uri={_redirectUri}";

Once the authorization is complete, the web browser is redirected to the specified redirect URI with a query parameter named code. This is not the actual token we need. Now we need to go to another URL (oauth20_token.srf) with this code and the client secret as parameters and redeem the actual access token:

private void webBrowser_Navigated(object sender, System.Windows.Navigation.NavigationEventArgs e)
{
    if (e.Uri.Query.Contains("code=") && e.Uri.Query.Contains("lc="))
    {
        string code = e.Uri.Query.Substring(1).Split('&')[0].Split('=')[1];

        string authUriRedeem = $"/oauth20_token.srf?client_id={Settings.Default.ClientID}&redirect_uri={_redirectUri}&client_secret={Settings.Default.ClientSecret}&code={code}&grant_type=authorization_code";

        var client = new RestClient(baseUrl);
        var request = new RestRequest(authUriRedeem, Method.GET);
        var response = (RestResponse)client.Execute(request);
        var content = response.Content;

        // Parse content and get the access token
        Settings.Default.AccessToken = JObject.Parse(content)["access_token"].Value<string>();
    }
}


After we get the authorization out of the way we can actually call the API and get some results. It’s a simple GET call to the Summaries endpoint and the response JSON is pretty straightforward. The only thing to keep in mind is to add the access token to the Authorization header:

request.AddHeader("Authorization", $"bearer {Settings.Default.AccessToken}");

Here’s a sample output for a daily summary:

	"userId": "67491ecc-c408-47b6-a3ad-041edb410524",
	"startTime": "2016-01-18T00:00:00.000+00:00",
	"endTime": "2016-01-19T00:00:00.000+00:00",
	"parentDay": "2016-01-18T00:00:00.000+00:00",
	"isTransitDay": false,
	"period": "Daily",
	"duration": "P1D",
	"stepsTaken": 2784,
	"caloriesBurnedSummary": {
		"period": "Daily",
		"totalCalories": 1119
	"heartRateSummary": {
		"period": "Daily",
		"averageHeartRate": 77,
		"peakHeartRate": 88,
		"lowestHeartRate": 68
	"distanceSummary": {
		"period": "Daily",
		"totalDistance": 232468,
		"totalDistanceOnFoot": 232468

Since we now have the data, we can visualize it:

If you want to play with the sample code, don’t forget to register an app and update the settings with your own client ID and secret.


I guess the most fun would be to develop something that actually runs on the device. My next goal with my Band is to develop a custom tile using its SDK. I hope I can finish it while a first-gen device is still fairly relevant.


aws, s3, ec2, eip comments edit

I’ve been using AWS for a few years now and over the years I’ve noticed there are some questions that keep popping up. I was confused by these issues at first, and as they seem to trip everybody up at some point I decided to compile a small list of common gotchas. I’ll update this post or write another one if I come across more of these.

1. The S3 folder delusion

When you use the AWS console you can create folders to group objects, but this is just a delusion deliberately created by AWS to simplify usage. In reality, S3 has a flat structure and all the objects are on the same level. Here’s the excerpt from the AWS documentation that states this fact:

In Amazon S3, buckets and objects are the primary resources, where objects are stored in buckets. Amazon S3 has a flat structure with no hierarchy like you would see in a typical file system. However, for the sake of organizational simplicity, the Amazon S3 console supports the folder concept as a means of grouping objects. Amazon S3 does this by using key name prefixes for objects.

So essentially AWS is just smart enough to recognize the standard folder notation we’ve been using for ages to make things easier for us.
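
A quick way to see this in action is listing a bucket with the JavaScript SDK: a “folder” is nothing more than a Prefix/Delimiter pair in the request (the bucket name below is made up):

    var AWS = require('aws-sdk');
    var s3 = new AWS.S3();

    // "photos/2016/" is not a real directory; it's just how the object keys start
    var params = {
        Bucket: 'my-example-bucket', // hypothetical bucket
        Prefix: 'photos/2016/',
        Delimiter: '/'
    };

    s3.listObjects(params, function (err, data) {
        if (err) { return console.error(err); }
        // Contents: the objects whose keys start with the prefix
        // CommonPrefixes: the "subfolders" the console would display
        console.log(data.Contents.map(function (o) { return o.Key; }));
        console.log(data.CommonPrefixes);
    });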

2. Reserved instance confusion

Reserved instances cost less but require some resource planning and paying some money up-front. Although there is now an option to buy reserved instances with no upfront payment, they generally shine on long-term commitments with heavy usage (always-on machines). The confusing bit is that you don’t reserve actual instances. Unfortunately the management console doesn’t do a great job of bridging that gap, and when you buy a reserved instance you don’t even know which running instance it covers.

Basically you just buy a subscription for 1 or 3 years and you pay less for any machine that matches those criteria. For instance, say you reserved 1 Linux t1.small instance for 12 months and you are running 2 t1.small Linux instances at the moment. You will pay the reserved instance price for one of them and the on-demand price for the other. From a financial point of view it doesn’t matter which one is which. If you shut down one of those instances, again regardless of which one, you still pay the reserved instance price for the remaining one as it matches your reservation criteria.

So that’s all there is to it really: a reserved instance is just about billing and has nothing to do with an actual running instance.

3. Public/Elastic IP uncertainty

There are 3 types of IP addresses in AWS:

Private IPs are internal addresses that every instance is assigned. They remain the same throughout the lifespan of the instance and, as the name implies, they are not addressable from the Internet.

Public IPs are optional. They remain the same as long as the instance is running, but they are released when the instance is stopped, so you are likely to get a different one the next time you start it. That makes them unreliable for web-accessible applications.

Elastic IPs are basically static IPs that never change. By default AWS gives you up to 5 EIPs per region; if you need more you have to contact their support. They come free of charge as long as they are associated with a running instance, but it costs a small amount to keep them around unused.
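
As a sketch, allocating an EIP and pinning it to an instance with the JavaScript SDK looks something like this (the instance ID below is made up):

    var AWS = require('aws-sdk');
    var ec2 = new AWS.EC2({ region: 'eu-west-1' });

    // Allocate a new Elastic IP in the VPC scope...
    ec2.allocateAddress({ Domain: 'vpc' }, function (err, data) {
        if (err) { return console.error(err); }

        // ...then associate it with a running instance (hypothetical ID)
        var params = {
            InstanceId: 'i-0123456789abcdef0',
            AllocationId: data.AllocationId
        };
        ec2.associateAddress(params, function (err, result) {
            if (err) { return console.error(err); }
            console.log('Associated: ' + result.AssociationId);
        });
    });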