

So far I was using the console client, but I thought I could use a prettier web-based UI and came up with this:

DomChk53 AngularJS Client

It’s using AngularJS and Bootstrap, which significantly improved the development process.

The API in the backend is AWS API Gateway on a custom domain (api.domchk53.com), using Lambda functions to do the actual work. One great thing about API Gateway is that it’s very easy to set request rates:

Currently I have set it to a maximum of 5 requests per second. I chose this value because of the limitation on the AWS API, as stated here:

All requests – Five requests per second per AWS account. If you submit more than five requests per second, Amazon Route 53 returns an HTTP 400 error (Bad request). The response header also includes a Code element with a value of Throttling and a Message element with a value of Rate exceeded.

Of course, limiting this on the client side assumes a single client, so you may still get “Rate exceeded” errors even when running a single query at a time. I’m planning to implement a Node server using SQS to move the queue to the server side, but that’s not one of my priorities right now.

The Lambda function is straightforward enough. It just calls the checkDomainAvailability API method with the supplied parameters:

exports.handler = function (event, context) {
    var AWS = require('aws-sdk');

    // The Route 53 Domains API is only available in us-east-1
    var route53domains = new AWS.Route53Domains({ region: 'us-east-1' });

    var params = {
        DomainName: event.domain + '.' + event.tld
    };

    route53domains.checkDomainAvailability(params, function (err, data) {
        if (err) {
            context.fail(err);
        } else {
            context.succeed({
                Domain: event.domain,
                Tld: event.tld,
                CheckDate: new Date().toISOString(),
                RequestResult: "OK",
                Availability: data.Availability
            });
        }
    });
};
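
For reference, invoking this function with a test event only requires the domain name split into its two parts, matching the event.domain and event.tld fields used above:

{
    "domain": "example",
    "tld": "com"
}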

Usage

I wanted this tool to be an improvement over what AWS already provides. With the Management Console you can search a single domain name, and it is checked against the 13 most popular TLDs. If you need anything outside these 13 you have to pick them manually.

In DomChk53 you can search multiple domain names at once against all supported TLDs (293 as of this writing).

You can also group TLDs into lists, so you can, for example, search the most common ones (com, net, co.uk, etc.) separately from finance-related ones (money, cash, finance, etc.). Depending on the domain name, one group may be more relevant than another.

You can cancel a query at any time to avoid wasting precious requests if you change your mind about the domain.

What’s missing

I’m planning to leave it as is for a while, but when I find it in me to revisit the project I will implement:

  • Server-side queueing of requests
  • The option to export/email the results in PDF format

I’m also open to other suggestions…


I’ve been using my own dynamic DNS application (which I named DynDns53 and blogged about here). So far it only had a WPF client and I was happy with it, but I thought that if I could develop a web-based application I wouldn’t have to install anything (which is what I’m shooting for these days) and could achieve the same results.

So I built a JavaScript client with the AngularJS framework. The idea is exactly the same; the only difference is that it all happens inside the browser.

DynDns53 web client

Ingredients

To have a dynamic DNS client you need the following:

  1. A way to get your external IP address
  2. A way to update your DNS record
  3. An application that performs Steps 1 & 2 perpetually

Step 1: Getting the IP Address

I have done and blogged about this several times now. (Feels like I’m repeating myself a bit, I guess I have to find something original to work with. But first I have to finish this project and have closure!)

Since it’s a simple GET request it sounded easy, but I quickly hit the CORS wall when I tried the following bit:

app.factory('ExternalIP', function ($http) {
    // Fails in the browser: checkip.amazonaws.com doesn't send CORS headers
    return $http.get('http://checkip.amazonaws.com', { cache: false });
});

In my WPF client I can call whatever service I want whenever I want, but when running inside the browser things are a bit different. So I took a detour and created my own service that allows cross-origin resource sharing.

AWS Lambda & API Gateway

First I thought I could do it even without a Lambda function by using the HTTP proxy integration, simply returning what the external site returns:

Unfortunately this didn’t work, because it was returning the IP of the AWS machine that actually runs API Gateway. So I had to get the client’s IP from the request and send it back in my own Lambda function.

It turns out that in order to get HTTP headers you need to fiddle with some template mapping and assign the client’s IP address to a variable:
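
The mapping template itself boils down to a single assignment; a minimal sketch, using $context.identity.sourceIp, the API Gateway variable that holds the caller's address:

{
    "ip": "$context.identity.sourceIp"
}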

This can later be referred to in the Lambda function through the event parameter:

exports.handler = function (event, context) {
    // event.ip is populated by the API Gateway mapping template above
    context.succeed({
        "ip": event.ip
    });
};

And now that we have our own service, we can allow CORS and call it from our client inside the browser:
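
A version of the earlier factory pointing at the new endpoint would look something like this (the URL is hypothetical; the actual one depends on the API Gateway custom domain):

app.factory('ExternalIP', function ($http) {
    // Same idea as before, but this endpoint sends Access-Control-Allow-Origin headers
    return $http.get('https://api.example.com/ip', { cache: false });
});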

Step 2: Updating DNS

This bit is very similar to the WPF version; instead of the AWS .NET SDK I just used the JavaScript SDK. AWS has a great SDK builder which lets you select the pieces you need:

It also shows whether the service supports CORS. It’s a relief that Route53 does, so we can keep going.

The whole source code is on GitHub, but here’s the gist of it: loop through all the subdomains, get all the resource records in the zone, find the matching record and update it with the new IP:

  // Loop through all the configured domains and update each one
  $scope.updateAllDomains = function() {
    angular.forEach($rootScope.domainList.domains, function(value, key) {
      $scope.updateDomainInfo(value.name, value.zoneId);
    });
  };

  // Find the matching record set in the hosted zone and update it with the current external IP
  $scope.updateDomainInfo = function(domainName, zoneId) {
    var options = {
      'accessKeyId': $rootScope.accessKey,
      'secretAccessKey': $rootScope.secretKey
    };
    var route53 = new AWS.Route53(options);

    var params = {
      HostedZoneId: zoneId
    };

    route53.listResourceRecordSets(params, function(err, data) {
      if (err) {
        $rootScope.$emit('rootScope:log', err.message);
        console.log(err.message);
      } else {
        angular.forEach(data.ResourceRecordSets, function(value, key) {
          // Route53 returns record names with a trailing dot, hence the slice
          if (value.Name.slice(0, -1) == domainName) {
            ExternalIP.then(function(response) {
              var externalIPAddress = response.data.ip;
              $scope.changeIP(domainName, zoneId, externalIPAddress);
            });
          }
        });
      }
    });
  };

  // UPSERT the A record with the new IP address
  $scope.changeIP = function(domainName, zoneId, newIPAddress) {
    var options = {
      'accessKeyId': $rootScope.accessKey,
      'secretAccessKey': $rootScope.secretKey
    };

    var route53 = new AWS.Route53(options);
    var params = {
      ChangeBatch: {
        Changes: [
          {
            Action: 'UPSERT',
            ResourceRecordSet: {
              Name: domainName,
              Type: 'A',
              TTL: 300,
              ResourceRecords: [
                {
                  Value: newIPAddress
                }
              ]
            }
          }
        ]
      },
      HostedZoneId: zoneId
    };

    route53.changeResourceRecordSets(params, function(err, data) {
      if (err) {
        $rootScope.$emit('rootScope:log', err.message);
      } else {
        var logMessage = "Updated domain: " + domainName + " ZoneID: " + zoneId + " with IP Address: " + newIPAddress;
        $rootScope.$emit('rootScope:log', logMessage);
      }
    });
  };

The only part that tripped me up was that I wasn’t setting the TTL in the changeResourceRecordSets parameters and was getting an error, but I found a StackOverflow question that helped me get past the issue.

Step 3: A tool to bind them

Now the fun part: an AngularJS client to call these services. The UI is straightforward; it just requires the user to enter their AWS IAM keys and the domains to update.

I didn’t want to deal with the hassle of sending the keys to a remote server and hosting them securely. Instead, I thought it would be simpler to just use the browser’s HTML5 local storage. This way the keys never leave the browser.
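
A minimal sketch of that idea, with illustrative storage key names (the actual implementation is in the GitHub repository):

// The keys are persisted in localStorage only; they never leave the browser
// except in the SDK's signed calls to AWS itself
$rootScope.saveKeys = function (accessKey, secretKey) {
    localStorage.setItem('dyndns53.accessKey', accessKey);
    localStorage.setItem('dyndns53.secretKey', secretKey);
};

$rootScope.accessKey = localStorage.getItem('dyndns53.accessKey');
$rootScope.secretKey = localStorage.getItem('dyndns53.secretKey');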

It also only updates the IP address if it has changed, which saves unnecessary API calls.
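
That check is simple enough to sketch; the polling interval, the endpoint URL and the lastKnownIP variable below are illustrative rather than lifted from the actual code:

app.run(function ($http, $interval) {
    var lastKnownIP = null;
    // Poll the IP service periodically and only touch Route53 when the address changes
    $interval(function () {
        $http.get('https://api.example.com/ip').then(function (response) {
            if (response.data.ip !== lastKnownIP) {
                lastKnownIP = response.data.ip;
                // trigger updateAllDomains() here
            }
        });
    }, 5 * 60 * 1000); // every 5 minutes
});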

Also it’s possible to view what’s going on in the event log area.

I guess I can have my closure now and move on!


Paying a ton of money for a digital certificate, which costs nothing to generate, has always bugged me. Fortunately it isn’t just me, and recently I heard about Let’s Encrypt.

I was just planning to give it a go when I noticed a new service in the AWS Management Console:

Apparently AWS is now issuing free SSL certificates, which was too tempting to pass up, so I decided to dive in.

Enter AWS Certificate Manager

Requesting a certificate just takes seconds as it’s a 3-step process:

First, enter the list of domains you want the certificates for:

Wildcard SSL certificates don’t cover the zone apex (e.g. *.example.com doesn’t cover example.com), so I had to enter both. (Hey, it’s free, so no complaints here!)

Then review, confirm, and the request is made:

A verification email is sent to the email addresses listed in the confirmation step.

At this point I could define MX records and use Google Apps to create a new user to receive the verification email. The problem is I don’t want all this hassle, and I certainly don’t need another email account to monitor.

SES to the rescue

I had always considered SES a simple SMTP service for sending emails, but while dabbling with alternatives I realized that now we can receive emails too!

To receive emails you need to verify your domain first, and an MX record pointing to the AWS SMTP server must be added. Fortunately, since everything here is AWS, it can all be done automatically using Route53:
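
The record Route53 adds is a plain MX record on the domain pointing at the regional SES inbound endpoint, along these lines (assuming us-east-1; the domain is illustrative):

example.com.    MX    10 inbound-smtp.us-east-1.amazonaws.com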

After this we can move on; we’ll receive a confirmation email once the domain has been verified:

In the next step we decide what to do with incoming mail. We can bounce it, call a Lambda function, create an SNS notification, etc. These all sound fun to experiment with, but in this case I’ll opt for simplicity and just drop the messages into an S3 bucket.

The great thing is I can even assign a prefix, so I can use a single bucket to collect emails from a bunch of different addresses, all separated into their own folders.
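
For reference, the same receipt rule can also be created programmatically; a sketch with the JavaScript SDK, where the rule set, address, bucket and prefix names are all illustrative:

var AWS = require('aws-sdk');
var ses = new AWS.SES({ region: 'us-east-1' });

ses.createReceiptRule({
    RuleSetName: 'default-rule-set',
    Rule: {
        Name: 'drop-to-s3',
        Enabled: true,
        Recipients: ['admin@example.com'],
        Actions: [{
            S3Action: {
                BucketName: 'my-incoming-email',
                ObjectKeyPrefix: 'admin/' // one folder per address in the same bucket
            }
        }]
    }
}, function (err, data) {
    if (err) console.log(err.message);
});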

In step 3, we can specify more options. Another pleasant surprise was to see spam and virus protection:

After reviewing everything and confirming, we are ready to receive emails into our bucket. In fact, the nice folks at AWS are so considerate that they have already sent us a test email:

Back to certificates

OK, after a short detour we are back to getting our SSL certificate. As I didn’t have my mailbox set up during the validation step, I had to go to the Actions menu and select Resend validation email.

And after requesting it I immediately received the email containing a link to verify ownership of the domain.

After the approval process we get ourselves a nice free wildcard SSL certificate:

Test drive

To leverage the new certificate we need to create a CloudFront distribution. Here again we benefit from the integrated services: the certificate we were issued can be selected straight from the dropdown list:

So after entering simple basics like the domain name and default page, I created the distribution and pointed the Route53 records to it instead of the S3 bucket.

And finally, after waiting (quite a bit) for the CloudFront distribution to be deployed, we can see that little green padlock we’ve been looking forward to!

UPDATE 1 [03/03/2016]

Yesterday I was so excited about discovering this that I didn’t look any further into things like downloading the certificate and using it on my own servers.

Today, unfortunately, I realized that the usability is quite limited: it only works with AWS Elastic Load Balancer and CloudFront. I was hoping to use it with API Gateway, but even though that’s another AWS service it’s not integrated with ACM yet.

I do hope they make the certificate bits available so we can have full control over them and deploy them wherever we want. Considering this limitation, I guess Let’s Encrypt is a better option for now.
