fitness, concept2

I’ve been using a Concept2 Model C rowing machine for some time now and quite enjoying it as a form of workout. (Primarily because I can still watch Netflix or YouTube while rowing!)

Concept2 Model C Rowing Machine

Since I have some data accumulated in it, I decided to look into ways of getting it out and working on it, hoping it would give me some insights into possible ways of improving my stats.

Official Tools

To be honest the existing toolset that comes out of the box is quite sufficient.

LogBook

This is the official web application where you can monitor your workouts.

Concept LogBook

This application is quite good, really. You can manually enter your workouts and view your existing history. You can also create teams and participate in challenges, so there’s a social aspect to it.

iOS App: ErgData

The monitor connected to the rowing machine (Performance Monitor - PM5) supports Bluetooth, which can be easily paired with an iPhone. If you install the ErgData app on your phone you can sync the device with your phone and get your workouts out that way very easily. Better yet, it allows you to upload your workouts to LogBook. After you complete a workout, you can easily upload the results by clicking Sync.

Concept2 ErgData app

Unofficial Tools

RasPiRowing

I found this nice Raspberry Pi based project called RasPiRowing, developed by one of the staff members of Concept2.

Since I’m a fan of Raspberry Pi and have a whole bunch of them lying around, it didn’t take me long to install it and use it. It works just fine and comes with a fun fish game too:

FishPi Game

It’s a nice way of interacting with the Concept2. Since it can be accessed by a Python application I can build my own applications as well to get data out of the erg.

Developer Tools

SDK

There is an SDK available to download for both Mac and Windows.

I installed the Mac version which extracts the files under /Users/{username}/C2 PM SDK/

But I couldn’t find much useful stuff in there:

SDK contents

I tried to build the Xcode project but it gave a build error and I just left it at that.

API

They also provide an API which can be used to get the data out. This sounds like the most interesting part to me as I can develop my own custom tools based on this API.

In the documentation, they advise using the dev site first while trying out the API and then requesting access to the live data. You also need to register your application with Concept2 to be able to use their API.
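To give an idea, here’s a minimal sketch of what pulling workout history could look like from .NET, assuming an OAuth2 access token obtained after registering the application. The dev-site base URL and the users/me/results endpoint are taken from Concept2’s API documentation, so double-check them there before relying on this:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class LogbookClient
{
    static async Task Main()
    {
        // Dev site base URL; switch to the live site once your app is approved
        using (var client = new HttpClient { BaseAddress = new Uri("https://log-dev.concept2.com/api/") })
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "{ACCESS_TOKEN}");

            // Fetch the authenticated user's workout results as JSON
            var json = await client.GetStringAsync("users/me/results");
            Console.WriteLine(json);
        }
    }
}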


dev, aws, s3

When it comes to transferring files over a network, there’s always a risk of ending up with corrupted files. To prevent this on transfers to and from S3, AWS provides us with some tools we can leverage to guarantee the correctness of the files.

Verifying files while uploading

In order to verify the file was uploaded successfully, we need to provide AWS with the MD5 hash value of our file. Once the upload has been completed, AWS calculates the MD5 hash on their end and compares the two values. If they match, it means the upload went through successfully. So our request looks like this:

var request = new PutObjectRequest
{
    MD5Digest = md5,
    BucketName = bucketName,
    Key = key,
    FilePath = inputPath,
};

where we calculate the MD5 hash value like this:

using (var stream = new FileStream(fullPath, FileMode.Open, FileAccess.Read, FileShare.Read))
{
    using (var md5 = MD5.Create())
    {
        var hash = md5.ComputeHash(stream);
        return Convert.ToBase64String(hash);
    }
}

In my tests, it looks like if you don’t provide a valid MD5 hash, you get a WinHttpException with the inner exception message “The connection with the server was terminated abnormally”.

If you provide a valid but incorrect MD5, the exception thrown is of type AmazonS3Exception with the message “The Content-MD5 you specified did not match what we received”.

The Amazon SDK comes with 2 utility methods named GenerateChecksumForContent and GenerateChecksumForStream. At the time of this writing, GenerateChecksumForStream wasn’t available in the AWS SDK for .NET Core. So the only method that worked for me to calculate the hash was the one shown above.
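Putting the two snippets together, a minimal upload-with-verification sketch could look like the following; the client, bucket, key and file path are placeholders you’d wire up yourself:

using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

class UploadExample
{
    static async Task UploadWithVerificationAsync(IAmazonS3 s3Client, string bucketName, string key, string inputPath)
    {
        // Compute the MD5 hash of the local file
        string md5;
        using (var stream = new FileStream(inputPath, FileMode.Open, FileAccess.Read, FileShare.Read))
        using (var algorithm = MD5.Create())
        {
            md5 = Convert.ToBase64String(algorithm.ComputeHash(stream));
        }

        var request = new PutObjectRequest
        {
            MD5Digest = md5,
            BucketName = bucketName,
            Key = key,
            FilePath = inputPath
        };

        try
        {
            // AWS recomputes the hash server-side and rejects the upload on mismatch
            await s3Client.PutObjectAsync(request);
        }
        catch (AmazonS3Exception ex)
        {
            Console.WriteLine($"Upload verification failed: {ex.Message}");
            throw;
        }
    }
}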

Verifying files while downloading

When downloading, we use the EtagToMatch property of GetObjectRequest to perform the verification:

var request = new GetObjectRequest
{
    BucketName = bucketName,
    Key = key,
    EtagToMatch = "\"278D8FD9F7516B4CA5D7D291DB04FB20\"".ToLower() // Case-sensitive
};

using (var response = await _s3Client.GetObjectAsync(request))
{
    await response.WriteResponseStreamToFileAsync(outputPath, false, CancellationToken.None);
}

When we request the object this way, if the MD5 hash we send doesn’t match the one on the server, we get an exception with the following message: “At least one of the pre-conditions you specified did not hold”.

One important point to keep in mind is that AWS keeps the hashes in lower-case and the comparison is case-sensitive, so make sure to convert everything to lower-case before you send it out.


dev, aws, certification, certified cloud practitioner

As I decided to get the full AWS certification, I started preparing for the exams. I wanted to start with the Cloud Practitioner exam just to get myself accustomed to the exam procedure in general. Here are my notes:

Exam Objectives

According to Amazon’s official exam description page, this exam validates the following aspects:

  • Define what the AWS Cloud is and the basic global infrastructure
  • Describe basic AWS Cloud architectural principles
  • Describe the AWS Cloud value proposition
  • Describe key services on the AWS platform and their common use cases (for example, compute and analytics)
  • Describe basic security and compliance aspects of the AWS platform and the shared security model
  • Define the billing, account management, and pricing models
  • Identify sources of documentation or technical assistance (for example, whitepapers or support tickets)
  • Describe basic/core characteristics of deploying and operating in the AWS Cloud

Main Subject Areas

  1. Billing and pricing (12%)
  2. Cloud concepts (28%)
  3. Technology (36%)
  4. Security (24%)

Preparation Notes

aws.training Online Training Notes

Cloud Computing

  • On-demand delivery of IT resources. Can scale up and down based on needs.
  • Fosters agility (number one reason why customers switch to cloud computing): Speed (global reach), experimentation (operations as code, templated environments with CloudFormation) and culture of innovation (experiment quickly with low cost)
  • Region vs Availability Zone (AZ): Region is a physical location in the world which contains multiple AZs. AZs contain one or more discrete data centers with independent resources and housed in different facilities.
  • Using Auto Scaling and ELB, scale up and down and only pay for what you use.
  • Ability to deploy systems in multiple regions (lower latency)
  • Ability to choose the region where data is stored
  • AWS is responsible for data center security
  • Security policy can be formalized (as code)
  • Ability to recover from failures

Core Services

  • Global Infrastructure:
    • Regions: Have multiple AZs
    • Availability Zones: Have one or more data centres. They all have different power supplier companies.
    • Edge Locations: Used by CloudFront.
  • Amazon Virtual Private Cloud (VPC)
    • Uses same concepts as on-premise networking
    • VPC can span across multiple AZs
    • Supports multiple subnets (each of which can be deployed in a different AZ)
    • Can create public-facing subnets and private-facing subnets within the same VPC
    • Each account can create multiple VPCs
    • Using fewer VPCs is recommended to avoid complexity
    • Can assign Internet Gateways to specific subnets to allow public access

  • Security Groups
    • Act like a built-in firewall
    • Best practice: Allow what’s required only and block everything else
  • Compute Services
    • Amazon Lightsail: Managed Virtual Private Servers service
      • Fixed price.
      • Includes a static IP, DNS management and storage
      • Fixed configuration
      • Uses t2 class EC2 instances under the hood
    • AWS Elastic Compute Cloud (EC2)
      • Difference between EC2-Classic and EC2-VPC
        • EC2-Classic: Your instances run in a single, flat network that you share with other customers.
        • EC2-VPC: Your instances run in a virtual private cloud (VPC) that’s logically isolated to your AWS account.
    • AWS Lambda
      • No servers to manage
      • Pay as you go: Only pay for the time your code runs
      • Continuous scaling
      • Supports subsecond metering. Charged for every 100 milliseconds of execution time
      • Some limitations apply: AWS Lambda Limits
    • AWS Elastic Beanstalk
      • Platform as a service
      • Allows quick deployments of applications
      • Allows HTTPS on load balancers
      • Supports various platforms (node.js, python etc)
      • Provisions the resources required (EC2, ELB etc) automatically
    • Application Load Balancer
      • 2nd type of load balancer offered by ELB

      • Comes with new features

      • Supports routing to containers
      • Key terms:
        • Listeners: A process that checks for connection requests using the configuration (protocol, port)
        • Target: Destination for traffic
        • Target Group: Each target group routes requests to one or more registered targets
      • Target checks can be performed per target group basis
      • Integrates with ECS and supports dynamic ports utilized by scheduled containers
      • Need to create at least 2 AZs when creating an Application Load Balancer
      • Ability to route to different target groups based on port or path
    • Elastic Load Balancer
      • Supports sticky sessions
      • Supports multiple AZs and cross-zone balancing
      • For HTTP/HTTPS it uses the “Least Outstanding” method to route the request. For TCP, it uses “Round Robin”. The least outstanding routing algorithm is defined as “A ‘least outstanding requests routing algorithm’ is an algorithm that chooses which instance receives the next request by selecting the instance that, at that moment, has the lowest number of outstanding (pending, unfinished) requests.”
    • Auto Scaling
      • Adding more instances: Scaling out; terminating instances: Scaling in
      • Launch configuration answers “What” (AMI, Instance type, Security Groups, Roles). Creating an LC is similar to creating a new EC2 instance.
      • Auto Scaling Group answers “Where” (VPC and subnet(s), load balancer, minimum and maximum instances, desired capacity)
      • Auto Scaling Policy answers “When” (Scheduled/on-demand/scale out or in policy)
  • Amazon EBS
    • Allows point-in-time snapshots and creation of a new volume from a snapshot
    • Supports encrypted volumes free of charge
    • EBS volume must be created in the same AZ as the EC2 instance that will use it
  • Amazon S3
    • Objects are stored redundantly across multiple facilities within the same region
    • The bucket names must be globally unique.
    • Can configure cross-region replication for backup and disaster recovery
    • Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket
  • Amazon Glacier
    • Vaults have access and lock policies attached to them
    • Each AWS account can create up to 1000 vaults
    • Can create an S3 lifecycle policy to move to Glacier then delete after a period of time
      • Supports up to 40TB max item size (S3 supports 5TB)
      • It costs more per retrieval
      • Vault Lock allows you to easily deploy and enforce compliance controls for individual Amazon Glacier vaults with a vault lock policy. You can specify controls such as “write once read many” (WORM) in a vault lock policy and lock the policy from future edits. Once locked, the policy can no longer be changed
  • Amazon RDS
    • Can create a standby copy in a different AZ within the same VPC
    • Can create multiple read replicas (in different regions as well)
  • Amazon DynamoDB
    • Always uses SSD for storage
    • Supports auto-scaling. Increases/decreases the throughput based on load
    • Tables are partitioned by primary key
    • Two query methods: Query and Scan
    • Query uses the primary key to find items. Scan can use any attribute.
    • Scan is slower than Query as it needs to look at all items
  • Amazon Redshift
    • Managed data warehouse
    • Supports standard SQL
    • Supports ODBC/JDBC connectors
  • Amazon Aurora
    • Managed MySQL-clone (compatible with MySQL)
    • After a crash it doesn’t need to replay redo log files; recovery is performed on every read operation, which reduces the restart time
  • AWS Trusted Advisor
    • Checks all the resources used and gives advice based on best practices
    • 5 categories:
      • Cost optimisation
      • Performance
      • Security
      • Fault tolerance
      • Service limits
    • Upgrading support plan enables all Trusted Advisor recommendations, free plan doesn’t include all
    • Has an API and can be used to automate optimisations
    • Can use it with CloudWatch alarms

Security

  • The AWS Shared Responsibility Model
    • AWS handles infrastructure security
    • AWS provides 3rd party audit reports
    • AWS’s responsibilities include: OS and database patching, firewall configuration and disaster recovery
    • Customer is responsible for putting logical access controls in place and protect account credentials
    • Customers are responsible to secure everything they put in the cloud
  • AWS Service Catalog
    • Allows to centrally manage common IT services that are approved for use on AWS
  • AWS IAM
    • Controls access to AWS resources
    • Handles Authentication (who can access resources) and authorization (how they can use resources)
    • Users can have programmatic access and/or console access.
    • Best practices
      • Delete root account keys. Instead use IAM accounts
      • Use MFA
      • Use groups
      • Use roles
      • Rotate credentials
      • Remove unnecessary users
  • AWS Security Compliance Programs
    • Risk Management: Follows these standards:
      • COBIT
      • AICPA
      • NIST
    • Constantly scans service endpoints for vulnerabilities
    • Compliance programs are listed here
  • AWS Security Resources

Architecting

  • Well-architected framework: https://aws.amazon.com/architecture/well-architected/
  • Five pillars of the framework
    • Operational excellence
    • Security
    • Reliability
    • Performance efficiency
    • Cost optimization
  • Fault Tolerance
    • Remain operational even if components fail
    • Built-in redundancy of an application’s components
  • High-Availability
    • A concept for the whole system
    • “Always” functioning and accessible
    • Without human intervention
    • HA Service Tools
      • Elastic Load Balancer
      • Elastic IP Addresses
      • Amazon Route 53
      • Auto Scaling
      • Amazon CloudWatch

Pricing and Support

  • Core concepts in billing
    • Pay as you go: No up front expenses
    • Pay less when you reserve: Reserved instances cost less
    • Pay even less per unit by using more: Tiered pricing for services such as S3, EC2 etc. Data transfer in is always free of charge.
    • Pay even less as AWS grows
  • Amazon RDS Costs
    • Clock hours of server time
    • Database characteristics
    • Database purchase type
    • Number of DB instances
    • Provisioned storage
      • No charge for backup storage of up to 100% of database storage for active databases. After the database is terminated, backup storage is charged
    • Additional storage
    • Requests
    • Deployment type
    • Data transfer

General Notes

Exam Centre

The exam centre was very small and there was some sort of music studio next door, so there was constant noise. Overall it was a bit disappointing to take the exam in a desolate business centre and in a small room, but it’s the same exam regardless, so I was able to focus on the questions after I got used to the noise.

Exam Process

  • My exam was scheduled at 3:00. I arrived early and the proctor allowed me to sit the exam at 2:00 as there were empty places in the exam room. It was a nice surprise because I definitely didn’t want to wait for another hour in that heat.
  • At one point, the screen froze. I had to call the proctor. He restarted the application. Fortunately it just resumed where it left off.
  • CCP is the easiest AWS exam but even so there were some challenging questions. Mostly the non-technical questions were hard for me (like questions related to support plans). I don’t think I’ll ever see those questions in other exams.

Exam Result

… and the result is : Pass

Amazon has an interesting scoring system apparently. Right after you submit the exam, the screen displays Pass or Fail but not the actual score. You receive that in a separate email. They don’t even announce what the passing score is, as they reserve the right to change it when they see fit. It’s also based on other candidates’ results, so it’s almost like a curve. Anyway, it was quite a relief to see the pass result on the screen. I’m still curiously waiting for the actual score though.

My next exam will be AWS Certified Solutions Architect – Associate. I’ll post my exam notes after that exam as well.


personal, leisure

My client moved to a new office recently. The view is quite nice from their office:

It’s very close to Regent’s Canal. I decided to take walks by the canal. Especially in summer time it makes a very nice walk.

It even has a boat converted into a bookshop:

Location

It’s quite long. The section I’m close to takes about 20 minutes end-to-end. I first finish the short end, then walk the whole path twice and the short end again, which makes a nice brisk 50-minute walk.

It crosses Regent’s Park too but unfortunately I’m not close to that section.

A bit of history

This brief introduction explains how it started (taken from Canal & River Trust’s page):

In 1812, the Regent's Canal Company was formed to cut a new canal from the Grand Junction Canal's Paddington Arm to Limehouse, where a dock was planned at the junction with the Thames. The architect John Nash played a part in its construction, using his idea of 'barges moving through an urban landscape'.

Completed in 1820, it was built too close to the start of the railway age to be financially successful and at one stage the Regent’s only narrowly escaped being turned into a railway. But the canal went on to become a vital part in southern England's transport system.

A nice walk

I might update this post with new pictures as I keep walking by the canal. Currently I like these ones:


dev, aws

A few years ago AWS announced a new SES feature: Incoming Emails. So far I have only used it once to receive domain verification emails to an S3 bucket but haven’t built a meaningful project. In this blog post my goal is to develop a sample project to demonstrate receiving emails with SES and processing those emails automatically by triggering Lambda functions.

As a demo project, I will build a system that automatically responds to the sender with my latest CV, as shown in the diagram below.

Receiving Email with Amazon Simple Email Service

Amazon Simple Email Service (SES) is Amazon’s SMTP server. Its core functionality has been sending emails, but Amazon kept adding more features such as using templates and receiving emails.

Step 1: Verify a New Domain

First, we need a verified domain to receive emails. If you already have one you can skip this step.

  • Step 1.1: In the SES console, click Domains –> Verify a New Domain
  • Step 1.2: Enter the domain name to verify and click Verify This Domain

  • Step 1.3: In the new dialog click Use Route 53

(This is assuming your domain is in Route53. If not you have to verify it by other means)

  • Step 1.4: Make sure you check Email Receiving Record checkbox and proceed

  • Step 1.5 Confirm verification status

Go back to the Domains page in the SES console and make sure the verification has been completed successfully.

In my example, it only took about 2 minutes.

Step 2: Create a Lambda function to send the CV

In the next step we will continue configuring SES to specify what to do with the received email. But first we need the actual Lambda function to do the work. Then we will connect it to SES so that it runs every time we receive an email at a specific address.

  • Step 2.1: Create a Lambda function from scratch

  • Step 2.2: Create an SNS topic

SES will publish emails to this topic. We will do the plumbing and give necessary permissions later.

  • Step 2.3: Create subscription for the Lambda function to SNS topic

Now we tie the topic to our Lambda by creating a subscription

  • Step 2.4: Attach necessary permissions to the new role

In my example, I store my CV in an S3 bucket. So the role would need permissions to receive SNS notifications, read from the S3 bucket and send emails.

By default a new Lambda role comes with AWSLambdaBasicExecutionRole attached to it

First add this to have read-only access to a single bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::{BUCKET NAME}",
                "arn:aws:s3:::*/*"
            ]
        }
    ]
}

Then add this to be able to send emails:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail",
                "ses:SendTemplatedEmail",
                "ses:SendRawEmail"
            ],
            "Resource": "*"
        }
    ]
}

I like to keep these small, modular policies so that I can reuse them in other projects.

After adding the policies you should be able to see these in your Lambda function’s access list when you refresh the function’s page:

Step 3: Develop the Lambda function

In this example I’m going to use .NET Core 2.0 and C# to create the Lambda function.

  • Step 3.1: Install Lambda templates

In Windows, AWS Lambda function templates come with the AWS Visual Studio extension, but on Mac we have to install them via the command line.

dotnet new -i Amazon.Lambda.Templates::*
  • Step 3.2: Create Lambda function
dotnet new lambda.EmptyFunction --name SendEmailWithAttachmentFromS3 --profile default --region eu-west-1
  • Step 3.3: Implement the function

Now it’s time for the actual implementation. I’m not going to paste the whole code here; the best place to get it is its GitHub repository.
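The gist of it is: parse the sender’s address out of the SNS notification, download the CV from S3, and send it back as a raw MIME email via SES. Below is a condensed sketch of that flow, not the exact code in the repository; the bucket, key and sender address are placeholders, and the notification parsing assumes SES’s standard receipt notification format (the sender is under mail.source):

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.SNSEvents;
using Amazon.S3;
using Amazon.SimpleEmail;
using Amazon.SimpleEmail.Model;
using Newtonsoft.Json.Linq;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace SendEmailWithAttachmentFromS3
{
    public class Function
    {
        // Placeholders - use your own bucket, key and verified sender address
        private const string Bucket = "{CV BUCKET}";
        private const string Key = "cv.pdf";
        private const string Sender = "cv@{YOUR DOMAIN}";

        public async Task FunctionHandler(SNSEvent snsEvent, ILambdaContext context)
        {
            // The SNS message body is SES's receipt notification in JSON;
            // the original sender is under mail.source
            var notification = JObject.Parse(snsEvent.Records[0].Sns.Message);
            var recipient = (string)notification["mail"]["source"];

            // Download the CV from S3 into memory
            byte[] attachment;
            using (var s3 = new AmazonS3Client())
            using (var response = await s3.GetObjectAsync(Bucket, Key))
            using (var ms = new MemoryStream())
            {
                await response.ResponseStream.CopyToAsync(ms);
                attachment = ms.ToArray();
            }

            // Build a raw MIME message with the PDF attached
            var boundary = "NextPart_" + Guid.NewGuid().ToString("N");
            var sb = new StringBuilder();
            sb.AppendLine($"From: {Sender}");
            sb.AppendLine($"To: {recipient}");
            sb.AppendLine("Subject: My latest CV");
            sb.AppendLine("MIME-Version: 1.0");
            sb.AppendLine($"Content-Type: multipart/mixed; boundary=\"{boundary}\"");
            sb.AppendLine();
            sb.AppendLine($"--{boundary}");
            sb.AppendLine("Content-Type: text/plain; charset=us-ascii");
            sb.AppendLine();
            sb.AppendLine("Please find my latest CV attached.");
            sb.AppendLine($"--{boundary}");
            sb.AppendLine("Content-Type: application/pdf");
            sb.AppendLine("Content-Transfer-Encoding: base64");
            sb.AppendLine($"Content-Disposition: attachment; filename=\"{Key}\"");
            sb.AppendLine();
            sb.AppendLine(Convert.ToBase64String(attachment, Base64FormattingOptions.InsertLineBreaks));
            sb.AppendLine($"--{boundary}--");

            // Send it out through SES
            using (var ses = new AmazonSimpleEmailServiceClient())
            using (var raw = new MemoryStream(Encoding.ASCII.GetBytes(sb.ToString())))
            {
                await ses.SendRawEmailAsync(new SendRawEmailRequest
                {
                    RawMessage = new RawMessage(raw)
                });
            }
        }
    }
}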

  • Step 3.4: Deploy the function

Create an IAM user with access to Lambda deployment and create a profile locally named deploy-lambda-profile.

dotnet restore
dotnet lambda deploy-function send_cv

Step 4: Create a Receipt Rule

Now that we have a verified domain, we need a rule to receive emails.

In my example project, I’m going to use an email address that will send my latest CV to the sender’s email address.

  • Step 4.1: In the Email Receiving section click on Rule Sets –> Create a Receipt Rule

  • Step 4.2: Add a recipient

  • Step 4.3: Add an Action

Now we choose what to do when an email is received. In this example I want it to be published to the SNS topic that I created earlier. I could invoke the Lambda function directly, but leveraging publish/subscribe gives me more flexibility: I can change the subscriber in the future or add more things to do without affecting the rule configuration.

Since it supports multiple actions, I could choose to invoke Lambda directly and add more actions later if need be, but I’d like to use a standard approach in which all events are published to SNS and the interested parties subscribe to the topics.

I chose UTF-8 because I’m not expecting any data in the message body so it doesn’t matter too much in this example.

  • Step 4.4: Give it a name and create the rule.

Step 5: Test end-to-end

Now that it’s all set up, it is time to test.

  • Step 5.1: Send a blank email to cv@vlkn.me (Or any other address if you’re setting up your own)

  • Step 5.2: A few seconds later, receive an email with the attachment:

The second email is optional. Basically, I created an email subscriber too, so that whenever a blank email is received I get notified by SNS directly. This helps me keep an eye on the traffic, if there is any.


travel, leisure, personal

I’ve decided to have regular trips inside the UK and as my first stop I chose Bletchley Park. This post is a compilation of my notes about the trip so that I can take a look as a refresher for the next one.

Departure: London Euston to Milton Keynes Central

I booked train tickets on Virgin Trains and I was very pleased with the experience.

They sent helpful email and text messages before and after the platform was announced. The boarding process was completely hassle-free: I just showed the QR code that was saved in my Apple Wallet and the Virgin Trains app. I used the Wallet this time but the app would probably work just as well. The train departed at the exact time, so all went well. Also, there was a charging unit at my seat so I got to charge my laptop and phone on the way.

Milton Keynes Central to Bletchley Park

Euston to Milton Keynes Central takes only 30 minutes. It’s shorter than my daily commute so it ended before I knew it. The train passes through Bletchley station but doesn’t stop, so I had to buy a return ticket to Bletchley too (which is just 1 stop away and takes a couple of minutes).

Bletchley Park

Arrived at Bletchley Park at around 9:20. It opens at 9:30 on Saturdays but it opened earlier, so I didn’t have to wait too long.

Park entrance

Overall the staff members were very kind and helpful. In the lobby, I had my ticket printed, which apparently is an annual pass: you just buy one ticket and visit the park as many times as you want for a whole year, which sounds like a great deal. At the end of the visit I was so overwhelmed with information that I just might take them up on that offer and visit again to have it all sink in.

Bletchley Park

Apart from its historic significance, it’s a very beautiful park too. The weather was especially great when I first arrived, so I didn’t mind just taking a tour outside and enjoying the view.

The park is smaller than it looks at first glance at the map. Walking around the whole park takes just a few minutes of brisk walking, so it’s easy to find and visit all the buildings.

One interesting thing I learned was that the founder of Graph Theory was one of the codebreakers there: William Thomas Tutte

Graph Theory

The building called The Mansion included the library and was the headquarters and the recreation centre.

Library

Enigma machine

The actual codebreaking was carried out in buildings called Huts. They were very well-preserved and authentic. They also added projections and sound recordings to make the whole experience more realistic.

Hut 3

Hut 11

Overall, the level of technical detail was overwhelming. They explained every machine and approach used to break the codes in detail.

The Bombe machine

I think I will need to do some research on my own and go back one more time to fill in the blanks.

Milton Keynes

After the visit to the park I had a tour of Milton Keynes. It’s a small town with lots of parks, lakes and kind people.

Everybody I met was very nice. I’m not sure if it was my luck or just because people are less stressed in small towns.

Teardrop Lake

Furzton Lake

Notes to self

  • Check out local transportation
  • Work on the itinerary instead of just picking one direction and walking

Conclusion

It was a great weekend overall. Bletchley Park is a beautiful park and visiting a place which played such an important role in World War 2 was very exciting. I think I had better go back and complete the parts that I missed the first time as there is so much information to digest.


aws, certification

I’ve been working with AWS for years. Since I love everything about it and am planning to use it for the foreseeable future, I’ve decided to go ahead and get the official certificates. This is to make sure I’ve covered all the important aspects of AWS fully. Also, it motivates me to develop more projects and blog posts on it.

Overview

There are 2 main categories of tracks:

  • Role-based Certifications
  • Specialty Certifications

The tracks and exam paths to take are shown in the diagram below:

My plan is to start with Cloud Practitioner exam, continue with AWS Solutions Architect track and move on to developer and sysops tracks.

Costs

I think it’s important to analyze costs first to assess whether or not this is a journey you want to start.

Individual Exam Costs

Exam Name                         | Cost    | Notes
AWS Certified Cloud Practitioner  | 100 USD | Optional
Associate-level exams             | 150 USD |
Professional-level exams          | 300 USD |
Specialty exams                   | 300 USD |
Recertification exams             | 75 USD  | Recertification is required every two years for all AWS Certifications
Associate-level practice exams    | 20 USD  |
Professional-level practice exams | 40 USD  |

Total Tracks Costs

Exam Track                    | Total Cost | With VAT | Notes
AWS Solutions Architect       | 450 USD    | 540 USD  |
AWS Certified DevOps Engineer | 450 USD    | 540 USD  |
All Associate Level Exams     | 300 USD    | 360 USD  | 3 Exams
All Professional Level Exams  | 600 USD    | 720 USD  | 2 Exams (There’s no professional level for developer; both associate level exams lead to DevOps Engineer)
All Exams                     | 1150 USD   | 1380 USD | Includes the optional Cloud Practitioner exam

The total cost is not exactly cheap, but I think in the end it’s worth it.

Taking the Exams

It all starts with the aws.training site. Just sign in with your Amazon account or create a new one. This allows you to take the free online courses. To take the exams you need a separate account; I think this is because they partnered with a 3rd party to provide the exams.

Registration is quite simple. Just provide your name and address and search for an exam centre.

Online Training

Free Courses

AWS Training

This is the official certification site of AWS. It allows the user to enroll to courses and view their transcript.

It’s a bit hard to find the actual course after you enrol because you can’t jump to the contents from search results. What you should do is first go to My Transcript, and under Current Courses you should be able to see the course and a link that says “Open”. Clicking that link takes you to the actual content.

The site has more content; I’ll discover more as I go along.

edX

edX have recently launched 3 free AWS courses.

Various Training Resources

Conclusion

As my favourite saying goes: “It’s not about the destination, it’s about the journey”.

AWS certification for me is not a destination. It just plays a role for me to stay on course and stay motivated to create more projects and blog posts in a timely manner.

I’m hoping to see this journey through to completion. I’ll be posting more on AWS and my journey on certification soon.


personal, leisure, travel

This is getting harder every year. When I first made up this little tradition of mine 2 years ago, I had 5 pints in 5 different pubs in my neighbourhood. This year my goal was to have 7 pints, but it didn’t quite work out!

Attempt #1: Bermondsey Beer Mile

Apparently it’s a thing! I took the train to London Bridge and walked all the way to the first pub at midday, only to find out it was completely full! Not being a huge fan of crowded places, I made a quick change of plans and decided any pub would do! So I kept on walking to hunt for empty pubs.

Attempt #2: Random pubs

It was a very beautiful sunny day to be outside. So I really liked spending time in this pub: The Old Bank

I guess the problem was that I spent too much time in this one, so having to visit 6 more started to feel daunting. But I soldiered on!

My second stop was a pub named The Ancient Foresters

with a rude-ish barmaid but a nice cider brand called Hogs Back

OK, after the cider here I was really looking forward to going home and enjoying the rest of my day off, so I had to accept stopping at 2 pints and failing miserably.

Attempt #3: Something healthy

Since I’m in charge of making up the rules for this so-called tradition, I thought I could make it a more useful and healthy one. So for this year I decided to run 7K to commemorate the past 7 years in the UK!

I’ve recently started running early in the mornings (mostly to play Ingress and capture enemy portals!), so I thought it would be a good fit to dedicate a running session specifically to the anniversary. Maybe not as fun as the 7 pints thing but still better than nothing!

Ideas for next year

So it has made itself abundantly clear that it’s hard to find 7-8 things of the same type. So next year I might run again (8K this time) or divide the celebrations into 2 days and make it more manageable that way! Maybe 8 parks and/or museums? Or maybe mixing and matching would work. Since the main idea is to force myself to go out and do something, I guess any celebration would do. We’ll see how it goes next year.


dev, aws, api gateway

API Gateway is Amazon’s managed API service. Serverless architecture is growing on me more every day. I think leveraging infinite auto-scaling and only paying for what you use makes perfect sense. But for a customer-facing API, the first thing that needs to be set up is a custom domain, which might be a bit involved when SSL certificates come into play. In this post I’d like to create an API from scratch and assign a custom domain name to it.

Step 1: Create an API

Creating an API is straightforward: just assign a meaningful name and description. However, to me it was a bit confusing when it came to choosing the endpoint type.

The two options provided are: Regional and Edge optimized.

  • Edge-optimized API endpoint: The API is deployed to the specified region and a CloudFront distribution is created. API requests are routed to the nearest CloudFront Point of Presence (POP).

  • Regional API endpoint: This type was added in November 2017. The main goal is to prevent a roundtrip for in-region requests. API requests are targeted directly to the region-specific API Gateway without going through any CloudFront distribution.

Custom domain names are supported for both endpoint types.

In this example, I’ll use Regional endpoint type. For further reading, here’s a nice blog post about endpoint types.

Step 2: Create a resource and method

For demonstration purposes I created a resource called customer and a GET method which calls a mock endpoint.

Step 3: Deploy the API

From the Actions menu in Resources tab, I selected Deploy API.

Deployment requires a stage. Since this is the first deployment, I had to create a new stage called test. A new stage can be created while deploying. After the deployment test stage looks like this:

At this point API Gateway has already assigned a non-user-friendly URL:

https://81dkdt6q81.execute-api.eu-west-2.amazonaws.com/test

This is the root domain of the API. So I was able to call the endpoint like this:

https://81dkdt6q81.execute-api.eu-west-2.amazonaws.com/test/albums

My goal was to get it working with my own domain such as:

https://hmdb.myvirtualhome.net/albums

Step 4: Generate the certificate in ACM

I’m using Route53 for all my domains and using ACM (AWS Certificate Manager) for generating SSL/TLS certificates. Before creating the custom domain name I needed my certificate available.

The wizard is quite simple: I just added the subdomain for the API and selected DNS validation.

After the review comes the validation process. Since I’m using Route 53 and ACM plays well with it, it simply provided a nice big button that said Create record in Route 53.

After clicking and confirming I got this confirmation message:

After waiting for about 3 minutes, the certificate was issued:

Step 5: Create Custom Domain Name in API Gateway

Now that the certificate was ready I had to go back to API Gateway to create the custom domain name and associate it with the newly created cert.

First, I clicked on Custom Domain Names in the left menu and filled out the details. Make sure that your subdomain matches the one the certificate was generated for.

I assigned the /test path to the test stage I had created earlier. I will use the root path for the production stage when I deploy the final version.

After creating the custom domain, take note of Target Domain Name generated by AWS.

Step 6: Create A Record in Route 53

I also had to point DNS to the domain generated by API Gateway.

Since I was using a regional endpoint I had to map the custom domain name to the target domain name mentioned in the previous step.

Now the problem was when I tried to do it via AWS Management Console, it failed as explained in this StackOverflow answer.

So I had to do it via CLI as below:

aws route53 change-resource-record-sets --hosted-zone-id {ZONE_ID_OF_MY_DOMAIN} --change-batch file://changedns.json

whereas the contents of changedns.json were

{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "api.hmdb.myvirtualhome.net",
        "Type": "A",
        "AliasTarget": {
          "DNSName": "d-xyz.execute-api.eu-west-2.amazonaws.com",
          "HostedZoneId": "ZJ5UAJN8Y3Z2Q",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}

In the JSON above, DNSName is the Target Domain Name created by AWS in Step 5. The HostedZoneId (ZJ5UAJN8Y3Z2Q), on the other hand, is the zone ID of API Gateway, which is listed here.

UPDATE

If you are having issues running the command above, that might mean you don’t have a default profile set up with permissions to change DNS settings. To fix that:

1. Create a new user with no permissions

Go to IAM console and create a new user. Skip all the steps and download the credentials as .csv in the last step.

2. Assign required permissions

Create a new policy using the JSON template below and attach it to the new user

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "route53:ChangeResourceRecordSets",
            "Resource": "arn:aws:route53:::hostedzone/{ZONE ID OF YOUR DOMAIN}"
        }
    ]
}

3. Create a new profile for the user

aws configure --profile temp-route53-profile

and set the Access/Secret keys along with the region of your hosted zone.

Then run the first CLI command, providing the profile name:

aws route53 change-resource-record-sets --hosted-zone-id {ZONE_ID_OF_MY_DOMAIN} --change-batch file://changedns.json --profile temp-route53-profile

An important point here is to get your hosted zone ID from Route 53. The API Gateway console shows a hosted zone ID which is actually the AWS API Gateway zone ID. We use that zone ID in our DNS configuration (the changedns.json file in this example), but the hosted zone ID we provide on the command line is our domain’s zone ID, which can be found in Route 53.

Step 7: Test

So after creating the alias for my API I visited the URL on a browser and I was able to get the green padlock indicating that it loaded the correct SSL certificate.


dev, aws, route53, angular, dotnet core, dynamic dns

A few years back I developed a project called DynDns53. I was fed up with the dynamic DNS tools available and thought I could easily achieve the same functionality since I had already been using AWS Route 53.

Fast forward a few years: due to some neglect on my part and technology moving so fast, the project started to feel outdated and abandoned. So I decided to revise it.

Key improvements in this version are:

  • Core library is now available in NuGet so anyone can build their own clients around it
  • A new client built with .NET Core so that it runs on all platforms now
  • A Docker version is available that runs the .NET Core client
  • A new client built with Angular 5 to replace the legacy AngularJS
  • CI integration: Travis is running the unit tests of core library
  • Revised WPF and Windows Service clients and fixed bugs
  • Added more detailed documentation on how to set up the environment for various clients

I also kept the old repository but renamed it to dyndns53-legacy. I might archive it at some point as I’m not planning to support it any longer.

Available on NuGet

NuGet is a great way of installing and updating libraries. I thought it would be a good idea to make use of it in this project so that it can be used without cloning the repository.

With .NET Core it’s quite easy to create a NuGet package. Just navigate to the project folder (where the .csproj file is located) and run this:

dotnet pack -c Release

The default configuration it uses is Debug, so make sure you’re using the correct build and a matching pack command. You should be able to see a screen similar to this:

Then push it to NuGet:

dotnet nuget push ./bin/Release/DynDns53.CoreLib.1.0.0.nupkg -k {NUGET.ORG API_KEY} -s https://api.nuget.org/v3/index.json

To double-check you can go to your NuGet account page and under Manage Packages you should be able to see your newly published package:

Now we play the waiting game! Because it may take some time for the package to be processed by NuGet. For example, I saw the warning shown in the screenshot 15 minutes after I pushed the package:

Generally this is a quick process but the first time I published my package, I got my confirmation email about 7 hours later so your mileage may vary.

If you need to update your package after it’s been published, make sure to increment the version number before running dotnet pack. In order to do that, you can simply edit the .csproj file and change the Version value:

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <PackageId>DynDns53.CoreLib</PackageId>
    <Version>1.0.1</Version>
    <Authors>Volkan Paksoy</Authors>
    <Company></Company>
  </PropertyGroup>

Notes

  • Regarding the NuGet API key: they recently changed their approach to keys. Now you only have one chance to save your key somewhere else. If you don’t save it, you won’t be able to access it via their UI. You can create a new one of course, so it’s no big deal. But to avoid key pollution you might want to save it in a safe place for future reference.

  • If you are publishing packages frequently, you may not be able to get the updates even after they have been published. The reason is that the packages are cached locally, so make sure to clean your cache before you try to update the packages. On Mac, Visual Studio doesn’t have a Clean Cache option as of this writing (unlike Windows), so you have to go to your user folder and remove the packages under the {user}/.nuget/packages folder, or clear the cache from the command line as shown below. After this, you can update the packages and you should get the latest validated version from NuGet.
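For reference, this is the CLI command that clears all local NuGet caches (including the global packages folder):

dotnet nuget locals all --clear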

.NET Core Client

Prerequisites

First, you’d need an IAM user who has access to Route53. You can use the policy template below to give the minimum possible permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ListResourceRecordSets",
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": "arn:aws:route53:::hostedzone/{ZONE ID}"
        }
    ]
}

Only 2 actions are performed, so as long as you remember to update the policy with the new zone IDs when you need to manage other domains, this should work fine for you.

Usage

Basic usage is very straightforward. Once compiled you can supply the IAM Access and Secret Keys and the domains to update with their Route53 Zone IDs as shown below:

dotnet DynDns53.Client.DotNetCore.dll --AccessKey {ACCESS KEY} --SecretKey {SECRET KEY} --Domains ZoneId1:Domain1 ZoneId2:Domain2 

Notes

  • The .NET Core console application uses the NuGet package. One difference between .NET Core and classic .NET applications is that the packages are no longer stored along with the application. Instead they are downloaded to the user’s folder under the .nuget folder (e.g. on a Mac it’s located at /Users/{USERNAME}/.nuget/packages)

Available on Docker Hub

Even though it’s not a complex application, I think it’s easier and hassle-free to run it in a self-contained Docker container. Currently it only supports Linux containers. I might develop a multi-architecture image in the future if need be, but for now Linux only is sufficient for my needs.

Usage

You can get the image from Docker hub with the following command:

docker pull volkanx/dyndns53

and running it is very similar to running the .NET Core Client as that’s what’s running inside the container anyway:

docker run -d volkanx/dyndns53 --AccessKey {ACCESS KEY} --SecretKey {SECRET KEY} --Domains ZoneId1:Domain1 ZoneId2:Domain2 --Interval 300

The command above runs the container in daemon mode so that it can keep on updating the DNS every 5 minutes (300 seconds).

Notes

  • I had an older Visual Studio 2017 for Mac installation and it didn’t have Docker support. The installer is not granular enough to pick specific features, so my solution was to reinstall the whole thing, at which point Docker support was available in my project.

  • After adding Docker support, the default build configuration becomes docker-compose. But it doesn’t work straight away, as it throws an exception saying:

      ERROR: for dyndns53.client.dotnetcore  Cannot start service dyndns53.client.dotnetcore: Mounts denied:
      The path /usr/local/share/dotnet/sdk/NuGetFallbackFolder
      is not shared from OS X and is not known to Docker.
      You can configure shared paths from Docker -> Preferences... -> File Sharing.
      See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
    

I added the folder it mentions in the error message to shared folders as shown below and it worked fine afterwards:

  • Currently it only works on Linux containers. There’s a nice article here about creating multi-architecture Docker images. I’ll try to make mine multi-arch as well when I revisit the project or when there is an actual need for it.

Angular 5 Client

I’ve updated the web-based client using Angular 5 and Bootstrap 4 (Currently in Beta) which now looks like this:

I kept a copy of the old version which was developed with AngularJS. It’s available at this address: http://legacy.dyndns53.myvirtualhome.net/

Notes

  • After I added AWS SDK package I started getting a nasty error:

      ERROR in node_modules/aws-sdk/lib/http_response.d.ts(1,25): error TS2307: Cannot find module 'stream'.
    

    Fortunately the solution is easy, as shown in the accepted answer here. Just remove the “types: []” line in the tsconfig.app.json file. Make sure you’re updating the correct file though, as there is a similarly named tsconfig.json in the root. What we are after is the tsconfig.app.json under the src folder.

  • In this project, I use 3 different IP checkers (AWS, DynDns and a custom one I developed myself a while back, running on Heroku). Calling these from other clients is fine, but in the web application I bumped into CORS issues. There are possible solutions for this:

    1. Create your own API to return the IP address: In the previous version, I created an API with AWS API Gateway which uses a very simple Lambda function to return the caller’s IP address:

       exports.handler = function(event, context) {
           context.succeed({
               "ip": event.ip
           });
       }
      

      I created a GET method for my API and used the above Lambda function. Now that I had full control over it, I was able to enable CORS as shown below:

    2. The other solution is “tricking” the browser by injecting CORS headers using a Chrome extension. There are a number of them, but I use the one aptly named “Allow-Control-Allow-Origin: *”.

      After installing it you just enable it and getting the external IP works fine.

      It’s good practice to filter it for your specific needs so that it doesn’t affect other sites (I had some issues with Google Docs when this was turned on).

CI Integration

I created a Travis integration, which is free since my project is open-source. It runs the unit tests of the core library automatically. I also added the shiny badge that shows the build status to the project’s readme file.


google cloud platform, dev, speech-to-text

Just out of curiosity I wanted to play around with Google Cloud Platform. They give $300 free credit for a 12 month trial period so I thought this would be a good chance to try it out.

The APIs I wanted to sample were speech recognition and translation.

Setting Up SDK

I followed the quick start guide which is a step-by-step process so it was quite helpful to get acquainted with the basics.

To be able to follow the instructions I downloaded and installed the GCloud SDK. On Mac it’s quite easy:

curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init

And once it’s complete it requires you to log in to your account and grant access to SDK:

Testing the API

After the initial setup I tried the sample request and it worked just fine:
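For reference, the sample request in the quickstart is a POST to the speech:recognize endpoint with a body along these lines (reproduced from Google’s documentation, so double-check it against the current version):

{
  "config": {
      "encoding": "FLAC",
      "sampleRateHertz": 16000,
      "languageCode": "en-US"
  },
  "audio": {
      "uri": "gs://cloud-samples-tests/speech/brooklyn.flac"
  }
}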

The example worked but also raised a few questions in my mind:

  1. Sample uses gs protocol. First off, what does it mean?
  2. Can I use good ol’ HTTP instead and point to any publicly accessible audio file?
  3. Can I use MP3 as encoding or does it need to be FLAC?

As learned from this SO thread, gs is used for Google Cloud Storage and “https://storage.googleapis.com” translates to “gs://”.

So the HTTP version of the test file is “https://storage.googleapis.com/cloud-samples-tests/speech/brooklyn.flac”. I was able to verify the file actually exists, but when I used this URL in the request instead of the gs:// URI, I got this error:

{
    "error": {
        "code": 400,
        "message": "Request contains an invalid argument.",
        "status": "INVALID_ARGUMENT"
    }
}

This also answered my second question. According to the documentation it only supports Google Cloud Storage currently:

uri contains a URI pointing to the audio content. 
Currently, this field must contain a Google Cloud Storage URI 
(of format gs://bucket-name path_to_audio_file). 

The answer to my 3rd question wasn’t very promising either. Apparently only a limited set of encodings is supported, and MP3 is not one of them.

If the authorization token expires, you can generate a new one by using the following commands:

export GOOGLE_APPLICATION_CREDENTIALS="/Path/To/Credentials/Json/File"

gcloud auth application-default print-access-token

So there’s no way of uploading a random MP3 and getting text out of it. But I’ll of course try anyway :-)

Test Case: Get lyrics for a Rammstein song and translate

OK, now that I have a free trial at my disposal and have everything set up, let’s create some storage, upload some files and put it to a real test.

Step 01: Get some media

My goal is to extract the lyrics of a Rammstein song and translate them to English. For that I chose the song Du Hast. Since I couldn’t find a way to download a FLAC version of the song, I decided to download the official video from Rammstein’s YouTube channel.

This is just for experimental purposes and I deleted the video after I was done testing, so it should be fine I guess. To download videos from YouTube you can refer to this TechAdvisor article.

I simply used VLC to open the YouTube video. The Window -> Media Information dialog shows the full path of the raw video file; I copied that path into a browser and downloaded the video.

Step 02: Prepare the media to process

Since all I need is the audio, I extracted it from the video file using VLC. It can probably be done in a number of ways, but VLC makes it quite straightforward:

Click File –> Convert & Stream, drag and drop the video

In the Choose Profile section, select Audio - FLAC.

The important bit here is that by default VLC converts to stereo audio with 2 channels, but Google doesn’t support that, which is explained in the documentation:

All encodings support only 1 channel (mono) audio

So make sure to customize it and enter 1 as the channel count:
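If you prefer the command line over the VLC UI, the same conversion can be done with ffmpeg (assuming you have it installed); the -ac 1 flag downmixes the audio to a single channel:

ffmpeg -i duhast.mp4 -ac 1 duhast.flac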

Step 03: Call the API

Now I was ready to call the API with my shiny single-channel FLAC file. I uploaded it to the Google Storage bucket I created, gave public access to it and tried the API.

Apparently, the speech:recognize endpoint only supports audio up to a minute long. This is the error I got after posting a 03:55 audio file:

“Sync input too long. For audio longer than 1 min use LongRunningRecognize with a ‘uri’ parameter.”

The solution is to use the speech:longrunningrecognize endpoint, which returns a JSON with only 1 value: name. This is a unique identifier assigned by Google to the job they created for us.

Once we have this ID we can query the result of the process by calling the GET operations endpoint.
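To illustrate the flow, here’s a rough sketch of the two calls using HttpClient. The endpoint URLs are from Google’s REST documentation; the access token, bucket and operation name are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class SpeechLongRunningExample
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "{ACCESS_TOKEN}");

            var body = @"{
              ""config"": { ""encoding"": ""FLAC"", ""languageCode"": ""de-DE"" },
              ""audio"": { ""uri"": ""gs://{BUCKET}/duhast.flac"" }
            }";

            // Start the job; the response only contains a "name" field
            var start = await client.PostAsync(
                "https://speech.googleapis.com/v1/speech:longrunningrecognize",
                new StringContent(body, Encoding.UTF8, "application/json"));
            Console.WriteLine(await start.Content.ReadAsStringAsync());

            // Poll the result using the returned name, e.g. operations/1234567890
            var result = await client.GetStringAsync(
                "https://speech.googleapis.com/v1/operations/{OPERATION_NAME}");
            Console.WriteLine(result);
        }
    }
}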

Fantastic! Some results. It’s utterly disappointing of course, as we only got a few words out of it, but it’s still something (I guess!).

Step 04: Compare the results

The following are the actual lyrics of the song:

Du
du hast
du hast mich
du hast mich gefragt
du hast mich gefragt, und ich hab nichts gesagt

Willst du bis der Tod euch scheidet
treu ihr sein für alle Tage

Nein

Willst du bis zum Tod, der scheide
sie lieben auch in schlechten Tagen

Nein

and this is what I got back from Google:

du hast 
du hast recht 
du hast 
du hast mich 
du hast mich 
du du hast 
du hast mich

du hast mich belogen

du hast 
du hast mich blockiert

It missed most of the lyrics. Maybe it was headbanging too hard that it couldn’t catch those parts!

Test Case: Slow German Podcast

Since my idea of translating German industrial metal lyrics on the fly failed miserably, I decided to try with cleaner audio where there is no music. I found a nice-looking podcast called Slow German. The nice thing about it is that it provides transcripts as well, so I can compare the Speech API results against them.

Obtained a random episode from their site and followed the steps above.

The first 4 paragraphs of the actual transcript of the podcast are as follows (the full transcript can be found here):

Denk ich an Deutschland in der Nacht, dann bin ich um den Schlaf gebracht.“ Habt Ihr diesen Satz schon einmal gehört? Er wird immer dann zitiert, wenn es Probleme in Deutschland gibt. Der Satz stammt von Heinrich Heine. Er war einer der wichtigsten deutschen Dichter. Aber keine Angst: Auch wenn er am 13. Dezember 1797 geboren wurde, sind seine Texte sehr aktuell und relativ leicht zu lesen. Ihr werdet ihn mögen!

Harry Heine wuchs in einem jüdischen Haushalt auf. Er war 13 Jahre alt, als Napoleon in Düsseldorf einzog. Schon als Schüler begann er, Gedichte zu schreiben. Beruflich sollte er eigentlich im Bankgeschäft arbeiten, aber dafür hatte er kein Talent. Also versuchte er es erst mit einem eigenen Geschäft für Stoffe, das aber bald pleite war. Dann begann er zu studieren. Er probierte es mit Jura und mit Geschichte, besuchte verschiedene Vorlesungen.

Mit 25 Jahren veröffentlichte er erste Gedichte. Es war eine aufregende Zeit für ihn. Er wechselte die Städte und die Universitäten, er beendete sein Jura- Studium und wurde promoviert. Um seine Chancen als Anwalt zu verbessern, ließ er sich protestantisch taufen, er kehrte also dem Judentum den Rücken und wurde Christ. Daher auch der neue Name: Christian Johann Heinrich Heine. Später hat er die Taufe oft bereut.

Wenn Ihr Heines Werke lest werdet Ihr merken, dass sie etwas Besonderes sind. Sie sind oft kritisch, sehr oft aber auch ironisch und humorvoll. Er spielt mit der Sprache. Er kann aber auch sehr böse sein und herablassend über Menschen schreiben. Seine Kritik auch an politischen Ereignissen und die Zensur, mit der er in Deutschland leben musste, führten Heinrich Heine nach Paris. Er wanderte nach Frankreich aus.

And this is the result I got from Google (Trimmed to match the above):

denk ich an Deutschland in der Nacht dann bin ich um den Schlaf gebracht habt ihr diesen Satz schon einmal gehört er wird immer dann zitiert wenn es Probleme in Deutschland gibt der Satz stammt von Heinrich Heine er war einer der wichtigsten deutschen Dichter aber keine Angst auch wenn er am 13. Dezember 1797 geboren wurde sind seine Texte sehr aktuell und relativ leicht zu lesen ihr werdet ihn mögen Harry Heine wuchs in einem jüdischen Haushalt auf er war 13 Jahre alt als Nappo

hier in Düsseldorf einen Zoo schon als Schüler begann er Gedichte zu schreiben beruflich sollte er eigentlich im Bankgeschäft arbeiten aber dafür hatte er kein Talent also versuchte er es erst mit einem eigenen Geschäft für Stoffe das aber bald pleite war dann begann er zu studieren er probierte es mit Jura und mit Geschichte besuchte verschiedene Vorlesungen mit 25 Jahren veröffentlichte er erste Gedichte es war eine aufregende Zeit für ihn er wechselte die Städte und die Universitäten er beendete sein Jurastudium und wurde Promo

auch an politischen Ereignissen und die Zensur mit der er in Deutschland leben musste führten Heinrich Heine nach Paris er wanderte nach Frankreich aus 

Comparing the translations

Since I don’t speak German I cannot judge how well it did. Clearly it didn’t capture all the words, but I wanted to see if what it returned made any sense anyway. So I put both in Google Translate and this is how they compare:

Translation of the original transcript:

When I think of Germany at night, I'm about to go to sleep. "Have you ever heard that phrase before? He is always quoted when there are problems in Germany. The sentence is by Heinrich Heine. He was one of the most important German poets. But do not worry: even if he was born on December 13, 1797, his lyrics are very up to date and relatively easy to read. You will like him!

Harry Heine grew up in a Jewish household. He was 13 years old when Napoleon moved in Dusseldorf. Even as a student, he began writing poetry. Professionally, he was supposed to work in banking, but he had no talent for that. So he first tried his own business for fabrics, which was soon broke. Then he began to study. He tried law and history, attended various lectures.

At the age of 25 he published his first poems. It was an exciting time for him. He changed cities and universities, he completed his law studies and received his doctorate. To improve his chances as a lawyer, he was baptized Protestant, so he turned his back on Judaism and became a Christian. Hence the new name: Christian Johann Heinrich Heine. Later he often regretted baptism.

When you read Heine's works, you will find that they are special. They are often critical, but often also ironic and humorous. He plays with the language. But he can also be very angry and condescending to write about people. His criticism also of political events and the censorship with which he had to live in Germany led Heinrich Heine to Paris. He emigrated to France.	

Translation of Google’s results:

I think of Germany in the night then I'm about to sleep Did you ever hear this sentence He is always quoted when there are problems in Germany The sentence comes from Heinrich Heine He was one of the most important German poets but do not be afraid he was born on December 13, 1797 his lyrics are very up to date and relatively easy to read you will like him Harry Heine grew up in a Jewish household he was 13 years old as Nappo

Here in Dusseldorf a zoo as a student he began to write poetry professionally he should actually work in the banking business but for that he had no talent so he first tried his own business for fabrics but soon broke and then began to study he tried it with Jura and with history attended various lectures at age 25 he published his first poems it was an exciting time for him he changed the cities and the universities he finished his law studies and became promo

also in political events and the censorship with which he had to live in Germany led Heinrich Heine to Paris he emigrated to France

The translations of the podcast are very close, especially the first part. It missed some sentences, but when you read the API output you can at least get a general understanding of what the text is about. It may not be a good read, and it’s not great if you’re interested in the details, but it’s probably good enough.

Conclusion

Speech-to-text can be very useful when backed by automated real-time translation. The Google Speech API supports real-time speech recognition as well, so it may be interesting to put the Translation API to use too and develop a tool for real-time translations, but that’s for another blog post.

Resources

git, devops, powershell, docker comments edit

Having lots of projects and assets stored on GitHub, I thought it might be a good idea to create periodic backups of my entire GitHub account (all repositories, both public and private). The beauty of it is that, since Git is open source, I can migrate my account anywhere and even host it on my own server on AWS.

Challenges

With the above goal in mind, I started to outline what’s necessary to achieve this task:

  1. Automate calling the GitHub API to get all repos, including private ones. (Of course, one should be aware of the GitHub API rate limit, which is currently 5,000 requests per hour. If you use up your entire allowance with scripts, you may not be able to use the API yourself. The good news is the responses tell you how many calls are left before you exceed your quota in the x-ratelimit-remaining HTTP header; see the quick check after this list.)
  2. Pull all the latest versions for all branches. Overwrite local versions in cases of conflict.
  3. Find a way to easily transfer a git repository (a compressed single-file version rather than individual files) in case transferring to another medium, such as an S3 bucket, is required.
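
As a quick check of that quota, GitHub’s /rate_limit endpoint reports the remaining calls, and requests to it don’t count against the limit. A minimal sketch ($headers is the Basic authentication header built in the Authorization section below):

$rateLimit = Invoke-RestMethod -Headers $headers -Uri "https://api.github.com/rate_limit"

# rate.remaining mirrors the x-ratelimit-remaining header
Write-Host ("API calls remaining this hour: {0}" -f $rateLimit.rate.remaining)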

With these challenges ahead, I first started looking into getting the repos from GitHub:

Consuming GitHub API via PowerShell

First, I shopped around for existing libraries for this task, such as PowerShellForGitHub by Microsoft, but it didn’t work for me. I couldn’t even get the samples on their wiki running; they kept giving a “cmdlet not found” error, so I gave up.

I found a nice video on Channel 9 about consuming REST APIs via PowerShell, which uses the GitHub API as a case study. It was perfect for me as my goal was to use the GitHub API anyway. And since this is a generic approach to consuming APIs, it can come in handy in the future as well. It’s quite easy using Basic authentication.

Authorization

The first step is to create a Personal Access Token with repo scope. (Make sure to copy the value before you close the page; there is no way to retrieve it afterwards.)

After obtaining the access token, I had to generate the authorization header as shown in the Channel 9 video:

$token = '<YOUR GITHUB ACCOUNT NAME>:<PERSONAL ACCESS TOKEN>'
$base64Token = [System.Convert]::ToBase64String([char[]]$token)
$headers = @{
    Authorization = 'Basic {0}' -f $base64Token
};

$response = Invoke-RestMethod -Headers $headers -Uri https://api.github.com/user/repos

This way I was able to get the repositories, including the private ones, but by default the API returns 30 records per page, so I had to traverse the pages.

Handling pagination

GitHub sends the next and the last page URLs in link header:

<https://api.github.com/user/repos?page=2>; rel="next", <https://api.github.com/user/repos?page=3>; rel="last"

The challenge here is that the Invoke-RestMethod response doesn’t seem to allow access to the response headers, which is a huge bummer as there is useful info in them, as shown in the screenshot:

GitHub response headers in Postman
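
One workaround is Invoke-WebRequest, which, unlike Invoke-RestMethod, does expose the response headers. A quick sketch (reusing the $headers variable from above):

$response = Invoke-WebRequest -Headers $headers -Uri "https://api.github.com/user/repos"

# Prints e.g. <https://api.github.com/user/repos?page=2>; rel="next", ...
Write-Host $response.Headers["Link"]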

At this point, I wanted to use PSGitHub, mentioned in the video, but as of this writing it doesn’t support getting all repositories. In fact, a note in it says “We need to figure out how to handle data pagination”, which made me think we are on the same page here (no pun intended!)

GitHub supports a page size parameter (e.g. per_page=50), but the documentation says the maximum value is 100. Although it is tempting to use that, as it would bring all my repos in one call and leave some room for future ones as well, I wanted to go with a more permanent solution. So I decided to keep requesting pages as long as there are objects being returned, like this:

$page = 1

Do
{
    # Request one page of repositories at a time
    $response = Invoke-RestMethod -Headers $headers -Uri "https://api.github.com/user/repos?page=$page"

    foreach ($obj in $response)
    {
        Write-Host ($obj.id)
    }

    $page = $page + 1
}
While ($response.Count -gt 0) # An empty page means we have gone past the last one

Now, in the foreach loop I of course have to do something with the repo information instead of just printing the id.

Cloning / pulling repositories

At this point I was able to get all my repositories. The GitHub API only handles account information, so now I needed to be able to run actual git commands to get my code.

First, I installed PowerShell on the Mac, which is quite simple, as specified in the documentation:

brew tap caskroom/cask
brew cask install powershell

With Git already installed on my machine, all that was left was running Git commands to clone or update each repo from the PowerShell terminal, such as:

git fetch --all
git reset --hard origin/master

Since this is just going to be a backup copy, I don’t want to deal with merge conflicts, so I just overwrite everything local.

Another approach could be deleting the old repo and cloning it from scratch, but I think it would be a bit wasteful to do that every time for each and every repository.

Putting it all together

Now that I have all the bits and pieces, I have to glue them together in a meaningful script that can be scheduled, and here it is:
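
(A minimal sketch; the backup folder below is a placeholder assumption, and cloning private repositories also needs git credentials, e.g. a credential helper.)

$token = '<YOUR GITHUB ACCOUNT NAME>:<PERSONAL ACCESS TOKEN>'
$backupRoot = '/Users/you/github-backup'

$base64Token = [System.Convert]::ToBase64String([char[]]$token)
$headers = @{ Authorization = 'Basic {0}' -f $base64Token }

$page = 1
Do
{
    $response = Invoke-RestMethod -Headers $headers -Uri "https://api.github.com/user/repos?page=$page"

    foreach ($repo in $response)
    {
        $repoPath = Join-Path $backupRoot $repo.name
        if (Test-Path $repoPath)
        {
            # Already cloned before: update it, discarding any local changes
            git -C $repoPath fetch --all
            git -C $repoPath reset --hard origin/master
        }
        else
        {
            # First time seeing this repo: clone it
            git clone $repo.clone_url $repoPath
        }
    }

    $page = $page + 1
}
While ($response.Count -gt 0)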

Conclusion and Future Improvements

This version accomplishes the basic task of backing up an entire GitHub account, but it can be improved in a few ways. Maybe I can post a follow-up article including those improvements. A few ideas that come to mind are:

  • Get Gists (private and public) as well.
  • Add an option to exclude repos by name or by type (e.g. get only private ones, or get all except repo123)
  • Add an option to export them to a “non-git” medium such as an S3 bucket using git bundle, which turns out to be a great tool to pack everything in a repository into a single file (see the example after this list)
  • Create a Docker image that contains all the necessary software (Git, PowerShell, backup script etc) so that it can be distributed without any setup requirements.
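
To illustrate the S3 export idea: git bundle packs a whole repository, with all branches and tags, into a single file, which can then be uploaded with the AWS Tools for PowerShell (a sketch; the bucket name is hypothetical):

# Inside a cloned repository: pack everything into one file
git bundle create ../myrepo.bundle --all

# Upload the bundle to S3 (Write-S3Object comes with AWS Tools for PowerShell)
Write-S3Object -BucketName my-github-backups -File ../myrepo.bundle -Key myrepo.bundle

# Restoring later is just a clone from the bundle file:
# git clone myrepo.bundle myrepo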

Resources

aws, s3, devops, mysql, powershell comments edit

I have an application that uses a MySQL database. Because of cost concerns it’s running on an EC2 instance instead of RDS. As it’s not a managed environment, the burden of backing up my data falls on me. This is a small step-by-step guide that details how I’m backing up my MySQL database to AWS S3 with PowerShell.

PART 01 - AWS SETUP

  1. Create a bucket (e.g. “application-backups”) on AWS S3 using the AWS Management Console.
  2. Create a new IAM user (e.g. “upload-backup-to-s3”).
  3. Create a new policy using the management console. The policy will only grant enough permissions to put objects into a single S3 bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::xxxxxxx"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::xxxxxxx/*"
            ]
        }
    ]
}

In the S3 bucket properties, copy the ARN and replace the x’s above with it.

  4. Customize the S3 bucket lifecycle settings to determine how long you want the old backups retained in your bucket. In my case I set them to expire after 21 days, which I think is a large enough window for relevant database backups. I probably wouldn’t restore anything older than 21 days anyway.

  5. [Optional - For email notifications] Create an SES user by clicking SES -> SMTP Settings -> Create My SMTP Credentials.

Make sure you don’t make the mistake I made, which was creating an IAM user with a policy that can send emails. In the SMTP settings page there’s a note right below the button:

Your SMTP user name and password are not the same as your AWS access key ID and secret access key. Do not attempt to use your AWS credentials to authenticate yourself against the SMTP endpoint.

When you click the button it basically creates an IAM user, but its SMTP credentials are not interchangeable with IAM credentials: the SMTP secret key is 44 characters, whereas the IAM user I had created had a secret key of 40 characters. Anyway, the bottom line is: in order to be able to send emails via SES, create the user as described above and all should be fine.

PART 02 - POWERSHELL SCRIPT

  1. Download and install AWS Tools for Windows PowerShell (https://aws.amazon.com/powershell/)

  2. Create a script as shown below. In a nutshell, this is what the script does:

a. Execute mysqldump command (Comes with MySQL Server)

b. Zip the backup file (which reduces the size significantly)

c. Upload the zip file to S3 bucket

d. Send a notification email using SES

e. Delete the local files

This is the full script:
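
A minimal sketch of such a script (paths, names and credentials are placeholder assumptions; AWS credentials for Write-S3Object are expected to be configured separately, e.g. with a credentials profile):

$timestamp = Get-Date -Format 'yyyyMMdd-HHmmss'
$dumpFile = "C:\Backups\mydb-$timestamp.sql"
$zipFile = "$dumpFile.zip"

# a. Dump the database (mysqldump ships with MySQL Server)
& 'C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe' --user=backup_user --password=SECRET --result-file=$dumpFile mydb

# b. Zip the backup file (Compress-Archive requires PowerShell 5+)
Compress-Archive -Path $dumpFile -DestinationPath $zipFile

# c. Upload the zip file to the S3 bucket
Write-S3Object -BucketName application-backups -File $zipFile -Key (Split-Path $zipFile -Leaf)

# d. Send a notification email over the SES SMTP endpoint (Part 01, step 5)
$password = ConvertTo-SecureString 'SMTP_PASSWORD' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('SMTP_USERNAME', $password)
Send-MailMessage -SmtpServer 'email-smtp.eu-west-1.amazonaws.com' -Port 587 -UseSsl -Credential $cred -From 'backups@example.com' -To 'me@example.com' -Subject "MySQL backup $timestamp uploaded to S3"

# e. Delete the local files
Remove-Item $dumpFile, $zipFile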

  • For SQL Server databases, there’s a PowerShell cmdlet called Backup-SqlDatabase, but for MySQL I think the most straightforward way is using mysqldump, which comes with MySQL Server.

  • For password-protected zip files, you can take a look at this article (I haven’t tried it myself)

Final step: Schedule the script by using Windows Task Scheduler

This is quite straightforward. Just create a task and schedule it for how often you want to back up your database.

In the Actions section, enter “powershell” as the “program/script” and the path of your PowerShell script as the “argument”, and that’s it.
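
Alternatively, the task can be registered from PowerShell itself (a sketch; the script path and schedule are assumptions):

$action = New-ScheduledTaskAction -Execute 'powershell' -Argument '-File C:\Scripts\Backup-MySqlToS3.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'MySQL backup to S3' -Action $action -Trigger $trigger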

Resources

synology, music, streaming comments edit

I’ve been a Spotify customer for quite a long time, but recently realized that I wasn’t using it enough to justify 10 quid per month. Amazon made a great offer of a 4-month subscription for only £0.99 and I’m trying that out now, but the quality of the service hasn’t impressed me so far. Then it dawned on me: I already have lots of MP3s from my old archives, I have a fast internet connection and I have a Synology. Why not just build my own streaming service?

One device to rule them all: Synology

Every day I grow fonder of my Synology and regret all the time I didn’t utilize it fully.

For streaming audio, we need server and client software. The server side comes with Synology: Audio Station.

The Server

Using Synology Audio Station is a breeze. You simply connect to the Synology over the network and copy your albums into the music folder. Try to have cover art named “cover.jpg” so that your albums show up nicely in the user interface.

The Client

Synology has a suite of iOS applications which are available in the Apple App Store. The one I’m using for audio streaming is called DS Audio.

Using Synology’s Control Panel, you can set up a specific user for listening to music only. This way, even if that account is compromised, the attacker will only have read-only access to your music library.

Connecting to the server

There are two ways of connecting to your server:

  1. Dynamic DNS
  2. Quick Connect (QC)

Dynamic DNS is built-in functionality, but you’d need a Synology account. Basically, your Synology pings their server so that it can detect IP changes.

QC is the way I chose to go. It’s a proprietary technology by Synology. The nice thing about QC is that when you are connected to your local network it uses the internal IP, so it doesn’t use mobile data. When you’re outside, it uses the external IP and connects over the Internet.

Features

  • You can download all the music you want from your own library without any limitations. There’s no limit set for manual downloads. For automatic downloads you can choose anything from no caching to caching everything, or a fixed size from 250MB to 20GB.
  • When you’re offline you don’t need to log in. On the login form there’s a link to Downloaded Songs, so you can skip logging in and go straight to your local cache.
  • You can pin your favourite albums to home screen.
  • Creating a playlist or adding songs to playlists is cumbersome (on iPhone at least):
    • Select a song and tap on … next to the song
    • Tap Add. This will add your song to the play queue.
    • Tap on Play button on top right corner.
    • Tap playlist icon on top right corner.
    • Tap the same icon again which is now on top left corner to go into edit mode
    • Now tap on the radio buttons on the left of the songs to select.
    • When done, tap on the icon on the bottom left corner. This will open the Add to Playlist screen (finally!)
    • Here you can choose an existing playlist or create a new one by tapping the + icon.

Considering how easy this can be done on Spotify client this really needs to be improved.

  • In the library or Downloaded Songs sections, you can organise your music by Album, Artist, Composer, Genre and Folder. Of course in order for Artist/Composer/Genre classification to work you have to have your music properly tagged.
  • The client has a Radio feature which has built-in support for SHOUTCast:

SHOUTCast

  • You can rate songs. There’s a built-in Top Rated playlist. By rating them you can play your favourite songs without needing them to be added to playlists which is a neat feature.

Conclusion

I think having full control over my own music is great, and even though the DS Audio client has some drawbacks it’s worth it, as it’s completely free. You can also just set it up as a secondary streaming service in addition to your favourite paid one, so that you have a backup solution just in case.

Resources

dev comments edit

I have been a long-time Windows user. About 2 years ago I bought a MacBook, but it never became my primary machine. Until now! I finally decided to steer away from Windows and use the MacBook for development and day-to-day tasks.

Tipping Point

One morning I woke up and found out that Windows had restarted itself again, without asking me. At the time I had a ton of open windows and a VMware virtual machine running, but none of that stopped Windows. It just abruptly shut down the VM, which was very annoying, and this wasn’t even the first time it had happened. So I decided to migrate completely to Mac. To give myself a better understanding of what it took and what is missing, I decided to compile this post.

Migration

I thought it would be a painful process, but it turns out it was quite straightforward. Here’s a comparison of some key applications I use:

Email: Mailbird vs. Mail

On Windows I used to use Mailbird as my email client. It allows managing multiple accounts, has a nice GUI and works fine. I was wondering if there would be an equivalent on the Mac and how much it would cost me (I paid about £25 for a lifetime Mailbird license, but apparently it’s now free). I didn’t have to look far: the built-in Mail application does the job very well. Adding a new Google account is a breeze.

MarkdownPad 2 vs. MacDown

I like MarkdownPad 2 on Windows, but it has its flaws: the live preview constantly crashes, and the free version only allows 4 open files. On the Mac I’m now using MacDown, which has a beautiful interface and is completely free.

Git Extensions vs. SourceTree

I do like Git Extensions and it’s one of the programs I wish I had on Mac but SourceTree by Atlassian seems to do the job.

Storage: Google Drive and Synology

Both have web interfaces and Google Drive has desktop clients for both Mac and Windows so no issues in migrating there.

PDF Ops

On Windows, I like Sumatra PDF, which is very clean and bloatware-free. On the Mac, there is no need to install anything. The default PDF viewer is perfect. It even handles PDF merge and editing operations.

Virtual Desktops

I love using virtual desktops on the Mac. Switching desktops is so easy and intuitive with a three-finger swipe. Windows 10 has support for virtual desktops now, but switching is not as fluent, so using them never became a habit.

Visual Studio

Now, this is the only application I cannot run on the Mac. Microsoft has recently released Visual Studio for Mac, and they also have Visual Studio Code, which is a nice code editor, but they are both stripped-down versions. I don’t know if .NET Core will take off, but currently I use the full-blown .NET Framework, which only runs under Windows, so for development purposes I need to keep the Windows machine alive.

After the migration

I have absolutely no regrets about switching over. I love the MacBook. The keyboard is much better than my Asus’s and the OS is great. The Mac has 16GB of RAM but outperforms the Asus with 24GB (both have Core i7 processors and SSD drives).

Here are some more annoying things that used to bug me in the past about Windows:

  • Quite often I cannot delete a folder that used to have a video in it because the Thumbs.db file is in use.
  • I couldn’t change settings to disable Thumbs.db completely because Windows 10 Home edition didn’t allow me to do that.
  • I couldn’t upgrade to Windows 10 Pro even though I had a license for Windows 8.1 Pro. Trying to resolve the licensing issue I found myself going in circles and nothing worked.

Mac cons

There are a few things that I don’t like about Mac or miss from Windows:

  • On Windows, quite often I need to create a blank text file, then double-click and edit it. In Finder, you can only create a new folder. Apparently some scripting is required to overcome this as shown in the resources section below.
  • iCloud seems to be forced on me. I don’t want to use it, I don’t want to see it, but I cannot get rid of it. Trying to disable it is just confusing. I’ve now moved everything to a different folder that it’s not watching by default and am trying to ignore it completely.
  • Moving windows from display to display is hard, especially in my case, as I have a 15.4” laptop screen and two external monitors at 27” and 40”. Since the size difference between these is huge, dragging a large window from the 40” monitor to the 15.4” screen messes things up because the window doesn’t auto-resize and I cannot even reach the top of the window to resize it. But now that I’m using virtual desktops more frequently and using the 40” for multiple applications side by side, this is not as big of a problem.

Going back?

There’s a lot to learn on the Mac, but I don’t think I’ll be going back anytime soon. I’m looking into virtualizing the Windows machine now so that I can decommission the laptop. I already converted my old Windows desktop into a Linux server, so I would have no problem with using the laptop for other purposes.

Microsoft made flop after flop starting with Windows 8, and now they have lost another user, but they don’t seem to care. If they did, they wouldn’t disrespectfully keep restarting my machine, killing all my applications and VMs!

Resources

dev comments edit

Nowadays many people use their phones as their primary web browsing device. As mobile usage is ubiquitous and still increasing, testing web applications on mobile platforms is becoming more important.

Chrome has a great emulator for mobile devices but sometimes it’s best to test your application on an actual phone.

If your application is the default site that you can access via IP address, you’re fine. The problem is that if you have multiple domains to test, at some point you’ll need to enter a domain name in your phone’s browser.

Today I bumped into such an issue, and my solution involved one of my favourite devices in my household: the Synology DS214Play.

Local DNS Server on Synology

Step 01: First, I installed the DNS Server package by simply searching for “DNS” and clicking Install in Package Center.

Step 02: Then, I opened the DNS Server settings and created a new Master Zone. I simply entered the domain name of my site, which is hosted on IIS on my development machine, and the local network IP address of the Synology as the Master DNS Server.

Step 03: Next, I needed to point to the actual web server. In order to do that I created an A record with the IP address of the local server a.k.a. my development machine.

Step 04: For all the domains that my DNS server didn’t know about (which is basically everything else!) I needed to forward the requests to “actual” DNS servers. In my case I use Google’s DNS servers so I entered those IPs as forwarders.

Step 05: At this point the Synology DNS server is pointing to the web server, and the web server is hosting the website. All that’s left is pointing the client’s (phone or laptop) DNS setting to the local DNS server.

Step 06: Now that it’s all set up, I could access my development machine using a locally-defined domain name from my phone:

Conclusion

Another simple alternative to achieve this on Windows laptops is to edit the hosts file under the C:\Windows\System32\drivers\etc folder, but when you have multiple clients in the network, e.g. MacBooks and phones, it’s simpler to point them all to the DNS server rather than editing each and every single device. And it’s more fun this way!
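
For reference, a hosts file entry mapping a hypothetical development domain to a local web server looks like this:

192.168.1.20    www.mydevsite.local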

Resources

dev comments edit

I like playing around with PDFs, especially when planning my week. I have my daily plans and often need to merge them into a single PDF to print easily. Now that I’ve migrated to the Mac for daily use, I can merge them very easily from the command line, as I described on my TIL site here.

The Mac’s PDF viewer is great, as it also allows you to simply drag and drop one PDF into another to merge them. Windows doesn’t have this kind of nicety, so I had to develop my own application to achieve it. I was planning to add more PDF operations, but since I’m not using it anymore I don’t think that will happen anytime soon, so I decided to open up the source.

It’s a very simple application anyway, but I hope it helps someone save some time.

Implementation

It uses the iTextSharp NuGet package to handle the merge operation:

public class PdfMerger
{
    public string MergePdfs(List<string> sourceFileList, string outputFilePath)
    {
        using (var stream = new FileStream(outputFilePath, FileMode.Create))
        {
            using (var pdfDoc = new Document())
            {
                var pdf = new PdfCopy(pdfDoc, stream);
                pdfDoc.Open();
                foreach (string file in sourceFileList)
                {
                    // Append each source document to the output
                    var reader = new PdfReader(file);
                    pdf.AddDocument(reader);
                    reader.Close(); // release the source file handle
                }
            }
        }

        return outputFilePath;
    }
}

It also uses Fluent Command Line Parser, another of my favourite NuGet packages, to take care of the input parameters:

var parser = new FluentCommandLineParser<Settings>();
parser.Setup(arg => arg.RootFolder).As('d', "directory");
parser.Setup(arg => arg.FileList).As('f', "files");
parser.Setup(arg => arg.OutputPath).As('o', "output").Required();
parser.Setup(arg => arg.AllInFolder).As('a', "all");
    
var result = parser.Parse(args);
if (result.HasErrors)
{
    DisplayUsage();
    return;
}

var p = new Program();
p.Run(parser.Object);

The full source code can be found in the GitHub repository (link down below).

Resources

dev comments edit

Slack is a great messaging platform and it can integrate very easily with C# applications.

Step 01: Enable incoming webhooks

First, go to the Incoming Webhooks page and turn on webhooks if they aren’t already enabled.

Step 02: Create a new configuration

You can select an existing channel or user to post messages to, or you can create a new channel. (You may need a refresh for the new one to appear in the list.)

Step 03: Install the Slack.Webhooks NuGet package

In the Package Manager Console, run:

Install-Package Slack.Webhooks

Step 04: Write some code!

var url = "{Webhook URL created in Step 2}";

var slackClient = new SlackClient(url);

var slackMessage = new SlackMessage
{
    Channel = "#general",
    Text = "New message coming in!",
    IconEmoji = Emoji.CreditCard,
    Username = "any-name-would-do"
};

slackClient.Post(slackMessage);

Done

That’s it! Very easy and painless integration to get real-time desktop notifications.

Some notes

  • Even though you choose a channel while creating the webhook, in my experience you can use the same one to post to different channels. You don’t need to create a new webhook for each channel.
  • Username can be any text basically. It doesn’t need to correspond to a Slack account.
  • The first time you send a message with a username, it uses the emoji you specify in the message. You can leave it null, in which case it uses the default. On subsequent posts, it uses the same emoji for that user even if you set a different one.

Resources

dev comments edit

HTTP/2 is a major update to the HTTP/1.x protocol, and I decided to spare some time to get a general idea of what it’s all about.

Here are my findings:

  • It’s based on SPDY (a protocol developed by Google, currently deprecated)
  • It uses the same methods, status codes etc., so it is backwards-compatible, and the main focus is on performance
  • The problem it addresses is HTTP/1.x requiring a TCP connection per request.
  • Key differences:
    • It is binary rather than text.
    • It can use one connection for multiple requests
    • Allows servers to push responses into browser caches. This way the server can start sending assets before the browser parses the HTML and requests each of them (images, JavaScript, CSS etc.)
  • The protocol doesn’t have built-in encryption but currently Firefox, Internet Explorer, Safari, and Chrome agree that HTTPS is required.
  • There will be a negotiation process between the client and server to select which version to use
  • WireShark has support for it but Fiddler doesn’t.
  • As speed is the main focus, it’s especially important for CDNs to support it. In September 2016, AWS announced that CloudFront now supports HTTP/2. For existing distributions it needs to be enabled explicitly by updating the settings.

    AWS CloudFront HTTP/2 Support

  • On the client side it looks like it’s been widely adopted and supported. CanIUse.com also confirms that it’s only allowed over HTTPS on all browsers that support it.

    HTTP/2 Browser Support

What Does It Look Like on the Wire

As it’s binary, I was curious, as a developer, to see what the actual bits look like. Normally it’s easy to inspect HTTP requests/responses because it’s just text.

Apparently the easiest way to do it is WireShark. First, I had to enable session key logging by creating a user environment variable in Windows:

Windows environment variable to capture TLS session keys

and pointing WireShark to that log (Edit -> Preferences -> Protocols -> SSL).
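
The variable in question is SSLKEYLOGFILE; both Chrome and Firefox write their TLS session keys to the file it points to. It can also be set from PowerShell (the log path is an assumption):

[Environment]::SetEnvironmentVariable('SSLKEYLOGFILE', "$env:USERPROFILE\sslkeys.log", 'User')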

This is a very neat trick, and it can be used to analyse all encrypted traffic, so it serves a broader purpose. After restarting the browser and WireShark I was able to see the captured session keys, and by starting a new capture in WireShark I could see the decrypted HTTP/2 traffic.

WireShark HTTP/2 capture

It’s hard to make sense of everything in the packets but I guess it’s a good start to be able to inspect the wire format of the new protocol.

Resources

ios, swift comments edit

I have a drawer full of gadgets that I bought at one point in time with hopes and dreams of magnificent projects and never even touched!

Some time ago I started a simple spreadsheet to help myself with impulse buys. The idea was that before I bought something, I had to put it in that spreadsheet, and it had to wait at least 7 days before I allowed myself to buy it.

After 7 days strange things started to happen: in most cases I realised I had lost my appetite for that shiny new thing that I once thought was a definite must-have!

I kept listing all the stuff, but it quickly became hard to manage with just a spreadsheet.

Sleep On It

The idea behind the app is to automate and “beautify” that process a little bit. It has one Shopping Cart in which the items have waiting periods.

It seemed wasteful to me to do nothing during the waiting period. After all, it’s not just about dissuading myself from buying new items; I should use that time to make informed decisions about the stuff I’m planning to buy. That’s why I added the product comparison feature.

The shopping cart has a limited size; otherwise you would be able to add anything whenever you thought of something, gaming the system so its waiting period would start (well, at least that’s how my mind works!). If your cart is full you can still add items to the wish list and start reviewing products. The wish list is basically a backlog of items; this way at least you won’t forget about that thing you saw in your favourite online marketplace. Once you clear up some space in your cart, either by waiting out items or deleting them permanently, you can transfer items from the wish list to the cart and officially kick off the waiting period.

I have a lot of ideas for improving it, but you gotta release at some point, and I think it has enough to get me started. I hope someone else finds it useful too.

If you’re interested in the app please contact me. I might be able to hook you up with a promo code.

Resources