docker: devops

In this post I’d like to cover a wider variety of Docker commands and features. My goal is to be able to use it as a quick reference while working with Docker without duplicating the official documentation. It consists of notes I took at various times.

Docker commands

  • docker ps: Shows the list of running containers
  • docker ps -a: Shows all containers including the stopped ones
  • docker images: Shows the list of local images
  • docker inspect {container id}: Gets information about the container. The output can be filtered by providing the --format flag.

    Example:

      Command: docker inspect --format="{{.Image}}" abcdabcdabcd
    
      Output: sha256:abcdabcdabcdabcdabcdabcd.....
    

    The data is in JSON format so it can be filtered by piping to jq.

    Example:

      docker inspect 7c1297c30159 | jq -r '.[0].Path'
    
  • docker rm {container id or name}: Deletes the container

    The run command accepts an --rm parameter, which deletes the container right after it finishes.

    To delete all the containers the following command can be used:

      docker rm $(docker ps -aq) 
    

    where docker ps -aq only returns the container ids.

  • docker rmi {image id or name}: Deletes the image (provided that there are no containers created off that image)

    To delete all images the following command can be used:

      docker rmi $(docker images -q)
    

    This command is equivalent to

      docker image rm {image name or id}
    
  • docker build: Creates a new image from Dockerfile

    It is equivalent to

      docker image build ...
    

    It can be used to build images straight from GitHub:

    Example:

      docker build https://github.com/docker/rootfs.git#container:docker
    
  • docker history {image name}: Shows the history of an image

  • docker run -it {image name} bash: Runs a container in interactive mode with bash shell

    Same as:

      docker container run -it ...
    
  • docker start/stop: Starts/stops an existing container

    If a command was specified when the container was first run, the same command is used when the container is started again.

    Example:

      docker run -it {image name} bash
    

    This creates a container in interactive mode

    We can detach from the container and leave it running in the background by pressing Ctrl-P followed by Ctrl-Q (typing exit instead terminates the shell and stops the container). We can then stop the container:

      docker stop {container id}
    

    The stop command sends a SIGTERM to the process with PID 1 in the container so it can shut down gracefully. Docker gives the container 10 seconds to stop before it forcefully kills it.
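
    The grace period can be adjusted with the -t flag, for example:

      docker stop -t 30 {container id}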

    We can start it again with the interactive flag to get bash back: docker start -i {container id}

    This runs bash automatically because that was the command we used while creating the container.

  • docker run -p {host port}:{container port}: Forwards a host port to container

  • docker run -v {host path}:{container path}: Maps a path on host to a path on container.

    If the container path already exists, its contents are hidden by the contents of the host path. In other words, the container will see the host's files when accessing that path instead of its local files.

  • docker diff: Lists the differences between the container and the base image. Shows the added, deleted and changed files.

  • docker commit: Creates a new image based on the container

  • docker logs {container id}: Shows the logs of the container.

    Docker's logging mechanism listens to stdout and stderr. We can link other log files to stdout and stderr so that we can view them using docker logs {container id}

    Example:

      RUN ln -sf /dev/stdout /var/log/nginx/access.log && ln -sf /dev/stderr /var/log/nginx/error.log
    
  • docker network:

    docker network create {network name}: Creates a new network

    docker network ls: Lists all networks

    docker network inspect {network name}: Gets information of a specific network

    docker run --network {network name}: Creates a container inside the specified network

    Containers can address each other by name within the network.
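
    For example (a quick sketch; the network, container and image names are placeholders):

      docker network create mynet
      docker run -d --name web --network mynet nginx
      docker run --rm --network mynet alpine ping -c 1 web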

  • docker container exec: Runs a command inside a running container. It can also be used to connect to the container by starting a shell

    Example:

      docker container exec {container id} sh

    It starts a new process inside the container.

Docker Compose

A tool for defining and running multi-container Docker applications.

  • Instead of stand-alone containers, “services” are defined in “docker-compose.yml” file
  • docker-compose up command will create the entire application with all the components (containers, networks, volumes etc)
  • docker-compose config: Validates and prints the resolved docker-compose.yml file.
  • docker-compose up -d: Starts the services in detached mode. The names of the objects it creates (networks, volumes, containers etc.) are prefixed with the current directory name.
  • docker-compose ps: Shows the full stack with names, commands, states and ports
  • docker-compose logs -f: Shows logs of the whole stack. tails and shows the updates.
  • docker-compose down: Stops and removes the containers and networks. Volumes are NOT deleted.

Environment variables can be used in docker-compose.yml files in ${VAR_NAME} format.
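
A minimal sketch (assuming TAG and HOST_PORT are set in the shell or in an .env file next to the compose file):

      version: '3'
      services:
        web:
          image: "myapp:${TAG}"
          ports:
            - "${HOST_PORT}:80"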

  • docker-compose -f {filename}: Used to specify non-default config files.
  • Multiple YML files can be used and can be extended using “extends:” key in the following format:

      extends: 
          file: original.yml
          service: service name
    

General Concepts

Container: Isolated area of an OS with resource usage limits applied. VMs virtualize hardware resources, containers virtualize OS resources (file system, network stack etc)

Namespaces: Isolation (like hypervisors)

Control groups: Grouping objects and setting limits. In Windows they are also called Job Objects.

Daemon: Exposes the REST API.

containerd: Handles container lifecycle operations such as start and stop (on Windows this role is played by the Host Compute Service). It hands container creation off to the OCI runtime.

Runtime: OCI-compliant runtime (runc on Linux). runc creates the container and then exits.

containerd cannot create containers by itself. That capability sits in the OCI layer.

Windows containers come in two kinds:

  • Native (process) containers
  • Hyper-V containers

  • Layering
    • Base Layer: At the bottom (OS Files & objects)
    • App Code: Built on top of base layer

    Each layer is located under /var/lib/docker/aufs/diff/{layer id}

    Layer id is different from the hash value of the layer

  • Images live in registries. Docker defaults to Docker Hub

  • Docker Trusted Registry comes with Docker Enterprise Edition

  • Official images can be addressed as docker.io/image name or just the image name

  • Naming convention: Registry / Repo / Image (Tag)

  • Registry: docker.io is the default
  • Tag: latest is the default

  • Image layers are read-only. A container adds its own thin writable layer on top of the image's read-only layers. Whenever the container needs to change a file that lives in a read-only layer, it uses a copy-on-write operation: the file is copied up into the container's writable layer and the change is applied to that copy.

  • CMD vs ENTRYPOINT: arguments passed at runtime override the CMD defaults, whereas with ENTRYPOINT the runtime arguments are appended to it (see the sketch below).
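
    A minimal sketch to illustrate the difference (the image and commands are just placeholders):

      FROM alpine
      ENTRYPOINT ["ping"]
      CMD ["localhost"]

    Running the image with no arguments executes ping localhost; running it with docker run {image} example.com executes ping example.com, because the runtime argument replaces the CMD default and is appended to the ENTRYPOINT.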

  • Dockerfile commands
    • FROM is always the first instruction
    • Good practice to label with maintainer email address
    • RUN: Executes command and creates a layer
    • COPY: Copies files into image in a new layer
    • EXPOSE: Specifies which port the container listens on. Doesn’t create a new layer; adds metadata.
    • ENTRYPOINT: Specifies what to run when a container starts.
    • WORKDIR: Specifies the default directory. Doesn’t create layer.
  • Multi-stage builds
    • Helps to remove unnecessary packages / files from the final production image.
    • It can copy specific files from intermediate build stages (see the sketch below)
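
    A minimal multi-stage Dockerfile sketch (image names and paths are made up):

      # Build stage: compile the application
      FROM golang:1.12 AS builder
      WORKDIR /src
      COPY . .
      RUN go build -o app .

      # Final stage: copy in only the compiled binary, leaving build tools behind
      FROM alpine
      COPY --from=builder /src/app /usr/local/bin/app
      ENTRYPOINT ["app"]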

Notes

  • A container will run one process only
  • If the main process exits or puts itself into the background, the container stops right away. The command needs to be a long-running foreground process (an interactive shell, a web server running in the foreground, etc.).
  • As of v1.12 Docker runs in either single-engine mode or swarm mode.
  • Recent Docker versions support logging drivers, which forward logs to external systems such as syslog or Splunk (configured via --log-driver and --log-opt). See the example below.
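
    A quick sketch (the syslog address is a placeholder):

      docker run -d --log-driver=syslog --log-opt syslog-address=udp://192.168.1.10:514 nginx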

dev: asp.net, dotnet core

Developing projects is fun and a great way to learn new stuff. The thing is, everything moves so fast that you constantly need to maintain the projects as well. In that spirit, I decided to re-write my old AngularJS DomChk53.

First, I developed an API. This way I won’t have to maintain my AWS Lambda functions. Even though hardly anybody uses them, keeping a public API open still poses risks and maintenance hassle. Now everybody can run their own system locally using their own AWS accounts.

API Implementation

Full source code of the API can be found in the DomChk53 repo under the aspnet-api folder. There’s not much benefit in covering the code line by line, but I want to talk about some of the things that stood out in one way or another:

CS5001: “Program does not contain a static ‘Main’ method” error

I started this API with the intention of deploying it to Docker so right after I created a new project I tested Docker deployment but it kept giving the error above. After some searching I found that the Dockerfile was in the wrong folder! I moved it one level up as suggested in this answer and it worked like a charm!

Reading config values

I appreciated the ease of using configuration files. An IConfiguration provider is provided out of the box, which handles the appsettings.json config file.

So for example I was able to inject it easily:

public TldService(IConfiguration configuration, IFetchHtmlService fetchHtmlService)
{
    this.configuration = configuration;
    this.fetchHtmlService = fetchHtmlService;
}

and use it like this:

var awsDocumentationUrl = configuration["AWS.DocumentationURL"];
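
For reference, the corresponding entry in appsettings.json would then be a flat key containing a dot, roughly like this (the value is a placeholder):

{
  "AWS.DocumentationURL": "{url}"
}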

Using dependency injection

ASP.NET Core Web API has built-in dependency injection so when I needed to use it all I had to do was register my classes in ConfigureServices method:

// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_1);

    services.AddTransient<ITldService, TldService>();
    services.AddTransient<IFetchHtmlService, FetchHtmlService>();
}

Documentation

Adding Swagger documentation was a breeze. I just followed the steps in the documentation and now I have pretty documentation. More on that in the usage section below.

AWS Implementation

Even though the code will run locally in a Docker container, you still need to set up an IAM user with the appropriate permissions. So make sure the policy below is attached to the user:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "route53domains:CheckDomainAvailability",
            "Resource": "*"
        }
    ]
}

Usage

A nice thing about Swagger is that it can be used as a full-blown API client as well. So instead of developing a dummy client I just used the Swagger UI for simple domain availability enquiries, at least until I develop a better frontend of my own. For now it works.

To use Swagger, simply run the Domchk53.API application and change the URL to https://localhost:{port number}/docs, port number being whatever your local web server is running on.

curl usage:

Windows:

curl -i -X POST -H "Content-Type: application/json" -d "{\"domainList\": [\"volkan\"], \"tldList\": [\"com\", \"net\"]}" https://localhost:5001/api/domain

Mac:

curl --insecure -X POST -H "Content-Type: application/json" -d '{"domainList": ["volkan"], "tldList": ["com", "net"]}' https://localhost:5001/api/domain

Example Test data:

This is just a quick reference to be used as template:

{
  "domainList": ["domain1", "domain2"],
  "tldList": ["com", "net"]
}
{ "domainList": ["volkan"], "tldList": ["com", "net"] }

dev: javascript, tdd, unit testing

As applications get more complex, they come with more dependencies. Mocking helps us isolate the subject under test so that our unit tests can run without any external dependencies. In this example, I’d like to show a simple mocking example using Jasmine and Karma in an Angular app.

Case study: Mocking an HTTP dependency

In this example the service I want to test is called TldService. It basically gets raw HTML from a URL and extracts some data using regex.

It looks something like this

export class TldService {

  constructor(private fetchHtmlService: FetchHtmlService) { }

  getAllSupportedTlds(): string[] {
    const rawHtml = this.fetchHtmlService.getRawHtml();
    const supportedTlds = this.parseHtml(rawHtml);
    return supportedTlds;
  }

  private parseHtml(html: string): string[] {
    const pattern = /some-regex-pattern/g;
    const regEx = new RegExp(pattern);
    let tldList = [];

    let match = regEx.exec(html);
    console.log(match);

    while (match !== null) {
      tldList.push(match[1]);
      match = regEx.exec(html);
    }

    return tldList;
  }
}

This service depends on FetchHtmlService, which does the HTML-fetching part. The nice thing about injecting this dependency is that we can replace it with a fake one while testing. This way we can test TldService without even having a working implementation of the dependency.

import { TestBed } from '@angular/core/testing';
import { TldService } from './tld.service';
import { FetchHtmlService } from './fetch-html.service';

describe('TldService', () => {
  beforeEach(() => TestBed.configureTestingModule({

  }));

  it('should be created', () => {
    const service: TldService = TestBed.get(TldService);
    expect(service).toBeTruthy();
  });

  it('should parse html and extract TLD list', () => {
    const fetchHtmlService: FetchHtmlService = new FetchHtmlService();
    spyOn(fetchHtmlService, 'getRawHtml').and.returnValue('<a href="registrar-tld-list.html#academy">.academy</a>');
    const service: TldService = new TldService(fetchHtmlService);
    expect(service.getAllSupportedTlds().length).toBe(1);
  });
});

In the second test above we are creating a new FetchHtmlService and overriding the getRawHtml function's behaviour using Jasmine’s spyOn method.

That’s it! Now we don’t need a network connection to make actual calls while testing our code. We can develop and test our service in isolation independent from the dependency!

dev: devops, git

When you work on the same repository on multiple computers it’s easy to forget to push code and leave some changes local. To avoid that, I decided to develop a tiny project that loops through all the folders, pulls remote changes, then commits and pushes all the local changes.

The script itself is very simple. It first pulls the latest and then pushes the local changes:

Function Sync-Git {
    git fetch --all
    git pull
    git add .
    git commit -m 'git-sync'
    git push
}

$rootFolder = '/rootFolder'

Write-Host "Changing directory to root folder: $rootFolder"
Set-Location $rootFolder
Get-Location | Write-Host

Get-ChildItem -Directory | ForEach-Object { 
    Set-Location -Path $_ 
    Write-Host $_
    Sync-Git
    Set-Location $rootFolder
}

Write-Host "Done"

The Dockerfile is based on Microsoft’s PowerShell image and only needs Git installed on it:

FROM mcr.microsoft.com/powershell

RUN apt-get update && apt-get install -y git
RUN git config --global user.email "you@example.com"

RUN echo "    IdentityFile ~/.ssh/id_rsa" >> /etc/ssh/ssh_config
RUN echo "    StrictHostKeyChecking no" >> /etc/ssh/ssh_config

RUN mkdir /home/git-sync
WORKDIR /home/git-sync

COPY ./git-sync.ps1 .

ENTRYPOINT ["pwsh", "git-sync.ps1"]

Build it at the root folder:

docker build . -t {image name}

And run it as:

docker run --rm -d -v /host/path/to/root/git/folder:/rootFolder -v ~/.ssh:/root/.ssh:ro {image name}

Or pull my image from Docker Hub:

docker pull volkanx/git-sync

Issues

fatal: unable to auto-detect email address

Git didn’t allow me to commit my changes without an email address, so I had to add the command below to the Dockerfile:

git config --global user.email "you@example.com"

It may not look great, but I don’t care too much about the email addresses in commits anyway, so I’m fine with it.

“WARNING: UNPROTECTED PRIVATE KEY FILE!” error

This might also be an issue in which case the solution is simple (assuming the name of the private key file is id_rsa):

chmod 600 ~/.ssh/id_rsa 

Bad configuration option

Make sure to have the following options in the config file in your .ssh directory:

Host *
  StrictHostKeyChecking no
  AddKeysToAgent yes
  IgnoreUnknown UseKeychain
  UseKeychain yes
  IdentityFile ~/.ssh/id_rsa

dev: javascript, tdd, unit testing

There are so many tools and frameworks flying around in JavaScript that if you’re a beginner it’s easy to get lost.

First things first: Identify what’s what

Basically we need the following key components to write and run unit tests:

  • Assertions: We run a method on the subject under test and compare the result with a known expected value
  • Mocking: We need to mock external dependencies (network services, databases, file system etc.) so that we can reliably test the subject under test and the tests can run everywhere without requiring any credentials or permissions.
  • Test runner: Our tests are just blocks of code. Something needs to make sense of the unit tests we wrote, execute them and show the results (fail/pass)

Choosing the weapons: Mocha and Chai

In this example I’m going to use Mocha as my test runner and Chai as my assertion library.

They can simply be installed via NPM:

npm install --save-dev chai

To be able to use Mocha in the Integrated Terminal inside Visual Studio we have to install Mocha as a global package:

npm install -g mocha

To put these to the test, I created this simple maths service that calculates the factorial of an integer.

module.exports = class MathsService {
    factorial(num) {
        if (num < 0) 
            return -1;
        else if (num == 0) 
            return 1;
        else {
            return (num * this.factorial(num - 1));
        }
    }
};

And my unit test to test this method looks like this:

const MathsService = require('../maths-service.js');
const chai = require('chai');
const expect = chai.expect; 

describe('Factorial', function() {
    it('factorial should return correct value', function() {
        const mathsService = new MathsService();
    
        expect(mathsService.factorial(-1)).to.equal(-1);
        expect(mathsService.factorial(0)).to.equal(1);
        expect(mathsService.factorial(1)).to.equal(1);
        expect(mathsService.factorial(3)).to.equal(6);
        expect(mathsService.factorial(5)).to.equal(120);
    });
});

Chai supports 3 styles:

  • Should
  • Expect
  • Assert

They all serve the same purpose so it’s just a matter of taste at the end of the day. For this demo I used the expect style.
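
For comparison, here is the same assertion written in each style (a quick sketch using Chai's documented APIs):

const chai = require('chai');
chai.should();                 // enables the should style
const expect = chai.expect;
const assert = chai.assert;

const result = 6;

result.should.equal(6);        // should style
expect(result).to.equal(6);    // expect style
assert.equal(result, 6);       // assert style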

By default Mocha looks for tests under the test folder, so you can run mocha without parameters if you put your tests there, or you can specify the folder explicitly. For example, in the screenshot below I moved the test to the root folder, ran mocha . and that worked fine as well.

Conclusion

In this post I want to show the very basics of unit testing in JavaScript just enough to see a passing test. In future posts I’ll build on this and explore other frameworks and other aspects of TDD.

linux: macbook, ubuntu

This is my old MacBook (Mid 2009). It was sitting in the closet for so long that I thought I could invent some purpose for it and use it before it finally dies. It looks like the latest macOS versions don’t even support this device, so I thought maybe I could use it as a learning tool for Ubuntu.

Since SSDs are so cheap these days I just decided to keep the old macOS drive as backup and install Ubuntu on one of these babies:

I think around £18 is a small price to pay for a brand new SSD so I went with it.

Installing Ubuntu

  1. Download the ISO here

  2. Burn the ISO to a USB Drive. Instead of installing extra software I followed this guide: Making a Kali Bootable USB Drive. It’s for Kali Linux but this bit works for burning any Linux distro.
  3. For the rest follow the steps here starting with Step 9

And after the installation this is what my MacBook looks like:

hobby: productivity, apple watch, alexa, philips hue, iot

My #1 rule for productivity is “No Snoozing!”. If you snooze, it means you are late for everything you planned to do and that is a terrible way to start your day. This post is about a few tools, tips and techniques I use to prevent snoozing.

Tip #1: Sleep well

This is generally easier said than done but there’s no way around it. You MUST get enough sleep. Otherwise, sooner or later your willpower will get weaker and weaker. Eventually you’ll succumb to the sweet temptation of more sleep.

Tip #2: Place the alarm away from your bed

This way, when your alarm (most likely your phone) goes off, you have to make a deliberate effort to get up and turn it off. If you have to get out of bed you’re less likely to get straight back into it.

Tip #3: Use multiple alarms with different sounds

It gets easier to wake up if you can surprise yourself! Human beings are so good at adapting that we very easily get used to the same alarm sound going off at the exact same time every day and start ignoring it. I find it useful to change the alarm times and sounds every now and then.

Tip #4: Use Apple Watch

After Apple Watch Series 4 was released I got myself one.

I’m not sure if it’s worth the cost but when it comes to waking up a little vibration on your wrist can do miracles apparently!

When you pair it with your iPhone, by default you can stop the alarms from your watch. This may be a nice convenience in some cases, but when it comes to waking up we are trying to make it as hard as possible for ourselves to turn the alarms off.

My trick is:

First I disable “Push alerts from iPhone” in the Watch app.

Then I create a separate alarm on watch for the same time.

This way I get 2 alarms at the same time. It’s easy to stop the watch as it’s within my arm’s reach. While the haptic feedback of the watch wakes me up the alarm on the phone also goes off. Now I have to physically get out of the bed to stop that one as well.

Tip #5: Use Alexa

Another gizmo to set an alarm is Alexa but you can do much more than just that with Routines.

Tip #5.1: Play a playlist

This tip requires Spotify Premium subscription.

First, create yourself a nice, loud and heavy playlist of “waking up” music. I prefer energetic Heavy Metal songs from Lamb of God and Slayer. The trick here is to play a random song every morning. Similar to Tip #3, the same song every morning becomes very boring very quickly. But having a random one keeps you surprised every morning. I use this command to play my playlist in shuffle mode:

Shuffle Playlist '{Playlist name}'

Tip #5.2: Turn the lights on

A good sleep tip is to keep your bedroom as dark as possible. That’s why I have all black curtains in my room and it’s quite dark. The downside is it’s so good for sleep it makes waking up even harder!

That’s why I bought myself a Philips Hue smart bulb and as part of my waking up routine Alexa turns it on along with playing the Spotify playlist.

This is what my routine looks like:

Conclusion

For me snoozing is a cardinal sin so I’m always on the lookout for ways to improve my arsenal in the fight against snoozing. I hope you find something useful in this post too. If you have tips of your own feel free to leave a comment.

docker: devops, github, backup

A while back I created a PowerShell script to backup my GitHub account and blogged about it here. It’s working fine but it requires some manual installation and setup and I didn’t want to do that every time I needed to deploy it to a new machine. That’s why I decided to Dockerize my script so that everything required can come in an image pre-installed.

TL;DR:

  • There’s a Powershell script that allows you to back up your GitHub account including private repositories here: Source Code
  • There’s a Docker image that encapsulates the same script which can be found here: Docker Image
  • Below are my notes on Docker that I’ve been taking while working on Docker-related projects. Proceed if you are interested in some random tidbits about Docker.

Lessons Learned on Docker

  • Shortcut to leave container without stopping it: Ctrl P + Ctrl Q

  • Build a new image from the contents of the current folder

      docker image build -t {image_name} .
    
  • Every RUN command creates a new layer in the image. Image layers are read-only.

  • Connect to an already running container:

      docker attach {container id or name}
    
  • Save a running container as an image:

      docker commit -p 78727078a04b container1
    
  • Remove image

      docker rmi {image name}
    

    This requires that the image doesn’t have any containers created off of it. To delete multiple images based on name (awk picks out the image ID column):

      docker rmi $(docker images | grep 'imagename' | awk '{print $3}')
    
  • List running containers

      docker container ls
    
  • To list all containers including the stopped ones:

      docker ps -a
    
  • Delete all stopped containers

      docker container prune
    
  • To delete all unused containers, images, volumes, networks:

      docker system prune
    
  • Copy files to and from a container

      docker cp foo.txt mycontainer:/foo.txt
      docker cp mycontainer:/foo.txt foo.txt
    
  • To overwrite the entrypoint and get an interactive shell

      docker run -it --entrypoint "/bin/sh" {image name}
    
  • Tip to quickly operate on images/containers: just enter the first few characters of the image/container ID. For example, if docker ps -a returns something like this

      1184d20ee824        b2789ef1b26a                  "/bin/sh -c 'ssh-k..."   18 hours ago        Exited (1) 46 seconds ago                         happy_saha
      7823f76352e3        github-backup-04              "/bin/sh"                18 hours ago        Exited (255) 21 minutes ago                       objective_thompson
    

    you can start the first container by entering

      docker start 11
    

    This of course requires that there aren’t any other containers whose IDs start with 11, so there’s no need to enter the full ID as long as the beginning is unique.

  • To get detailed info about an object

      docker inspect {object id}
    

    This returns all the details. If you are interested in specific details you can tailor the output by using the --format option. For example the following only returns the LogPath for the container:

      docker inspect --format '{{.LogPath}}' {container id}
    
  • To get the logs of the container:

      docker logs --details --timestamps  {container id}
    
  • Docker for Mac actually runs inside a Linux VM. All Docker data is stored inside a file called Docker.qcow2. The paths that are returned are paths inside this VM. For instance if you inspect the LogPath of a container it would look something like

      /var/lib/docker/containers/{container-id}/{container-id}-json.log
    

    But if you check your host machine, there is no /var/lib/docker folder.

    In Docker preferences it shows where the disk image is located:

    This command let me get into the VM:

      screen  ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
    

    Then I was able to navigate to /var/lib/docker and peek inside the volumes where the data is persisted. This extra virtualization layer does not exist on Linux, where you can view everything on the host machine straight away.

  • Enter a running container (Docker 1.3+):

      docker exec -it {container-id} bash
    
  • Copy an image from one host to another: Export vs Save

    save writes an image to a file (no running container needed):

      docker save -o <save image to path> <image name>
      docker load -i <path to image tar file>
    

    export saves a running or paused container’s filesystem to a file:

      docker export {container-id} | gzip > {tar file path}
      docker import {tar file path}
    

linux: ec2, ebs

I recently decided to purchase a reserved t3.nano instance to run some Docker containers and for general testing purposes. In addition to the default volume I decided to add a new one to separate my files from the OS. It required a few steps to get everything in place so I decided to post this mostly for future reference!

Attach a volume during creation

First I added a new volume to the instance while creating it.

Connecting to instance

Now we have to connect to the instance to format the new volume. To do that we must have the private key we generated when we created the instance. So to SSH into the machine we run this command:

ssh -i {/Path/To/Key/file_name.pem} ec2-user@{public DNS name of the instance}

Format the volume

I found some AWS documentation to achieve this which was very useful: Making an Amazon EBS Volume Available for Use on Linux

No need to repeat every command in that documentation. It’s a simple step-by-step guide; just follow it and you’ll have a volume in use which is also mounted at startup.

Install and configure Docker

Installing Docker is as simple as running this:

sudo yum update -y
sudo yum install -y docker

To be able to use Docker without sudoing everything, add ec2-user to the docker group:

sudo usermod -aG docker ec2-user

We also need to make sure that the Docker daemon starts on reboot. To achieve this, run:

sudo systemctl enable docker

Copy files to the instance

To copy some files to the new instance I used SCP command:

sudo scp -i {/Path/To/Key/file_name.pem} -r {/Path/To/Local/Folder/} ec2-user@{public DNS name of the instance}:/Remote/Folder

The issue was that ec2-user didn’t initially have permission to write to the remote folder. In that case you can run the following command to grant access:

setfacl -m u:ec2-user:rwx /Remote/Folder

aws: cloudtrail, security, audit

Another important service under Management & Governance category is CloudTrail.

A nice thing about this service is that it’s already enabled by default with a limited capacity. You can view all AWS API calls made in your account within the last 90 days. This is completely free and enabled out of the box.

To get more out of CloudTrail we need to create trails

Inside Trails

A couple of options about trails:

  • It can apply to all regions
  • It can apply to all accounts in an organization
  • It can record all management events
  • It can record data events from S3 and Lambda functions
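
For reference, a trail with these options can also be created from the CLI, roughly like this (the trail and bucket names are placeholders, and the bucket is assumed to already exist with a suitable bucket policy):

aws cloudtrail create-trail --name org-trail --s3-bucket-name my-cloudtrail-bucket --is-multi-region-trail --is-organization-trail
aws cloudtrail start-logging --name org-trail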

Just be mindful of the possible extra charges when you log every event for all organization accounts.

Testing the logs

I created a trail that logs all events for all organization accounts, then created an IAM user in another account in the organization. In the event history of the local account the events look like this:

These events can now be tracked from the master account as well. In the S3 bucket the events are organized by account id and date, and they are stored in JSON format:

Conclusion

Having central storage of all events across all regions and accounts is a great tool to have. Having the raw data is a good start but making sense of that data is even more important. I’ll keep exploring CloudTrail and getting more out of it to harden my accounts.

aws: security, aws config, audit

In my previous blog post I talked about creating an IAM admin user and using that instead of the root user all the time. Applying such best practices is a good idea, which also begs the question: how can I enforce these rules?

AWS Config

The official description of the service is: “AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources”

What this means is that you select some pre-defined rules or implement your own custom rules, and AWS Config constantly runs checks against your resources and notifies you if you have non-compliant resources.

Since I’m currently interested in hardening IAM users, in the next example I’m going to use an IAM check.

Use case: Enforcing MFA

As of this writing there are 80 managed Config rules. To enforce MFA, I simply searched for MFA in the “Add rule” page and got 5 matches, of which I selected 3:

After I accepted the default settings it was able to identify my IAM user without MFA:

And it comes with a nice little dashboard that shows all your non-compliant resources:

It also supports notifications via SNS. It creates a topic and all you have to do is subscribe to it with an email address; after confirming your address you can start receiving emails.

I was only expecting to get emails about non-compliant resources but it’s a bit noisy, as it also sends emails with subjects like “Configuration History Delivery Completed” or “Configuration Snapshot Delivery Started” which didn’t mean much to me.

Pricing

I think the price is extremely high. The details can be found on their pricing page but in a nutshell a single rule costs $2/month. So for the above example I paid $6, which is a lot of money in terms of the resources used.

Conclusion

I like the idea of having an auditing system with notifications but for this price I don’t think I will use it.

I will keep exploring though, as I’m keen to implement my custom rules with AWS Config, and also to implement them without AWS Config and see if this service adds any benefit over scheduled Lambda functions.

aws: iam, security, best practices

When you create a new AWS account you are the root user, who has unlimited access to everything. Using such a powerful user on a day-to-day basis is not a good idea, because if it gets compromised you may not have a way to override and/or undo the changes done by the attacker.

Using IAM user instead of root account

Instead, the suggested best practice is to create an admin-level IAM user and use it for normal operations. At first I was hesitant to adopt this practice. I didn’t see the point and thought attaching the AdministratorAccess policy would make the user as powerful as root. But there’s a whole list of things that even the most powerful IAM user cannot do. Here’s the list: AWS Tasks That Require AWS Account Root User Credentials

So as you can see, the root user has important privileges such as closing the account and changing billing information. Even if your account gets compromised and some malicious person gains access using an IAM account, you can still log in as root and take the necessary action.

In a nutshell, based on AWS documentation the following practices are recommended:

  • Use the root user only to create your first IAM user
  • Create an IAM user for yourself as well, give that user administrative permissions, and use that IAM user for all your work.

In addition, Eric Hammond suggests in his blog deleting the root account password as well and using the Forgot Password option to create a new one when needed. I keep my passwords in a password manager, so if that application is compromised the attacker can reset my password as well; I don’t follow this practice, but it might come in handy if you have to write your password down somewhere.

Templated IAM user creation

It’s a good practice to create an IAM user right after you create your AWS account. It’s an even better practice to automate this process. To achieve this I created a CloudFormation template. The YAML template below does the following:

  • Creates an IAM group named administrators
  • Creates a user named admin
  • Attaches AdministratorAccess policy to the group
  • Forces the user to change their password first time they log in (by attaching IAMUserChangePassword policy to the user)

Resources:
  AdministratorsGroup:
    Type: AWS::IAM::Group
    Properties:
      GroupName: "administrators"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AdministratorAccess
      Path: /

  AdminUser:
    Type: AWS::IAM::User
    Properties: 
      Groups:
        - !Ref AdministratorsGroup
      LoginProfile:
        Password: "CHANGE_THIS"
        PasswordResetRequired: true
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/IAMUserChangePassword
      Path: /
      UserName: "admin"

aws: cloudformation

If you have a habit of creating your AWS resources manually, things can get very messy very quickly. At some point you realize that you have no idea whether a resource is still in use, and just to be safe you leave it alone.

I found myself in this situation and decided to take advantage of the Infrastructure as Code paradigm using AWS CloudFormation. To start simple, I decided to migrate my automated CV response application to a CloudFormation stack. Going forward this is a much more efficient way to write blog posts too: instead of writing step-by-step instructions I can simply post the CloudFormation stack in JSON or YAML.

Infrastructure as Code in a nutshell

Basically this approach allows you to define, manage and provision all the resources that define your system.

Advantages:

  • Changes can be source-controlled
  • The entire provisioning process can be automated
  • The entire infrastructure can easily be recreated in a different account
  • Resources can easily be identified. Tags can be used to identify which stack the resources belong to.
  • Resources can easily be cleaned up. Deleting a stack will delete all the resources it created.

Basic Terminology

  • Stack: All the resources used to create an infrastructure.
  • StackSet: A StackSet is a container for AWS CloudFormation stacks that lets you provision stacks across AWS accounts and regions by using a single AWS CloudFormation template.
  • Design template: This is the file, in YAML or JSON format, that defines all the resources that will be created in AWS

Where to start…

Even when you are familiar with the concepts, finding where to start can be intimidating sometimes. When it comes to CloudFormation, there are a lot of sample templates that you can start with and build upon.

So here’s an easy way to get started:

Step 1: Save the following snippet to a local file such as cloudformation.sample.yaml:

Resources:
  Ec2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      ImageId: ami-0aff88ed576e63e90

In the example above I’m using a stock AWS Linux AMI in London region.

Step 2: Go to Stacks and click Create Stack (make sure you’re in the EU London region, otherwise AWS won’t be able to find the AMI specified in our template)

Step 3: In specify template section, select Upload a template file

Step 4: Click Choose file to locate your file and upload your template.

Step 5: Click Next, specify a stack name, something like FirstStackForEC2, and click Next

Step 6: Click Next on Configure Stack Options view and in the final review step of the wizard click Create Stack.

At this point you can observe the progress of your stack being created.

Now if you go to EC2 service in the same region you should be able to see the new instance:

If you delete the stack, it will in turn delete everything it created which is the EC2 instance in this example.

Launch Stack URLs

I always like the launch stack buttons that I see every now and then. I think there’s something magical about clicking a button and watching an entire infrastructure being created right before your eyes!

Basically clicking the Launch Stack URL opens the wizard we just used with the fields populated with the values in the template.

Step 1: Upload the template to S3 and make it publicly accessible.

Step 2: Use the following naming convention and replace the values:

https://console.aws.amazon.com/cloudformation/home?region=region#/stacks/new?stackName=stack_name&templateURL=template_location

In this example I uploaded the Launch Stack button image to my GitHub repository so that I can link to it.

aws: aurora, rds, serverless

A few months ago AWS announced a serverless model for their Aurora databases. Compared to the traditional DB approach this is brand new.

I’ve been trying it out for a pilot application and it works well in general. You pay for what you use just like any other serverless resource.

The only problem I’ve been having is the DB startup time after a pause. After 5 minutes of inactivity the resources are released, and the first request that comes in after that suffers a performance penalty. My application was getting an error when this happened and showing an error screen. Obviously from a user standpoint that’s not a great experience.

So to remedy this issue I’ve updated the DB connection timeout:

Connection Timeout=120

By default it’s 15 seconds which is not enough for the new server to respond. But after increasing the timeout at least I could prevent the application from failing. Of course this doesn’t speed up the response time of the DB server.
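
In context, the connection string ends up looking something like this (a sketch with made-up values, assuming the MySQL-compatible flavour of Aurora; the exact keyword can vary by driver):

Server=mydb.cluster-xxxxxxxx.eu-west-2.rds.amazonaws.com;Database=mydb;Uid=admin;Pwd=secret;Connection Timeout=120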

They recently announced additional regions that support serverless Aurora.

For cost-cutting reasons this can be a great option. Especially if your system is idle for extended periods of time you don’t need to pay anything. Also it scales up so you don’t have to worry about the database bottlenecks under heavy traffic.

aws: cloudwatch, custom metric, devops

I had an issue recently with an EC2 instance running out of disk space. Unfortunately free disk space is not a metric that comes out of the box with AWS CloudWatch. This post is about implementing a custom metric and getting notifications via AWS CloudWatch based on that metric.

Steps to monitor disk space with CloudWatch

Step 1: Download sample config file

AWS provides a sample JSON file at this location: https://s3.amazonaws.com/ec2-downloads-windows/CloudWatchConfig/AWS.EC2.Windows.CloudWatch.json

Download a copy of this file.

Step 2: Set IsEnabled to true

By default it comes disabled so set the value as shown below:

"IsEnabled": true

Step 3: Add the custom metric for disk usage

Add the custom metric to monitor disk space:

{
    "Id": "PerformanceCounterDisk",
    "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
    "Parameters": {
        "CategoryName": "LogicalDisk",
        "CounterName": "% Free Space",
        "InstanceName": "C:",
        "MetricName": "FreeDiskPercentage",
        "Unit": "Percent",
        "DimensionName": "InstanceId",
        "DimensionValue": "{instance_id}"
    }
}

Step 4: Add the new metric to flows

After defining the metric we need to add it to the flows so that it can be sent to CloudWatch. To achieve this update the flows section as shown below:

"Flows": {
    "Flows": 
    [
        "(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
        "(PerformanceCounter,PerformanceCounterDisk),CloudWatch"
    ]
}

Step 5: Add IAM role to server

It’s a good practice to manage the permissions of EC2 instances via IAM roles assigned to the machine. To enable sending logs to CloudWatch, add the AmazonEC2RoleForSSM policy to the machine’s role.

Without this role SSM agent service gets an access denied error.

Step 6: Restart Amazon SSM Agent service

Either by using Windows Services Manager or running the following command:

Restart-Service AmazonSSMAgent

Once this is all done, wait a few minutes and check the CloudWatch metrics. Under All -> Windows/Default you should be able to see the new metric under the InstanceId group (as that’s what we are using to group the logs). When you click the metric you should see a nice time-based graph of free disk space on the server:

Notes

  • It’s useful to know where SSM Agent’s logs are stored. They can be found in this path:

    %PROGRAMDATA%\Amazon\SSM\Logs\

  • The service reports every 5 minutes. The PollInterval in the JSON file is in seconds and is different than service report interval.

aws, security: organizations, iam

I have never been a huge fan of AWS Management Console. Some reasons for that being:

  • Inconsistencies: In some services you can search by anything (such as a tag value in the EC2 dashboard), whereas in others you have to type the exact start of the object name (such as CloudWatch)
  • Regional separation: Some might like it but I find it confusing and error-prone. If you need to work in multiple regions you have to constantly change the region from the dropdown menu, and if you accidentally create a resource in another region you might not notice it’s still running until you happen to switch back to that region. S3 seems to be an exception, as you can select the region while creating the bucket and see all buckets in the same list (speaking of inconsistencies…)
  • Flat resource structure: All resources are mixed together in an account. If you have multiple projects or teams in your company, you see all the resources they created among yours. There is also no environment concept: your test and production resources live side by side.

This post is about AWS Organizations which addresses the 3rd point in the list above.

What is AWS Organizations?

It is a way to centrally manage multiple accounts inside an organization by creating a hierarchy between accounts.

Benefits of having an account structure

  • No need to label everything with project/team/environment name
  • Production and non-production resources don’t live side by side
  • Better access controls: No need to grant access at the resource level; it can be done much more easily at the account level

Also from a cost point of view it has the following benefits (taken from AWS Account Structure Considerations)

  • Grouping resources that require different payment instruments
  • Providing groups with different levels of administrative control over AWS resources
  • Better controlling Reserved Instances for specific workloads
  • Identifying untaggable costs such as data transfer
  • Using accounts associated with different business units or functional teams

Key Concepts

Account: Your regular AWS account. The first account you create is called a Master Account, the rest are Member Accounts.

Organization: A group of related accounts. The account creating the organization becomes the master account.

The star next to the account indicates it is the master account.

Organizational Unit: You can use organizational units (OUs) to group accounts together to administer as a single unit. This can be any logical grouping such as team, project, environment etc.

Service Control Policies (SCPs): Enable you to restrict, at the account level of granularity, which services and actions the users, groups, and roles in those accounts can use (see the sketch below).
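
As an illustration, an SCP is just a policy document. A sketch that prevents member accounts from leaving the organization on their own might look like this:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLeavingOrganization",
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*"
        }
    ]
}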

Managing Projects and Environments

At first I was tempted to separate projects as well, but I’d end up with too many accounts, so I abandoned that idea and adopted an environment-based organizational structure. I ended up with these AWS accounts in my organization:

  • Dev
  • Integration
  • UAT (User Acceptance Test)
  • Sandbox
  • Production

Then I created an organizational unit named Stages and moved all these accounts under that OU. This is just one way of structuring things; it can be customized based on organizational needs. In my case I decided to keep all shared services (logging, auditing, source code) in the master account.

Logging into accounts

This baffled me at first. Initially I created a test account which I wanted to delete later on, but I wasn’t able to do that until I completed the sign-up steps, which in turn I couldn’t do because I didn’t have the credentials to log in!

As stated in this document:

When you create a new account, AWS Organizations initially assigns password 
to the root user that is a minimum of 64 characters long. All characters 
are randomly generated with no guarantees on the appearance of certain 
character sets. You can't retrieve this initial password. To access the
account as the root user for the first time, you must go through the
process for password recovery.

So when you follow the sign-in link it redirects you to the IAM login page. I needed to switch to root account login and recover my password using the Forgot My Password link. On that note: don’t use fake email addresses, as you will need the confirmation email to recover your password.

Removing account issues

This one was tricky. In order to leave an organization you first need to enter a payment method and select a support plan. This way the account becomes eligible to be a standalone account. Only after that can you Leave organization. But not right away!

After I entered all the data and completed the setup steps, I clicked Leave organization and I got this error:

I waited almost a full day after getting this error but to no avail. I kept getting the same error: “This operation requires a wait period. Try again later.”

I had a chat with a support engineer and created a case for this. Nothing helped at first, but after a few days I tried again and it worked! So either the waiting period was very long or they fixed something in my account unbeknownst to me.

Deleting the account without removing

Another issue I had was deleting an account before removing it from the organization. I was assuming that if the account was closed permanently it would be removed from the organization as well. This was not the case: it remains listed as Suspended.

Unfortunately, once this happens there is no way of resolving it using the tools at our disposal. The only solution is to contact AWS support, reactivate the suspended account, leave the organization and close it again!

But you have to do it from the suspended account, not from the master account; since technically you’re requesting support for another account, they won’t do it (as they told me in their response). The good news is that we still have access to support even though the account is suspended. So I went to the support page and created a support request to reinstate my account (so that I could close it again shortly after!)

Another option might be just to wait. I haven’t tried it myself, but the account closure email states “After 90 days, you will not be able to reopen your account, and any remaining content in your closed account will be deleted.” So I’m guessing it would be gone completely if I could just wait for 90 days.

Managing Accounts Programmatically

The coolest thing about AWS Organizations is that accounts can be created via the command line. It’s as easy as this:

JOTUNHEIM:~ volkan$ aws organizations create-account --account-name {NAME}  --email {EMAIL}
{
    "CreateAccountStatus": {
        "Id": "xyz-abcabcabcabcabcabcabcabcabcabcab",
        "AccountName": "{NAME}",
        "State": "IN_PROGRESS",
        "RequestedTimestamp": 1532494396.633
    }
}

Notes

  • Free tier is shared among all accounts in the organization: “If your company creates your AWS account through AWS Organizations, free tier eligibility for all member accounts begins on the day the organization is created.”

Conclusion

This post is meant to be an introduction to AWS Organizations rather than a complete guide. I will post similar ones as I use this as basis of my infrastructure and build on this.

linux, sysops: raid

AWS and cloud computing are awesome but I still enjoy having a server at home, so I’ve decided to reinstate my old desktop. I replaced the disks with shiny new ones (a 500GB SSD and two 4TB drives for data) and installed Ubuntu 18.04.

My primary goals were:

  • Partition SSD drive and use it for data that needs performance
  • Set up RAID1 on the two 4TB disks so that a single disk failure wouldn’t result in data loss
  • Set up some sort of notifications to monitor SSD disk health
  • Set up some sort of notifications to monitor RAID disk health

Having all these in place was important for me to have a solid, reliable system before I started building stuff on top of it.

Partitioning

By default Ubuntu only adds a 512MB /boot/efi partition and leaves the rest for root (/). But since I recently had some free space issues on the boot drive, I decided to create a separate boot partition as well. I also added a 32GB swap partition (swap is virtual memory; I sized it the same as the machine's RAM).

I also ended up allocating 32GB for /home and left the rest for root. Going forward I now have more than 300GB to use for application data, Docker images etc.

RAID1

Now it’s time to set up a RAID array to have some redundancy. They are shiny new disks but you can never fully rely on them; they will eventually fail. It’s probably not the best idea to install two identical disks at the same time, as their lifecycles will likely end at similar times, so I’ll need to keep an eye on them and set up some monitoring and notifications (more on that later).

As my guide I started using this article called Setting up RAID 1 (Mirroring) using ‘Two Disks’ in Linux – Part 3 to set up my RAID.

This is a very nice article and explains everything step by step, so I’m not going to duplicate it here. But I bumped into an issue while following the guide: 4TB drives. My disks were partitioned as MBR, which only supports up to 2TB, and I had to convert them into GPT disks.

The answer was parted. I followed the steps in this answer and managed to partition the drives to 4TB (3.7 to be exact!)
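
The gist of it was roughly the following (assuming the data disks are /dev/sda and /dev/sdb, matching the mdadm.conf shown further down; double-check the device names before running anything like this):

sudo parted /dev/sda mklabel gpt
sudo parted -a optimal /dev/sda mkpart primary 0% 100%
sudo parted /dev/sdb mklabel gpt
sudo parted -a optimal /dev/sdb mkpart primary 0% 100%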

Testing RAID1

Now that I had a RAID array it was time to test it. The first thing I wanted to see was whether it mounted automatically after a reboot. It did mount the drives, but there was a problem with the RAID: at boot the device was showing up as md127 instead of md0 and it was failing to sync.

My mdadm.conf file looked like this when this was happening:

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=asgard:0 UUID={device id}
   devices=/dev/sda1,/dev/sdb1

After some Googling I found the answer here.

The solution was simply removing the name parameter from the file! After that, when I rebooted and entered the following commands I was able to see a healthy RAID:

cat /proc/mdstat

and

mdadm -D /dev/md0

Monitoring System Disk with smartmontools

Now that everything looked in place, I needed to ensure it stays that way!

To achieve that goal, I edited the /etc/smartd.conf file and added this line:

DEVICESCAN -o on -H -l error -l selftest -t -m my.email@address.com -M test

Set up sending emails from server

When I first installed smartmontools, it asked for the Postfix configuration, which looks like this:

Since I was interested in getting results fast, I selected No Configuration and moved on. Now it’s time to configure it.

To bring up this screen again I entered the following command:

dpkg-reconfigure postfix

I followed the wizard and entered my email domain, then followed this documentation from AWS: Integrating Amazon SES with Postfix

As always, I ended up having some issues :-).

The first problem was that it didn’t work! It wasn’t finding the SES SMTP server as the relay server and was always trying to send emails from localhost. The solution was here.

As the instructions say, I updated /etc/postfix/main.cf with the values below:

myhostname = localhost
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain

and I was able to send emails to the SMTP server. But SES didn’t like my IP address and the email bounced. The solution to that was to create an IP filter in SES and allow the traffic from that address.

Then I restarted the service to test

service smartmontools restart

and received the notification. Actually received 3 emails for some reason.

The service runs at startup so this way I can be notified whenever it reboots too.

Monitoring RAID with mdadm

It took some time to finish synchronizing the 4TB disks, but finally I was ready to rock:

Apart from smartmontools, mdadm application is also capable of sending emails when a disk fails.

The documentation says to add MAILADDR followed by an email address to specify the target email address, but in my tests adding the line didn’t change anything.

In fact, it turns out it sends the notifications by default. As my server was now set up to send emails, just by entering the following command to send a test email I was able to receive it:

mdadm --monitor --scan --test -1

The problem is it’s now using root@mydomain.com all the time, and I wasn’t able to change it. But as long as that mailbox exists, at least I can receive the notifications.

Conclusion

All hardware eventually fails. Disk failure is especially annoying because it may cause some precious data loss. Apart from good backup practices, it’s also helpful to have a good monitoring system and redundancy on the disks we use.

It took me some time to set up my system but now at least I have 1 disk redundancy for large amounts of data and the ability to be notified whenever something goes wrong with the disks which gives me some peace of mind (not too much though :-)).

Resources

hobby fitness, concept2, rowing comments

I’ve been using a Concept2 Model D rowing machine for some time now and quite enjoying it as a form of workout. (Primarily because I can still watch Netflix or YouTube while rowing!)

Concept2 Model C Rowing Machine

Since I have some data accumulated in it, I decided to look into ways of getting it out and working on it, hoping it would give me some insights into possible ways of improving my stats.

Official Tools

To be honest the existing toolset that comes out of the box is quite sufficient.

LogBook

This is the official web application where you can monitor your workouts.

Concept LogBook

This application is quite good, really. You can manually enter your workouts and view your existing history. You can also create teams and participate in challenges, so there’s a social aspect to it.

iOS App: ErgData

The monitor connected to the rowing machine (Performance Monitor - PM5) supports Bluetooth and can easily be paired with an iPhone. If you install the ErgData app on your phone you can sync the device with your phone and get your workouts out that way very easily. Better yet, it allows you to upload your workouts to LogBook: after you complete a workout, you can easily upload the results by clicking Sync.

Concept2 ErgData app

Unofficial Tools

RasPiRowing

I found this nice Raspberry Pi based project called RasPiRowing, developed by one of the staff members of Concept2.

Since I’m a fan of Raspberry Pi and have a whole bunch of them lying around, it didn’t take me long to install it and use it. It works just fine and comes with a fun fish game too:

FishPi Game

It’s a nice way of interacting with the Concept2. Since the erg can be accessed from a Python application, I can also build my own applications to get data out of it.

Developer Tools

SDK

There is an SDK available to download for both Mac and Windows.

I installed the Mac version which extracts the files under /Users/{username}/C2 PM SDK/

But I couldn’t find much useful stuff in there:

SDK contents

I tried to build the Xcode project but it gave a build error and I just left it at that.

API

They also provide an API which can be used to get the data out. This sounds like the most interesting part to me, as I can develop my own custom tools based on this API.

In the documentation, they advise using the dev site first while trying out the API and then requesting access to the live data. You also need to register your application with Concept2 to be able to use their API.

Resources

devaws s3, csharp comments

When it comes to transferring files over a network, there’s always a risk of ending up with corrupted files. To prevent this on transfers to and from S3, AWS provides us with some tools we can leverage to guarantee the correctness of the files.

Verifying files while uploading

In order to verify the file has been uploaded successfully, we need to provide AWS with the MD5 hash of our file. Once the upload has completed, AWS calculates the MD5 hash on their end and compares both values. If they match, it means the upload went through successfully. So our request looks like this:

var request = new PutObjectRequest
{
    MD5Digest = md5,
    BucketName = bucketName,
    Key =  key,
    FilePath = inputPath,
};

where we calculate MD5 hash value like this:

using (var stream = new FileStream(fullPath, FileMode.Open, FileAccess.Read, FileShare.Read))
{
    using (var md5 = MD5.Create())
    {
        var hash = md5.ComputeHash(stream);
        return Convert.ToBase64String(hash);
    }
}

In my tests, it looks like if you don’t provide a valid MD5 hash, you get a WinHttpException with the inner exception message “The connection with the server was terminated abnormally”.

If you provide a valid but incorrect MD5, the exception thrown is of type AmazonS3Exception with the message “The Content-MD5 you specified did not match what we received”.

The Amazon SDK comes with two utility methods named GenerateChecksumForContent and GenerateChecksumForStream. At the time of this writing, GenerateChecksumForStream wasn’t available in the AWS SDK for .NET Core, so the only way that worked for me to calculate the hash was the approach shown above.
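
To complete the upload example, the request above is then sent with PutObjectAsync. A minimal sketch, assuming an IAmazonS3 instance named _s3Client as in the download example below:

// Send the upload; S3 recalculates the MD5 on its side and rejects the request on a mismatch
var response = await _s3Client.PutObjectAsync(request);

// For single-part uploads the returned ETag is the hex MD5 of the stored object
Console.WriteLine(response.ETag);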

Verifying files while downloading

When downloading, we use the EtagToMatch property of GetObjectRequest to perform the verification:

var request = new GetObjectRequest
{
    BucketName = bucketName,
    Key =  key,
    EtagToMatch = "\"278D8FD9F7516B4CA5D7D291DB04FB20\"".ToLower() // Case-sensitive
};

using (var response = await _s3Client.GetObjectAsync(request))
{
    await response.WriteResponseStreamToFileAsync(outputPath, false, CancellationToken.None);
}

When we request the object this way, if the MD5 hash we send doesn’t match the one on the server, we get an exception with the following message: “At least one of the pre-conditions you specified did not hold”.

One important point to keep in mind is that AWS keeps the hashes in lower-case and the comparison is case-sensitive, so make sure to convert everything to lower-case before you send it out.

Resources

devaws certification, certified cloud practitioner comments

As I decided to get fully AWS certified, I started preparing for the exams. I wanted to start with the Cloud Practitioner exam just to get myself accustomed to the exam procedure in general. Here are my notes:

Exam Objectives

According to Amazon’s official exam description page, this exam validates the following aspects:

  • Define what the AWS Cloud is and the basic global infrastructure
  • Describe basic AWS Cloud architectural principles
  • Describe the AWS Cloud value proposition
  • Describe key services on the AWS platform and their common use cases (for example, compute and analytics)
  • Describe basic security and compliance aspects of the AWS platform and the shared security model
  • Define the billing, account management, and pricing models
  • Identify sources of documentation or technical assistance (for example, whitepapers or support tickets)
  • Describe basic/core characteristics of deploying and operating in the AWS Cloud

Main Subject Areas

  1. Billing and pricing (12%)
  2. Cloud concepts (28%)
  3. Technology (36%)
  4. Security (24%)

Preparation Notes

aws.training Online Training Notes

Cloud Computing

  • On-demand delivery of IT resources. Can scale up and down based on needs.
  • Fosters agility (number one reason why customers switch to cloud computing): Speed (global reach), experimentation (operations as code, templated environments with CloudFormation) and culture of innovation (experiment quickly with low cost)
  • Region vs Availability Zone (AZ): Region is a physical location in the world which contains multiple AZs. AZs contain one or more discrete data centers with independent resources and housed in different facilities.
  • Using Auto Scaling and ELB, scale up and down and only pay for what you use.
  • Ability to deploy systems in multiple regions (lower latency)
  • Ability to choose the region where data is stored
  • AWS is responsible for data center security
  • Security policy can be formalized (as code)
  • Ability to recover from failures

Core Services

  • Global Infrastructure:
    • Regions: Have multiple AZs
    • Availability Zones: Have one or more data centres. They all have different power supplier companies.
    • Edge Locations: Used by CloudFront.
  • Amazon Virtual Private Cloud (VPC)
    • Uses same concepts as on-premise networking
    • VPC can span across multiple AZs
    • Supports multiple subnets (each of which can be deployed in a different AZ)
    • Can create public-facing subnets and private-facing subnets within the same VPC
    • Each account can create multiple VPCs
    • Using fewer VPCs is recommended to avoid complexity
    • Can assign Internet Gateways to specific subnets to allow public access

  • Security Groups
    • Act like a built-in firewall
    • Best practice: Allow what’s required only and block everything else
  • Compute Services
    • Amazon Lightsail: Managed Virtual Private Servers service
      • Fixed price.
      • Includes a static IP, DNS management and storage
      • Fixed configuration
      • Uses t2 class EC2 instances under the hood
    • AWS Elastic Compute Cloud (EC2)
      • Difference between EC2-Classic and EC2-VPC
        • EC2-Classic: Your instances run in a single, flat network that you share with other customers.
        • EC2-VPC: Your instances run in a virtual private cloud (VPC) that’s logically isolated to your AWS account.
    • AWS Lambda
      • No servers to manage
      • Pay as you go: Only pay for the time your code runs
      • Continuous scaling
      • Supports subsecond metering. Charged for every 100 milliseconds of execution time
      • Some limitations apply: AWS Lambda Limits
    • AWS Elastic Beanstalk
      • Platform as a service
      • Allows quick deployments of applications
      • Allows HTTPS on load balancers
      • Supports various platforms (node.js, python etc)
      • Provisions the resources required (EC2, ELB etc) automatically
    • Application Load Balancer
      • 2nd type of load balancer offered by ELB
      • Comes with new features
      • Supports routing to containers
      • Key terms:
        • Listeners: A process that checks for connection requests using the configuration (protocol, port)
        • Target: Destination for traffic
        • Target Group: Each target group routes requests to one or more registered targets
      • Target checks can be performed per target group basis
      • Integrates with ECS and supports dynamic ports utilized by scheduled containers
      • Need to select at least 2 AZs when creating an Application Load Balancer
      • Ability to route to different target groups based on port or path
    • Elastic Load Balancer
      • Supports sticky sessions
      • Supports multiple AZs and cross-zone balancing
      • For HTTP/HTTPS it uses the “Least Outstanding” method to route the request. For TCP, it uses “Round robin”. The least outstanding routing algorithm is defined as “A ‘least outstanding requests routing algorithm’ is an algorithm that chooses which instance receives the next request by selecting the instance that, at that moment, has the lowest number of outstanding (pending, unfinished) requests.”
    • Auto Scaling
      • Adding more instances: Scaling out, terminating instances: Scaling in
      • Launch configuration answers “What” (AMI, Instance type, Security Groups, Roles). Creating an LC is similar to creating a new EC2 instance.
      • Auto Scaling Group answers “Where” (VPC and subnet(s), load balancer, minimum and maximum instances, desired capacity)
      • Auto Scaling Policy answers “When” (Scheduled/on-demand/scale out or in policy)
  • Amazon EBS
    • Allows point-in-time snapshots and creation of a new volume from a snapshot
    • Supports encrypted volumes free of charge
    • EBS volume must be created in the same AZ as the EC2 instance that will use it
  • Amazon S3
    • Objects are stored redundantly across multiple facilities within the same region
    • The bucket names must be globally unique.
    • Can configure cross-region replication for backup and disaster recovery
    • Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket
  • Amazon Glacier
    • Vaults have access and lock policies attached to them
    • Each AWS account can create up to 1000 vaults
    • Can create an S3 lifecycle policy to move to Glacier then delete after a period of time
      • Supports up to 40TB max item size (S3 supports 5TB)
      • It costs more per retrieval
      • Vault Lock allows you to easily deploy and enforce compliance controls for individual Amazon Glacier vaults with a vault lock policy. You can specify controls such as “write once read many” (WORM) in a vault lock policy and lock the policy from future edits. Once locked, the policy can no longer be changed
  • Amazon RDS
    • Can create a standby copy in a different AZ within the same VPC
    • Can create multiple read replicas (in different regions as well)
  • Amazon DynamoDB
    • Always uses SSD for storage
    • Supports auto-scaling. Increases/decreases the throughput based on load
    • Tables are partitioned by primary key
    • Two query methods: Query and Scan
    • Query uses the primary key to find items. Scan can use any attribute.
    • Scan is slower than Query as it needs to look at all items
  • Amazon Redshift
    • Managed data warehouse
    • Supports standard SQL
    • Supports ODBC/JDBC connectors
  • Amazon Aurora
    • Managed MySQL-clone (compatible with MySQL)
    • After a crash it doesn’t need to replay the redo log files before becoming available; it performs that work on every read operation, which reduces the restart time
  • AWS Trusted Advisor
    • Checks all the resources used and gives advice based on best practices
    • 5 categories:
      • Cost optimisation
      • Performance
      • Security
      • Fault tolerance
      • Service limits
    • Upgrading the support plan enables all Trusted Advisor recommendations; the free plan doesn’t include all of them
    • Has an API and can be used to automate optimisations
    • Can use it with CloudWatch alarms

Security

  • The AWS Shared Responsibility Model
    • AWS handles infrastructure security
    • AWS provides 3rd party audit reports
    • AWS’s responsibilities include: OS and database patching, firewall configuration and disaster recovery
    • Customer is responsible for putting logical access controls in place and protect account credentials
    • Customers are responsible to secure everything they put in the cloud
  • AWS Service Catalog
    • Allows to centrally manage common IT services that are approved for use on AWS
  • AWS IAM
    • Controls access to AWS resources
    • Handles Authentication (who can access resources) and authorization (how they can use resources)
    • Users can have programmatic access and/or console access.
    • Best practices
      • Delete root account keys. Instead use IAM accounts
      • Use MFA
      • Use groups
      • Use roles
      • Rotate credentials
      • Remove unnecessary users
  • AWS Security Compliance Programs
    • Risk Management: Follows these standards:
      • COBIT
      • AICPA
      • NIST
    • Constantly scans service endpoints for vulnerabilities
    • Compliance programs are listed here
  • AWS Security Resources

Architecting

  • Well-architected framework: https://aws.amazon.com/architecture/well-architected/
  • Five pillars of the framework
    • Operational excellence
    • Security
    • Reliability
    • Performance efficiency
    • Cost optimization
  • Fault Tolerance
    • Remain operational even if components fail
    • Built-in redundancy of an application’s components
  • High-Availability
    • A concept for the whole system
    • “Always” functioning and accessible
    • Without human intervention
    • HA Service Tools
      • Elastic Load Balancer
      • Elastic IP Addresses
      • Amazon Route 53
      • Auto Scaling
      • Amazon CloudWatch

Pricing and Support

  • Core concepts in billing
    • Pay as you go: No up front expenses
    • Pay less when you reserve: Reserved instances cost less
    • Pay even less per unit by using more: Tiered pricing for services such as S3, EC2 etc. Data transfer in is always free of charge.
    • Pay even less as AWS grows
  • Amazon RDS Costs
    • Clock hours of server time
    • Database characteristics
    • Database purchase type
    • Number of DB instances
    • Provisioned storage
      • No charge for backup storage of up to 100% of database storage for active databases. After the database is terminated, backup storage is charged
    • Additional storage
    • Requests
    • Deployment type
    • Data transfer

General Notes

Exam Centre

The exam centre was very small and there was some sort of music studio next door, so there was constant noise. Overall it was a bit disappointing to take the exam in a desolate business centre and in a small room, but it’s the same exam regardless, so I was able to focus on the questions after I got used to the noise.

Exam Process

  • My exam was scheduled for 3:00. I arrived early and the proctor allowed me to sit at 2:00 as there were empty places in the exam room. It was a nice surprise because I definitely didn’t want to wait for another hour in that heat.
  • At one point, the screen froze. I had to call the proctor. He restarted the application. Fortunately it just resumed where it left off.
  • CCP is the easiest AWS exam, but even so there were some challenging questions. Mostly the non-technical questions were hard for me (like questions related to support plans). I don’t think I’ll ever see those questions in other exams.

Exam Result

… and the result is: Pass

Amazon has an interesting scoring system, apparently. Right after you submit the exam, the screen displays Pass or Fail but not the actual score; you receive that in a separate email. They don’t even announce what the passing score is, as they reserve the right to change it when they see fit. It’s also based on other candidates’ results, so it’s almost like a curve. Anyway, it was quite a relief to see the pass result on the screen. I’m still curiously waiting for the actual score though.

My next exam will be the AWS Certified Solutions Architect - Associate. I’ll post my exam notes after that exam as well.

Resources