Recently I needed to migrate an EC2 instance to a different AWS account. There’s no built-in functionality to handle this; the solution is to create an AMI from the instance and share it with the target account. I scripted the process with PowerShell and AWS Tools for PowerShell, as shown below.

Prerequisites

Since the operation involves a source account and a target account, first create two profiles with EC2 access. The credentials file should look like this:

[source_profile]
aws_access_key_id = xxxxxxxxx
aws_secret_access_key = xxxxxxxxx

[target_profile]
aws_access_key_id = xxxxxxxxx
aws_secret_access_key = xxxxxxxxx
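
If you prefer, the same profiles can be registered from PowerShell instead of editing the credentials file by hand. A small sketch using AWS Tools for PowerShell (the key values are placeholders):

Set-AWSCredential -AccessKey xxxxxxxxx -SecretKey xxxxxxxxx -StoreAs source_profile
Set-AWSCredential -AccessKey xxxxxxxxx -SecretKey xxxxxxxxx -StoreAs target_profile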

Step 00: Configuration

I put all the variables in a PowerShell script file, which I dot-source in the following steps. To use the scripts you need to provide the values first. Hopefully the variable names are self-explanatory:

$sourceAccountAwsProfileName = ""
$sourceRegion = ""
$sourceAccountId = ""
$instanceId = ""
$amiName = ""
$amiDescription = ""
$targetAccountAwsProfileName = ""
$targetAccountId = ""
$targetRegion = ""

Step 01: Create AMI

The first step of the migration is to create an AMI from the instance.

. .\"00. configuration.ps1"

# Create AMI
$imageId = New-EC2Image -InstanceId $instanceId -Name $amiName -Description $amiDescription -ProfileName $sourceAccountAwsProfileName -Region $sourceRegion

Set-Variable -Scope global -Name AMI_ID -Value $imageId
Write-Host "AMI_ID: [" $AMI_ID "]"

This operation takes a few minutes. The image has to become available before we can proceed to the next step.
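
If you want to script the wait as well, a minimal polling sketch looks like this (the 30-second interval is my own choice):

# Wait until the AMI becomes available
do {
    Start-Sleep -Seconds 30
    $image = Get-EC2Image -ImageId $imageId -ProfileName $sourceAccountAwsProfileName -Region $sourceRegion
    Write-Host "Image state: [" $image.State "]"
} while ($image.State.Value -ne 'available')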

Step 02: Share AMI

Now that the image is ready, we have to share it with the target account. The script below shares the AMI and also allows the target account to create volumes from the AMI’s snapshots.

. .\"00. configuration.ps1"

$imageId = Get-Variable AMI_ID -valueOnly
Edit-EC2ImageAttribute -ImageId $imageId -Attribute launchPermission -OperationType add -UserId $targetAccountId -ProfileName $sourceAccountAwsProfileName -Region $sourceRegion

$imageSnapshots = Get-EC2Snapshot -OwnerId $sourceAccountId -ProfileName $sourceAccountAwsProfileName -Region $sourceRegion |
                Where-Object { $_.Description -like "*$imageId*" }

foreach ($snapshot in $imageSnapshots) {
    Edit-EC2SnapshotAttribute -SnapshotId $snapshot.SnapshotId -Attribute createVolumePermission -OperationType add -UserId $targetAccountId -ProfileName $sourceAccountAwsProfileName -Region $sourceRegion
}

Step 03: Copy AMI

At this point, if you go to the target account you should be able to see the image under the “Private images” category. Make sure to choose the same region as the source account, otherwise the image won’t be visible.
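
You can also verify this from the command line using the target profile. A quick sketch (note that it queries the source region, since we haven’t copied the image yet):

$imageId = Get-Variable AMI_ID -ValueOnly
$sharedImage = Get-EC2Image -ImageId $imageId -ProfileName $targetAccountAwsProfileName -Region $sourceRegion
Write-Host "Shared image state: [" $sharedImage.State "]"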

We have access to this image, but we want our own copy, which we create with the script below:

. .\"00. configuration.ps1"

$imageId = Get-Variable AMI_ID -valueOnly
$newImageId = Copy-EC2Image -SourceImageId $imageId -SourceRegion $sourceRegion -Name $amiName -ProfileName $targetAccountAwsProfileName -Region $targetRegion
Write-Host "New AMI ID: [" $newImageId "]"

In my experience the whole copying process took about 5 minutes.

Now that we have our own copy of the AMI we can launch instances as we please. Job (almost) done!
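
For example, once the copied AMI becomes available, launching an instance from it is a single cmdlet. A sketch, assuming $newImageId holds the ID returned by Copy-EC2Image above and using an instance type of my own choosing:

# Launch one instance from the copied AMI in the target account
New-EC2Instance -ImageId $newImageId -InstanceType t3.micro -MinCount 1 -MaxCount 1 -ProfileName $targetAccountAwsProfileName -Region $targetRegion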

Step 04: Clean Up

The final step is to clean up after ourselves. Since this AMI was created only to migrate the instance to the new account, I assume we won’t need it in the source account anymore. The following script deregisters the AMI and deletes all the associated snapshots.

. .\"00. configuration.ps1"

# The create-image script (Step 01) writes the AMI ID to a global variable. If it doesn't exist, get the image ID from the AWS Management Console
$imageId = Get-Variable AMI_ID -valueOnly

Write-Host "Unregistering image: [" $imageId "]"
Unregister-EC2Image -ImageId $imageId -ProfileName $sourceAccountAwsProfileName -Region $sourceRegion

$imageSnapshots = Get-EC2Snapshot -OwnerId $sourceAccountId -ProfileName $sourceAccountAwsProfileName -Region $sourceRegion |
                Where-Object { $_.Description -like "*$imageId*" }

foreach ($snapshot in $imageSnapshots) {
    Write-Host "Removing snapshot: [" $snapshot.SnapshotId "]"
    Remove-EC2Snapshot -SnapshotId $snapshot.SnapshotId -Force -ProfileName $sourceAccountAwsProfileName -Region $sourceRegion
}

# Delete variable
Remove-Variable AMI_ID -Scope global

In this post I’m going to show an example of playing audio in a Docker container.

Test Environment Setup

I’m going to use the .NET Core 3.1 runtime image:

docker pull mcr.microsoft.com/dotnet/core/runtime:3.1

Here’s my Dockerfile:

FROM mcr.microsoft.com/dotnet/core/runtime:3.1

# install the audio player and wget in a single layer so the package index is never stale
RUN apt-get update -y && apt-get install -y mpg123 wget

# copy to an absolute path so it matches the ENTRYPOINT regardless of the working directory
COPY ./play-audio.sh /play-audio.sh
RUN chmod +x /play-audio.sh

ENTRYPOINT ["/play-audio.sh"]

and the script (play-audio.sh) that plays the audio looks like this:

#!/bin/bash

url=$1
filename="${url##*/}"

# download the file only if it's not already there
if [ ! -f "$filename" ]; then
    echo "File doesn't exist. Downloading."
    wget "$url"
fi

# check which audio player exists; helpful for testing the script on its own on macOS
if hash mpg123 2>/dev/null; then
    echo "Playing file using mpg123"
    mpg123 "$filename"
elif hash afplay 2>/dev/null; then
    echo "Playing file using afplay"
    afplay "$filename"
else
    echo "No player could be found."
fi
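
Since the script falls back to afplay, it can also be sanity-checked directly on macOS before building the image:

./play-audio.sh https://file-examples.com/wp-content/uploads/2017/11/file_example_MP3_5MG.mp3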

I built the image with the following command:

docker build -t audio-test .

Testing the audio

Initially I ran a container like this:

docker run audio-test https://file-examples.com/wp-content/uploads/2017/11/file_example_MP3_5MG.mp3

Running the container like this failed: mpg123 couldn’t open an audio output device, because the container has no access to the host’s sound hardware.

Solution

The trick is to run the container with the following parameter, which exposes the host’s sound device to the container:

--device /dev/snd

So the full Docker run command looks like this:

docker run --rm --device /dev/snd audio-test https://file-examples.com/wp-content/uploads/2017/11/file_example_MP3_5MG.mp3

Conclusion

This was a long-winded setup for a very short solution but I enjoyed practicing with Bash scripting and Docker.

Unfortunately this solution works on the Raspberry Pi only and not on the Mac, where Docker runs containers inside a VM that has no access to the host’s audio hardware. Every resource I found points to installing a PulseAudio server on macOS and a PulseAudio client in the Docker image. I haven’t tried it yet as it was beyond the scope of my requirements, but I might need to implement it later, in which case I will post about it.

In this post, I’d like to show how to achieve continuous integration using the AWS CodeBuild service.

Main goals to achieve:

  • Run unit tests in a Docker container
  • Kick off build automatically when code is pushed
  • Show build status
  • Send notifications when tests fail

CodeBuild Setup

Step 01: Project Configuration

Specify a unique name and make sure to tick the “Enable build badge” checkbox.

Step 02: Source Selection

In this step, select GitHub as the source provider and choose the “Repository in my GitHub account” option. AWS will use OAuth and redirect to GitHub to ask for authorization to access your repositories. After granting access you should be able to see your repositories in the dropdown list.

I left “Source version” field blank as I want the build to run for all branches and for all commits.

Step 03: Webhook Configuration

Tick the “Rebuild every time a code change is pushed to this repository” checkbox and select PUSH from the event type list. You can select more events to trigger builds, but for CI purposes it should be enough to build every time code is pushed.

What this does is add a CodeBuild webhook to the GitHub repository. Whenever a push happens, GitHub posts the details to this webhook, which triggers a build.

Step 04: Environment Configuration

The build takes place in a Docker container, so we have to provide a Docker image with build tools installed. Alternatively, we can use one of the managed images that AWS provides. In this example I’ll use a managed image.

Step 05: Other Configuration

In this example, I will accept the defaults for the rest of the settings because I don’t need to generate artifacts to run unit tests. It’s generally wise to enable CloudWatch logs so that you can monitor the build process closely. Since I accept the default path for buildspec.yml, I have to place it at the root of the repository.

Running the tests

The core responsibility of a CI pipeline is to run the unit tests. CodeBuild is a generic service; it doesn’t come with any tools to run unit tests, as those are application- and environment-dependent. Instead, we configure the build steps with a buildspec.yml file. In this example I’m building a .NET Core project, and making sure the unit tests are run first is as easy as this:

version: 0.2

phases:
  install:
    runtime-versions:
        dotnet: 2.2
  build:
    commands:
      - dotnet restore
      - dotnet test
      - dotnet publish Sample.UI.Web -c Release -o ./output

This way CodeBuild executes the steps above in order, and all the unit tests in the solution are run. If dotnet test fails, it returns a non-zero exit code, which stops the build and marks it as failed.

Run build automatically

Now that the GitHub repository and the CodeBuild project are both ready, let’s see if we can kick off a build by pushing some code changes. After pushing code, including to a feature branch, I could see that builds were triggered, which confirmed that they run automatically.

As a side note, the failed builds were failing due to this error:

YAML_FILE_ERROR: This build image requires selecting at least one runtime version.

The solution was to specify the runtime in the buildspec.yml file explicitly by adding this bit:

install:
  runtime-versions:
      dotnet: 2.2

Also it’s worth noting that the source version value in CodeBuild corresponds to the commit hash of the push that triggered the build.

Showing build status

Showing the build status on the GitHub repository is very easy. We already enabled the build badge while creating the build project; now we just have to copy the badge URL from the project’s details in the AWS console.

Then in the GitHub repository, edit the readme.md file and add the following:

![Build status](badge URL copied from AWS console)

Now if you go to the GitHub repository page and refresh, you can see the latest build status (of the master branch).

After I fixed the error and merged into the master branch, I could see the “build passing” badge as well.

Notifications

It would be nice to be notified directly when builds fail. This can be achieved using CloudWatch Events. In this sample project I’m going to use SNS to send email notifications.

First, I went to CloudWatch, created a rule for the CodeBuild build state change events, gave it a name, and set the SNS topic as the target.
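
For reference, an equivalent rule can also be scripted with AWS Tools for PowerShell. This is just a sketch: the rule name and topic ARN are placeholders, and I’ve narrowed the event pattern to failed builds only, whereas the console rule above matched all state changes:

$ruleName = "codebuild-build-failed"                               # placeholder name
$topicArn = "arn:aws:sns:us-east-1:xxxxxxxxx:build-notifications"  # placeholder ARN

# Fire only when a CodeBuild build fails
$eventPattern = '{
  "source": ["aws.codebuild"],
  "detail-type": ["CodeBuild Build State Change"],
  "detail": { "build-status": ["FAILED"] }
}'

Write-CWERule -Name $ruleName -EventPattern $eventPattern -State ENABLED

# Point the rule at the SNS topic
$target = New-Object Amazon.CloudWatchEvents.Model.Target
$target.Id = "sns-target"
$target.Arn = $topicArn
Write-CWETarget -Rule $ruleName -Target $target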

After that, I broke a test intentionally and received an email about the build failure, with the event details in JSON format.

Conclusion

In this post I wanted to show a continuous integration pipeline using GitHub and CodeBuild. It can further be improved by posting the build status to Slack so that the whole team can get the notifications instantly. For the time being I achieved the goals I set out for initially so I’ll wrap it up here.

Source Code

Source code can be found in the accompanying repository, under the blog/CodeBuild_CI_Pipeline folder.
