Tags: linux, aws, ec2, ebs

I recently decided to purchase a reserved t3.nano instance to run some Docker containers and for general testing purposes. In addition to the default volume, I decided to add a new one to separate my files from the OS. It took a few steps to get everything in place, so I'm posting this mostly for future reference!

Attach a volume during creation

First I added a new volume to the instance while creating it.
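I did this through the console wizard, but the same thing can be done with the AWS CLI. A minimal sketch, assuming placeholder values for the AMI ID and key pair name, and an extra 8 GiB gp2 volume attached as /dev/xvdf:

aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.nano \
  --key-name my-key \
  --block-device-mappings '[{"DeviceName":"/dev/xvdf","Ebs":{"VolumeSize":8,"VolumeType":"gp2"}}]'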

Connect to the instance

Now we have to connect to the instance to format the new volume. To do that we need the private key we generated when creating the instance. To SSH into the machine, we run this command:

ssh -i {/Path/To/Key/file_name.pem} ec2-user@{public DNS name of the instance}

Format the volume

I found some AWS documentation to achieve this which was very useful: Making an Amazon EBS Volume Available for Use on Linux

No need to repeat every command in that documentation. It's a simple step-by-step guide. Just follow it and you end up with a formatted volume that is also mounted at startup.
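For future reference, the essence of that guide looks something like the following sketch. It assumes the new volume shows up as /dev/xvdf and uses an XFS filesystem mounted at /data; check the actual device name with lsblk first:

lsblk                                   # identify the new device, e.g. xvdf
sudo mkfs -t xfs /dev/xvdf              # create a filesystem on the empty volume
sudo mkdir /data                        # create a mount point
sudo mount /dev/xvdf /data              # mount the volume
sudo blkid                              # note the volume's UUID
echo 'UUID=<uuid-from-blkid> /data xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab

The fstab entry uses the UUID rather than the device name, as the guide recommends, since device names can change between reboots.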

Install and configure Docker

Installing Docker is as simple as running this:

sudo yum update -y
sudo yum install -y docker

To be able to use Docker without sudoing everything, add the ec2-user to the docker group:

sudo usermod -aG docker ec2-user

We also need to make sure the Docker daemon starts on reboot:

sudo systemctl enable docker
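Note that enable only takes effect on the next boot. To start the daemon right away and verify it works without sudo (the group change above requires logging out and back in first):

sudo systemctl start docker
docker info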

Copy files to the instance

To copy some files to the new instance I used the scp command:

sudo scp -i {/Path/To/Key/file_name.pem} -r {/Path/To/Local/Folder/} ec2-user@{public DNS name of the instance}:/Remote/Folder

The issue was that ec2-user didn't initially have write access to the remote folder. In that case you can grant access by running the following command on the instance:

sudo setfacl -m u:ec2-user:rwx /Remote/Folder
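You can verify the ACL took effect with getfacl:

getfacl /Remote/Folder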

Tags: aws, security, cloudtrail, audit

Another important service under the Management & Governance category is CloudTrail.

A nice thing about this service is that it's already enabled in a limited capacity: you can view all AWS API calls made in your account within the last 90 days, completely free and out of the box.

To get more out of CloudTrail, we need to create trails.

Inside Trails

A couple of options when creating a trail (a CLI sketch follows the list):

  • It can apply to all regions
  • It can apply to all accounts in an organization
  • It can record all management events
  • It can record data events from S3 and Lambda functions
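If you prefer the CLI over the console, a sketch of creating such a trail looks like this; the trail and bucket names are placeholders, and the bucket needs a policy that allows CloudTrail to write to it:

aws cloudtrail create-trail \
  --name org-trail \
  --s3-bucket-name my-cloudtrail-logs \
  --is-multi-region-trail \
  --is-organization-trail
aws cloudtrail start-logging --name org-trail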

Just be mindful of the possible extra charges when you log every event for all organization accounts.

Testing the logs

I created a trail that logs all events for all organization accounts, then created an IAM user in another account in the organization. The resulting events show up in the event history of that member account.

These events can now be tracked from the master account as well. In the S3 bucket, the events are organized by account ID and date, and stored in JSON format.
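The key layout follows a predictable pattern, so a specific account and day can be browsed directly; the bucket name, organization ID, account ID, and region below are all placeholders:

aws s3 ls s3://my-cloudtrail-logs/AWSLogs/o-exampleorgid/123456789012/CloudTrail/eu-west-1/2019/03/01/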

Conclusion

Having central storage of all events across all regions and accounts is a great tool to have. Having the raw data is a good start, but making sense of that data is even more important. I'll keep on exploring CloudTrail and getting more out of it to harden my accounts.

Tags: aws, security, aws_config, audit

In my previous blog post I talked about creating an IAM admin user and using that instead of the root user all the time. Applying such best practices is a good idea, which also begs the question: how can I enforce these rules?

AWS Config

The official description of the service is: “AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.”

What this means is that you select some pre-defined rules, or implement your own custom rules, and AWS Config constantly runs checks against your resources and notifies you when it finds non-compliant ones.

Since I'm currently interested in hardening IAM users, in the next example I'm going to use an IAM check.

Use case: Enforcing MFA

As of this writing there are 80 managed Config rules. To enforce MFA, I simply searched for MFA in the “Add rule” screen and got 5 matches, of which I selected only 3.

After I accepted the default settings, it was able to identify my IAM user without MFA.
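The same managed rules can be enabled and queried from the CLI as well. A sketch for one of them, using the managed source identifier IAM_USER_MFA_ENABLED (the rule name itself is my own choice):

aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "iam-user-mfa-enabled",
  "Source": {"Owner": "AWS", "SourceIdentifier": "IAM_USER_MFA_ENABLED"}
}'
aws configservice describe-compliance-by-config-rule --config-rule-names iam-user-mfa-enabled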

And it comes with a nice little dashboard that shows all your non-compliant resources.

It also supports notifications via SNS. It creates a topic; all you have to do is subscribe to it with an email address, and after confirming the address you start receiving emails.
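Subscribing works from the CLI too; the topic ARN and email address below are placeholders:

aws sns subscribe \
  --topic-arn arn:aws:sns:eu-west-1:123456789012:config-topic \
  --protocol email \
  --notification-endpoint me@example.com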

I was only expecting to get emails about non-compliant resources, but it's a bit noisy: it also sends emails with subjects like “Configuration History Delivery Completed” or “Configuration Snapshot Delivery Started”, which didn't mean much to me.

Pricing

I think the price is extremely high. The details can be found on their pricing page, but in a nutshell a single rule costs $2/month. So for the above example I paid $6, which is a lot of money in terms of the resources used.

Conclusion

I like the idea of having an auditing system with notifications, but at this price I don't think I will use it.

I will keep on exploring, though, as I'm keen on implementing my own custom rules with AWS Config, and also implementing them without it, to see whether the service adds any benefit over scheduled Lambda functions.
