I have not tested this personally, but it seems to be correctly put by Eric Hammond. If you try it, do let me know if you find any catches. 🙂
The ssh protocol uses two different keys to keep you secure:
- The user ssh key is the one we normally think of. This authenticates us to the remote host, proving that we are who we say we are and allowing us to log in.
- The ssh host key gets less attention, but is also important. This authenticates the remote host to our local computer, ensuring that the encrypted session is established with the right machine and that nobody can be listening in.
Every time you see a prompt like the following, ssh is checking the host key and asking you to make sure that your session is going to be encrypted securely.
The authenticity of host 'ec2-...' can't be established.
ECDSA key fingerprint is ca:79:72:ea:23:94:5e:f5:f0:b8:c0:5a:17:8c:6f:a8.
Are you sure you want to continue connecting (yes/no)?
If you answer “yes” without verifying that the remote ssh host key fingerprint is the same, then you are basically saying:
I don’t need this ssh session encrypted. It’s fine for any man-in-the-middle to intercept the communication.
Ouch! (But a lot of people do this.)
Note: If you have a line like the following in your ssh config file, then you are automatically answering “yes” to this prompt for every ssh connection.
# DON'T DO THIS!
StrictHostKeyChecking no
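The safe setting, and OpenSSH's default, leaves the prompt in place:

```
# Prompt before trusting unknown host keys (OpenSSH's default):
StrictHostKeyChecking ask
```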
Care about security
Since you do care about security and privacy, you want to verify that you are talking to the right server using encryption and that no man-in-the-middle can intercept your session.
There are a couple of approaches you can take to check the fingerprint for a new Amazon EC2 instance. The first is to wait for the console output to be available from the instance, retrieve it, and verify that the ssh host key fingerprint in the console output is the same as the one which is being presented to you in the prompt.
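To do the comparison, you can compute the fingerprint of any public key file yourself with ssh-keygen -l. Here's a self-contained demo (the throwaway key and temp directory are just for illustration; note that newer OpenSSH prints SHA256 fingerprints by default, and you would add -E md5 to get the colon-separated form shown in the prompt above):

```shell
# Generate a throwaway ECDSA key just to demonstrate (any public key file works):
tmp=$(mktemp -d)
ssh-keygen -q -t ecdsa -N "" -f "$tmp/demo_key"

# Print the fingerprint of the public half, as ssh would show it in the prompt:
ssh-keygen -lf "$tmp/demo_key.pub"

rm -rf "$tmp"
```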
Scott Moser has written a blog post describing how to verify ssh keys on EC2 instances. It’s worth reading so that you understand the principles and the official way to do this.
The rest of this article is going to present a different approach that lets you in to your new instance quickly and securely.
Passing ssh host key to new EC2 instance
Instead of letting the new EC2 instance generate its own ssh host key and waiting for it to communicate the fingerprint through the EC2 console output, we can generate the new ssh host key on our local system and pass it to the new instance.
Using this approach, we already know the public side of the ssh key so we don’t have to wait for it to become available through the console (which can take minutes).
Generate a new ssh host key for the new EC2 instance.
tmpdir=$(mktemp -d /tmp/ssh-host-key.XXXXXX)
keyfile=$tmpdir/ssh_host_ecdsa_key
ssh-keygen -q -t ecdsa -N "" -C "" -f $keyfile
Create the user-data script that will set the ssh host key.
userdatafile=$tmpdir/set-ssh-host-key.user-data
cat <<EOF >$userdatafile
#!/bin/bash -xeu
cat <<EOKEY >/etc/ssh/ssh_host_ecdsa_key
$(cat $keyfile)
EOKEY
cat <<EOKEY >/etc/ssh/ssh_host_ecdsa_key.pub
$(cat $keyfile.pub)
EOKEY
EOF
Run an EC2 instance, say Ubuntu 11.10 Oneiric, passing in the user-data script. Make a note of the new instance id.
ec2-run-instances --key $USER --user-data-file $userdatafile ami-4dad7424
instanceid=i-...
Wait for the instance to get a public DNS name and make a note of it.
ec2-describe-instances $instanceid
host=ec2-...compute-1.amazonaws.com
Add the new public ssh host key to our local ssh known_hosts, after removing any leftover key (e.g., from a previous EC2 instance at the same IP address).
knownhosts=$HOME/.ssh/known_hosts
ssh-keygen -R $host -f $knownhosts
ssh-keygen -R $(dig +short $host) -f $knownhosts
(
  echo -n "$host "; cat $keyfile.pub
  echo -n "$(dig +short $host) "; cat $keyfile.pub
) >> $knownhosts
When the instance starts running and the user-data script has executed, you can ssh in to the server without being prompted to verify the fingerprint.
Don’t forget to clean up and to terminate your test instance.
rm -rf $tmpdir
ec2-terminate-instances $instanceid
There is one big drawback in the above sample implementation of this approach. We have placed secret information (the private ssh host key) into the EC2 user-data, which I generally recommend against.
Any user who can log in to the instance, or who can cause the instance to request a URL and return the output, can retrieve the user-data. You might think this is unlikely to happen, but I’d rather avoid or minimize unnecessary risk.
In a production implementation of this approach, I would take steps like the following:
- Upload the new ssh host key to S3 in a private object.
- Generate an authenticated URL to the S3 object and have that URL expire in, say, 10 minutes.
- In the user-data script, download the ssh host key with the authenticated, expiring S3 URL.
Now, there is a short window of exposure and you don’t have to worry about protecting the user-data after the URL has expired.
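With today's AWS CLI, that production flow can be sketched roughly like this (the bucket name and object key are hypothetical, and this is not wired into a full user-data script here):

```shell
# Upload the private host key to a private S3 object (bucket/key are examples):
aws s3 cp $keyfile s3://my-private-bucket/host-keys/ssh_host_ecdsa_key

# Generate an authenticated URL that expires in 10 minutes (600 seconds):
url=$(aws s3 presign s3://my-private-bucket/host-keys/ssh_host_ecdsa_key \
  --expires-in 600)

# In the user-data script, fetch the key with the expiring URL instead of
# embedding the key material itself:
curl -s "$url" > /etc/ssh/ssh_host_ecdsa_key
chmod 600 /etc/ssh/ssh_host_ecdsa_key
```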
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
Amazon Elastic Block Store (EBS) provides block level storage volumes for use with Amazon EC2 instances. EBS provides highly available, highly reliable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance. It persists independently from the life of an instance. These EBS volumes are created in a particular Availability Zone and can be from 1 GB to 1 TB in size.
Follow the steps below to create, attach, and mount EBS volumes on launched EC2 instances:
Create the EBS Volume
Log in to the AWS Management Console and follow the steps below for each extra volume to be attached to an instance. For example, let’s create and attach a 6 GB EBS volume (for Oracle alert logs and traces) to the database server.
• Choose “Volumes” on the left-hand control panel:
• In the right-hand pane under EBS Volumes, click on ‘Create Volume’
• In the Create Volume dialog box that appears, enter the size (6 GB in our example), keep the availability zone the same as that of the database instance, select No Snapshot, and click ‘Create’.
• This will create an EBS volume; once creation is complete, the volume will be displayed with the status ‘available’.
• Select the volume and click the ‘Attach Volume’ button.
• Select the instance to which the EBS volume is to be attached, and specify the device name for the volume. Here the instance is the database server and the device is /dev/sdf.
• Once attached, the volume will be displayed with the status ‘in-use’.
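If you prefer the command line, the same create-and-attach steps can be sketched with the EC2 API tools (the availability zone, volume ID, and instance ID below are placeholders):

```shell
# Create a 6 GB volume in the same availability zone as the instance:
ec2-create-volume --size 6 --availability-zone us-east-1a

# Attach the new volume to the database instance as /dev/sdf:
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf
```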
Mount the Volume
• Execute the following commands in the EC2 instance’s (database server’s) Linux shell. As this is a new volume (with no data), we first have to format it:
mkfs -t ext3 /dev/sdf
(Replace /dev/sdf with the device used in the previous step.)
• Make a directory to mount the device:
mkdir /mnt/disk1
• Mount the device on the newly created directory:
mount /dev/sdf /mnt/disk1
(Replace the device and directory as required.)
• By default, the volume will not be mounted after an instance reboot. To mount it at the given mount point on every reboot, add an entry to /etc/fstab:
echo "/dev/sdf /mnt/disk1 ext3 noatime 0 0" >> /etc/fstab
(Replace the device, mount point, and filesystem type as required.)
• Check the mounted volume using the command:
df -h
Unmounting the volume
From the Elastic Block Storage Feature Guide: A volume must be unmounted inside the instance before being detached. Failure to do so will result in damage to the file system or the data it contains.
Remember to cd out of the volume first, otherwise you will get the error message:
“umount: /mnt/disk1: device is busy”
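The unmount-and-detach sequence looks like this (assuming the /mnt/disk1 mount point from above; the volume ID is a placeholder):

```shell
cd /                # make sure no shell is sitting inside the mount point
umount /mnt/disk1   # unmount inside the instance first

# Then, from your local machine, detach the volume:
ec2-detach-volume vol-xxxxxxxx
```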
Hope the above steps help you get into action in minutes.
In case you get stuck at any point, do comment below. I will be glad to help. 🙂
FUSE-based file system backed by Amazon S3.
S3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket as a local filesystem. It doesn’t store anything on the EC2 instance itself, but users can access the data on S3 from the EC2 instance as if a network drive were attached to it.
The s3fs project is written in C++ and backed by Amazon’s Simple Storage Service (S3). Amazon offers an open API to build applications on top of this service, which several companies have done, using a variety of interfaces (web, rsync, FUSE, etc.).
These steps are specific to an Ubuntu Server.
- Launch an Ubuntu Server on AWS EC2. (Recommended AMI – ami-4205e72b, username : ubuntu )
- Log in to the server using WinSCP / PuTTY.
- Type the below command to update the package lists on the server.
sudo apt-get update
- Type the below command to upgrade the installed packages. If any message is prompted, answer ‘y’ or ‘OK’ as applicable.
sudo apt-get upgrade
- Once the upgrade is complete, install the necessary libraries for FUSE with the following command.
sudo aptitude install build-essential libcurl4-openssl-dev libxml2-dev libfuse-dev comerr-dev libfuse2 libidn11-dev libkadm55 libkrb5-dev libldap2-dev libselinux1-dev libsepol1-dev pkg-config fuse-utils sshfs
If any message is prompted, answer ‘y’ or ‘OK’ as applicable.
- Once all the packages are installed, download the s3fs source (revision 177 as of this writing) from the Google Code project (adjust the URL if a newer revision is available):
wget http://s3fs.googlecode.com/files/s3fs-r177-source.tar.gz
- Untar and install the s3fs binary (run each command individually):
tar xzvf s3fs-r177-source.tar.gz
cd s3fs
make
sudo make install
- In order to use the allow_other option (see below), you will need to modify the FUSE configuration:
sudo vi /etc/fuse.conf
And uncomment the following line in the conf file (to uncomment a line, remove the ‘#’ symbol):
user_allow_other
Save the file by pressing ‘Esc’, then typing ‘:wq’.
- Now you can mount an S3 bucket. Create a directory using the command:
sudo mkdir -p /mnt/s3
Mount the bucket to the created directory
sudo s3fs bucketname -o accessKeyId=XXX -o secretAccessKey=YYY -o use_cache=/tmp -o allow_other /mnt/s3
Replace the XXX above with your real Amazon Access Key ID and YYY with your real Secret Access Key.
The command also instructs s3fs to cache the bucket’s files locally (in /tmp) and to allow other users to manipulate files in the mount.
Now any files written to /mnt/s3 will be replicated to your Amazon S3 bucket.
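A quick smoke test of the mount (the file name is arbitrary):

```shell
date | sudo tee /mnt/s3/mount-test.txt   # write a file through the mount
ls -l /mnt/s3/                           # the file should appear in the listing
```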
WinSCP – verify the mount directory
Check the wiki documentation for more options available to s3fs, including how to save your Access Key and Secret Key in /etc/passwd-s3fs.
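For reference, the /etc/passwd-s3fs format is a single accessKeyId:secretAccessKey line, and s3fs refuses to use the file if it is too widely readable. A sketch (the keys are placeholders):

```shell
echo 'XXX:YYY' | sudo tee /etc/passwd-s3fs   # format: accessKeyId:secretAccessKey
sudo chmod 640 /etc/passwd-s3fs              # s3fs rejects overly open permissions
```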