
Archive for the ‘Elastic Compute Cloud (EC2)’ Category

Behind the Scenes of the Amazon Appstore Test Drive

The beta launch of Test Drive last week on select Android devices has many developers wondering: how does it work?

Jeff Barr, Amazon Web Services Technical Evangelist, has taken the time to walk through the technology behind Test Drive on the AWS blog. Test Drive is hosted on Amazon Elastic Compute Cloud (EC2), so the Amazon Appstore team can easily add capacity whenever it is needed and where it makes the most sense with respect to incoming traffic.

Check out Jeff’s full post on the Amazon Web Services blog.

SOURCE

Uploading Known ssh Host Key in EC2 user-data Script

June 2, 2012

I have not tested this personally, but it seems to be correctly put by the original author. If you try it, do let me know if you find any catches. 🙂

The ssh protocol uses two different keys to keep you secure:

  1. The user ssh key is the one we normally think of. This authenticates us to the remote host, proving that we are who we say we are and allowing us to log in.
  2. The ssh host key gets less attention, but is also important. This authenticates the remote host to our local computer and proves that the ssh session is encrypted so that nobody can be listening in.

Every time you see a prompt like the following, ssh is checking the host key and asking you to make sure that your session is going to be encrypted securely.

The authenticity of host 'ec2-...' can't be established.
ECDSA key fingerprint is ca:79:72:ea:23:94:5e:f5:f0:b8:c0:5a:17:8c:6f:a8.
Are you sure you want to continue connecting (yes/no)?
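For reference (this aside is not from the original post), the fingerprint in the prompt corresponds to the host key stored on the server itself; on a machine you already trust, you can print it and compare:

# Print the fingerprint of the server's ECDSA host key for comparison with the ssh prompt.
ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub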

If you answer “yes” without verifying that the remote ssh host key fingerprint is the same, then you are basically saying:

I don’t need this ssh session encrypted. It’s fine for any man-in-the-middle to intercept the communication.

Ouch! (But a lot of people do this.)

Note: If you have a line like the following in your ssh config file, then you are automatically answering “yes” to this prompt for every ssh connection.

# DON'T DO THIS!
StrictHostKeyChecking false

Care about security

Since you do care about security and privacy, you want to verify that you are talking to the right server using encryption and that no man-in-the-middle can intercept your session.

There are a couple approaches you can take to check the fingerprint for a new Amazon EC2 instance. The first is to wait for the console output to be available from the instance, retrieve it, and verify that the ssh host key fingerprint in the console output is the same as the one which is being presented to you in the prompt.
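As a rough sketch of that console-output approach (not from the original post; the exact fingerprint block depends on the AMI's boot output), you could do something like this with the classic EC2 API tools:

# Fetch the console output and show the ssh host key fingerprint block printed at first boot.
ec2-get-console-output $instanceid | grep -A5 'BEGIN SSH HOST KEY FINGERPRINTS'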

Scott Moser has written a blog post describing how to verify ssh keys on EC2 instances. It’s worth reading so that you understand the principles and the official way to do this.

The rest of this article is going to present a different approach that lets you in to your new instance quickly and securely.

Passing ssh host key to new EC2 instance

Instead of letting the new EC2 instance generate its own ssh host key and waiting for it to communicate the fingerprint through the EC2 console output, we can generate the new ssh host key on our local system and pass it to the new instance.

Using this approach, we already know the public side of the ssh key so we don’t have to wait for it to become available through the console (which can take minutes).

Generate a new ssh host key for the new EC2 instance.

tmpdir=$(mktemp -d /tmp/ssh-host-key.XXXXXX)
keyfile=$tmpdir/ssh_host_ecdsa_key
ssh-keygen -q -t ecdsa -N "" -C "" -f $keyfile

Create the user-data script that will set the ssh host key.

userdatafile=$tmpdir/set-ssh-host-key.user-data
cat <<EOF >$userdatafile
#!/bin/bash -xeu
cat <<EOKEY >/etc/ssh/ssh_host_ecdsa_key
$(cat $keyfile)
EOKEY
cat <<EOKEY >/etc/ssh/ssh_host_ecdsa_key.pub
$(cat $keyfile.pub)
EOKEY
EOF

Run an EC2 instance, say Ubuntu 11.10 Oneiric, passing in the user-data script. Make a note of the new instance id.

ec2-run-instances --key $USER --user-data-file $userdatafile ami-4dad7424

instanceid=i-...

Wait for the instance to get a public DNS name and make a note of it.

ec2-describe-instances $instanceid

host=ec2-...compute-1.amazonaws.com

Add new public ssh host key to our local ssh known_hosts after removing any leftover key (e.g., from previous EC2 instance at same IP address).

knownhosts=$HOME/.ssh/known_hosts

ssh-keygen -R $host -f $knownhosts
ssh-keygen -R $(dig +short $host) -f $knownhosts

(
  echo -n "$host "; cat $keyfile.pub
  echo -n "$(dig +short $host) "; cat $keyfile.pub
) >> $knownhosts

When the instance starts running and the user-data script has executed, you can ssh in to the server without being prompted to verify the fingerprint:

ssh ubuntu@$host 

Don’t forget to clean up and to terminate your test instance.

rm -rf $tmpdir
ec2-terminate-instances $instanceid

Caveat

There is one big drawback in the above sample implementation of this approach. We have placed secret information (the private ssh host key) into the EC2 user-data, which I generally recommend against.

Any user who can log in to the instance or who can cause the instance to request a URL and get the output, can retrieve the user-data. You might think this is unlikely to happen, but I’d rather avoid or minimize unnecessary risk.

In a production implementation of this approach, I would take steps like the following:

  1. Upload the new ssh host key to S3 in a private object.
  2. Generate an authenticated URL to the S3 object and have that URL expire in, say, 10 minutes.
  3. In the user-data script, download the ssh host key with the authenticated, expiring S3 URL.

Now, there is a short window of exposure and you don’t have to worry about protecting the user-data after the URL has expired.
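A minimal sketch of those production steps using the modern AWS CLI, which post-dates the original article; the bucket and object names here are placeholders:

# Upload the generated private host key to a private S3 object (hypothetical bucket/key).
aws s3 cp $keyfile s3://my-config-bucket/ssh_host_ecdsa_key

# Generate a pre-signed URL that expires in 10 minutes (600 seconds).
url=$(aws s3 presign s3://my-config-bucket/ssh_host_ecdsa_key --expires-in 600)

# In the user-data script, fetch the key with the expiring URL, for example:
# curl -s "$url" > /etc/ssh/ssh_host_ecdsa_key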

SOURCE

AWS – Migrate Linux AMI (EBS) using CloudyScripts

In a typical Amazon Web Services (AWS) environment, Amazon Machine Images (AMIs) are available only within a particular region. An AMI is shared across the different Availability Zones of its region, but it cannot be moved from one region to another directly. To copy an AMI to a different region, you can use a third-party tool called CloudyScripts.

CloudyScripts is a collection of tools to help you program infrastructure clouds. The web-based tool is self-explanatory and regularly updated. If you find any bugs, do not hesitate to email the owners right away.

Go to the CloudyScripts Copy AMI to a different region tool.

The AMI should be an EBS-backed Linux AMI only. The AWS Access Key and Secret Key can be found on the Security Credentials page of your AWS account. This information is unique to your account and can be misused: DO NOT share these details with anyone.

The key pairs provided should be generated in both the source and target regions before using the tool. Provide the corresponding .pem key files.

AWS discourages use of the “root” user for logging in to AWS EC2 instances.

You may use different ssh users instead, such as “ec2-user” for Amazon Linux instances or “ubuntu” for Ubuntu instances.

The tool displays progress output as it runs.

Once the copy completes, verify that the AMI is registered in the destination region as private to you, i.e. you are the owner.

If you opt to receive the status by email, enter your email address in the status window; a status mail will be sent when the copy completes.

AWS EBS-Backed Instance Backup & Restore

Starting with the 2009-10-31 API, Amazon Web Services (AWS) has a new type of Amazon Machine Image (AMI) that stores its root device as an Amazon Elastic Block Store (EBS) volume. They refer to these AMIs as Amazon EBS-backed. When an instance of this type of AMI launches, an Amazon EBS volume is created from the associated snapshot, and that volume becomes the root device. You can create an AMI that uses an Amazon EBS volume as its root device with Windows or Linux/UNIX operating systems. These instances can be easily backed up: you can modify the original instance to suit your particular needs and then save it as an EBS-backed AMI. If you later need the modified version of the instance, you can simply launch multiple new instances from the backed-up AMI and are ready to go.
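For illustration only (the walkthrough below uses the AWS Management Console, not the command line), the same backup can be scripted with the classic EC2 API tools; the instance ID, name, and description here are placeholders:

# Create an EBS-backed AMI from a running instance (placeholder ID, name, and description).
ec2-create-image i-xxxxxxxx -n "db-server-backup" -d "Backup of database server"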

Following are the steps to back up an AWS EBS-backed instance into an AWS AMI and restore it from that AMI. Brief steps for deleting an AMI backup are also noted for reference.
 

EBS-instance to EBS-backed AMI

  • Go to AWS Management Console and in the My Instances Pane, select the instance which has to be backed up.
  • Right click the instance and select option Create Image (EBS AMI).
  • In the Create Image dialog box, give proper AMI Name and Description. Click on Create This Image button.
  • The image creation will be in progress. This will take some time depending upon the number and size of volumes attached to the instance. Click on the View pending image link. It will take you to the AMIs pane.
  • The AMI will be in pending state. It is important to note that this AMI is private to the account and not available for AWS public use.
  • If you select Snapshots from the Navigation Pane, you can see that the EBS volumes attached to the instance are backed up as snapshots too.
  • Once the backup is done, the AMI will be in available state.

 

Restore from backup AMI into instance

In case the running instance needs to be restored, use the latest backup AMI. To launch an instance from this AMI, right-click the AMI and select the Launch Instance option. The Launch Instance Wizard will be displayed; perform the usual configuration and a new instance will be created containing all the data and configuration done before the backup.

 

Delete AMI & Snapshots:

  • To delete any AMI, right-click it and select De-register AMI.
  • Remember, de-registering an AMI doesn’t delete the EBS volume snapshots. Click on Snapshots in the Navigation pane, search for and select the snapshot(s) to be deleted, then right-click the snapshot(s) and select the Delete Snapshot option. (A scripted equivalent is sketched below.)
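A minimal scripted equivalent with the classic EC2 API tools (the AMI and snapshot IDs are placeholders):

# De-register the AMI, then delete the snapshot(s) that backed it (placeholder IDs).
ec2-deregister ami-xxxxxxxx
ec2-delete-snapshot snap-xxxxxxxx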
 


Microsoft SharePoint on the AWS Cloud

Amazon Web Services (AWS) provides services & tools for deploying Microsoft® SharePoint® workloads on its cloud infrastructure platform. This white paper discusses general concepts regarding how to use these services and provides detailed technical guidance on how to configure, deploy, and run a SharePoint Server farm on AWS.

Learn how to deploy SharePoint quickly and at a lower total cost on the AWS Cloud.

Install and Configure MySQL on CentOS

MySQL is the world’s most popular open source database.

MySQL Community Edition is the freely downloadable version.

Commercial customers have the flexibility of choosing from multiple editions to meet specific business and technical requirements. For more details please refer to the MySQL official website.

INSTALL :

On any CentOS server with internet access, run the command below to install MySQL Community Edition:

yum install mysql-server mysql php-mysql

OR

Download the server and client RPM files from the MySQL website depending upon the platform (OS) and architecture (32/64-bit).

Install both RPM files using the command below:

rpm -ivh <<rpm_filenames>>

Example:

rpm -ivh mysql-server-version.rpm mysqlclient9-version.rpm

CONFIGURE :

Once installed, run the below commands to configure MySQL Server:

1. Set the MySQL service to start on boot

chkconfig --level 235 mysqld on

2. Start the MySQL service

service mysqld start

3. By default the root user will have no password, so to log in to MySQL use the command:

mysql -u root

4. To exit the MySQL console, enter the command below:

exit;

SET PASSWORD FOR ROOT :

To set the root user password for all local domains, log in and run the commands below:

SET PASSWORD FOR 'root'@'localhost' = PASSWORD('<<new-password>>');

SET PASSWORD FOR 'root'@'localhost.localdomain' = PASSWORD('<<new-password>>');

SET PASSWORD FOR 'root'@'127.0.0.1' = PASSWORD('<<new-password>>');

(Replace <<new-password>> with actual password)

OR

run below command at linux shell:

mysqladmin -u root password '<<new-password>>'

(Replace <<new-password>> with actual password)

Once the password is set, log in to MySQL using the command below:

mysql -u root -p

Once you enter the above command, you will be prompted for the root password.

ADD NEW USER :

To add a new user for MySQL login, use the SQL query below. Remember, this query must be run from the MySQL prompt with the mysql database selected (run USE mysql; first).

for localhost:

INSERT INTO user (Host, User, Password, Select_priv, Insert_priv, Update_priv, Delete_priv, Create_priv, Drop_priv, Reload_priv, Shutdown_priv, Process_priv, File_priv, Grant_priv, References_priv, Index_priv, Alter_priv, Show_db_priv, Super_priv, Create_tmp_table_priv, Lock_tables_priv, Execute_priv, Repl_slave_priv, Repl_client_priv, Create_view_priv, Show_view_priv, Create_routine_priv, Alter_routine_priv, Create_user_priv, Event_priv, Trigger_priv, ssl_type, ssl_cipher, x509_issuer, x509_subject, max_questions, max_updates, max_connections, max_user_connections) VALUES ('localhost', '<<USERNAME>>', password('<<PASSWORD>>'), 'Y','Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'N', 'N', '', '', '', '', 0, 0, 0, 0);

for anyhostname:

INSERT INTO user (Host, User, Password, Select_priv, Insert_priv, Update_priv, Delete_priv, Create_priv, Drop_priv, Reload_priv, Shutdown_priv, Process_priv, File_priv, Grant_priv, References_priv, Index_priv, Alter_priv, Show_db_priv, Super_priv, Create_tmp_table_priv, Lock_tables_priv, Execute_priv, Repl_slave_priv, Repl_client_priv, Create_view_priv, Show_view_priv, Create_routine_priv, Alter_routine_priv, Create_user_priv, Event_priv, Trigger_priv, ssl_type, ssl_cipher, x509_issuer, x509_subject, max_questions, max_updates, max_connections, max_user_connections) VALUES ('%', '<<USERNAME>>', password('<<PASSWORD>>'), 'Y','Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'Y', 'N', 'N', '', '', '', '', 0, 0, 0, 0);

Replace <<USERNAME>> and <<PASSWORD>> with the actual username and password respectively.

Note that they must be enclosed in single quotes. After inserting directly into the user table, run FLUSH PRIVILEGES; so that MySQL reloads the grant tables and the new user takes effect.
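As a simpler alternative (not from the original post), MySQL can create the user and grant privileges for you, without touching the grant tables directly; the username, password, and privilege scope below are placeholders:

# Run from the Linux shell; you will be prompted for the MySQL root password.
mysql -u root -p <<'SQL'
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'app-password';
GRANT ALL PRIVILEGES ON *.* TO 'appuser'@'localhost';
FLUSH PRIVILEGES;
SQL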

DROP ANY USER :

In case you want to drop any user, use the commands below:

DROP USER '<<username>>'@'localhost';

DROP USER '<<username>>'@'localhost.localdomain';

(Replace <<username>> with actual username)


For more help and commands, refer to http://www.yolinux.com/TUTORIALS/LinuxTutorialMySQL.html

Create and Attach AWS EBS Volume to AWS EC2 – Linux

May 20, 2012

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Amazon EC2 enables “compute” in the cloud.

Amazon Elastic Block Store (EBS) provides block level storage volumes for use with Amazon EC2 instances. EBS provides highly available, highly reliable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance. It persists independently from the life of an instance. These EBS volumes are created in a particular Availability Zone and can be from 1 GB to 1 TB in size.

Follow the steps below to create, attach, and mount EBS volumes on launched EC2 instances:

Create the EBS Volume

Log in to the AWS Management Console and follow the steps below for each extra volume to be attached to an instance. For example, let’s create and attach a 6 GB EBS volume (for Oracle alert logs and traces) to the database server.

• Choose “Volumes” in the left-hand navigation pane.

• In the right-hand pane under EBS Volumes, click on ‘Create Volume’

• In the Create Volume dialog box that appears:
Enter the required size (6 GB in this example), keep the Availability Zone the same as that of the database instance, select No Snapshot, and click on ‘Create’.

• This will create an EBS volume; once creation is complete, the volume will be listed with the status ‘available’.

Attach Volume

• Select the volume and click on the Attach Volume button.

• Select the instance to which the EBS volume is to be attached. Also specify the device name for the volume.
Here the instance is the database server and the mount device is /dev/sdf.

• Once attached, the volume will be listed with the status ‘in-use’. (A scripted equivalent of the create and attach steps is sketched below.)
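A minimal scripted equivalent of the create and attach steps, using the classic EC2 API tools (the Availability Zone, volume ID, and instance ID are placeholders):

# Create a 6 GB volume in the same Availability Zone as the instance (placeholder zone).
ec2-create-volume --size 6 --availability-zone us-east-1a

# Attach the new volume to the database instance as /dev/sdf (placeholder IDs).
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf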

Mount the Volume

• Execute the following commands in the EC2 instance’s (database server’s) Linux shell. As this is a new volume (with no data), we will have to format it first.
Run the command:

mkfs -t ext3 /dev/sdf

(Replace /dev/sdf with the mount device used in the previous step)

• Make a directory to mount the device.

mkdir /mnt/disk1

• Mount the device in newly created directory

mount /dev/sdf /mnt/disk1

(Replace the device and mount directory as required)

• By default, the volume will not be mounted on the instance after a reboot. To mount it at the given mount point on every reboot, add an entry to /etc/fstab:

echo "/dev/sdf /mnt/disk1 ext3 noatime 0 0" >> /etc/fstab

(Replace the device, mount point, and filesystem type as required)

Check the attached volume by using the command: df -h

Unmounting the volume

From the Elastic Block Storage Feature Guide: A volume must be unmounted inside the instance before being detached. Failure to do so will result in damage to the file system or the data it contains.

umount /mnt/disk1

Remember to cd out of the volume first, otherwise you will get an error message:

umount: /mnt/disk1: device is busy
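Once the volume is unmounted, it can also be detached from the instance; a sketch with the classic EC2 API tools (the volume ID is a placeholder):

# Detach the unmounted volume from its instance (placeholder ID).
ec2-detach-volume vol-xxxxxxxx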

Hope the above steps help you get into action in minutes.

In case you get stuck at any point, do comment below. I will be glad to help. 🙂