Enterprises with multiple locations face the twin challenges of managing rapid data growth and keeping employees connected. Do you need a solution that securely delivers read and write access to content shared across offices? If so, you won’t want to miss this video about Nasuni’s Data Continuity Services. Nasuni delivers multi-site access to a shared storage repository in the cloud that is locally available at every office or location in your organization.
Moderator: Debra Bulkeley, Sr Managing Editor – IDG Enterprise
Featured Speakers: Chris Glew, Product Evangelist, Nasuni
Register here for Video link- Bringing the Cloud to the Data Center
Nasuni – How it Works : http://www.nasuni.com/how_it_works
In a typical Amazon Web Services (AWS) environment, Amazon Machine Images (AMIs) are available only within the region where they were created. They can be shared across the different Availability Zones of that region, but they cannot be moved from one region to another. For that, you can use a third-party tool called CloudyScripts.
CloudyScripts is a collection of tools that help you program infrastructure clouds. The web-based tool is self-explanatory and regularly updated. If you find a bug, do not hesitate to email the owners right away.
Go to the CloudyScripts Copy AMI to different region tool.
The AMI must be an EBS-backed Linux AMI. The AWS Access Key and Secret Key can be found on the Security Credentials page of your AWS account. This information is unique to your account and can be misused, so DO NOT share these details with anyone.
A key pair should be generated in both the source and target regions before using the tool. Provide the corresponding .pem key files.
AWS discourages using the “root” user to log in to AWS EC2 instances.
Use a different SSH user instead, such as “ec2-user” for Amazon Linux instances or “ubuntu” for Ubuntu instances.
Output will be displayed as:
Verify that the AMI is registered in the destination region as Private, i.e. owned by you.
If you opt to receive a status email, enter your email address in the status window.
The mail will be received as:
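Note that this copy can nowadays also be scripted directly: more recent versions of the AWS CLI expose a native copy-image operation. A minimal sketch, assuming the AWS CLI is installed and configured with your credentials; the AMI ID, regions, and name are placeholders:

```shell
# Copy an EBS-backed AMI from us-east-1 into us-west-2 using the AWS CLI.
# ami-12345678 is a placeholder: substitute your own source AMI ID.
aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id ami-12345678 \
    --region us-west-2 \
    --name "my-copied-ami" \
    --description "Copy of ami-12345678 from us-east-1"
```

The command returns the new AMI ID in the destination region; the copy then proceeds asynchronously, just as with the web tool.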
Starting with the 2009-10-31 API, Amazon Web Services (AWS) offers a new type of Amazon Machine Image (AMI) that stores its root device as an Amazon Elastic Block Store (EBS) volume. AWS refers to these AMIs as Amazon EBS-backed. When an instance of this type of AMI launches, an Amazon EBS volume is created from the associated snapshot, and that volume becomes the root device. You can create an AMI that uses an Amazon EBS volume as its root device with Windows or Linux/UNIX operating systems. These instances can be easily backed up: you can modify the original instance to suit your particular needs and then save it as an EBS-backed AMI. If you later need the modified version of the instance, you can simply launch multiple new instances from the backed-up AMI and are ready to go.
The following steps back up an AWS EBS instance into an AMI and restore an instance from one. Brief steps for deleting an AMI backup are also noted for reference.
EBS-instance to EBS-backed AMI
- Go to the AWS Management Console and, in the My Instances pane, select the instance to be backed up.
- Right-click the instance and select Create Image (EBS AMI).
- In the Create Image dialog box, give a proper AMI Name and Description, then click the Create This Image button.
Image creation will now be in progress. This can take some time, depending on the number and size of volumes attached to the instance. Click the View pending image link; it will take you to the AMIs pane.
The AMI will be in the pending state. Note that this AMI is private to your account and not available for public AWS use.
- If you select Snapshots from the Navigation pane, you can see that the EBS volumes attached to the instance are backed up as snapshots too.
- Once the backup is done, the AMI will be in the available state.
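The console steps above can also be scripted with the AWS CLI; a minimal sketch, assuming the CLI is installed and configured, with placeholder instance ID and names:

```shell
# Create an EBS-backed AMI from a running instance.
# i-12345678 is a placeholder: substitute your own instance ID.
aws ec2 create-image \
    --instance-id i-12345678 \
    --name "db-server-backup" \
    --description "Backup of database server"
```

The command prints the new AMI ID, which moves from pending to available exactly as described above.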
Restore from backup AMI into instance
- In the AMIs pane, right-click the backed-up AMI, select Launch Instance, and follow the launch wizard as you would for any AMI.
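Restoring amounts to launching a new instance from the backed-up AMI. A hedged AWS CLI sketch; the AMI ID, instance type, and key pair name are placeholders:

```shell
# Launch a fresh instance from the backed-up AMI.
aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type m1.small \
    --key-name my-key-pair
```

Because the AMI is EBS-backed, the new instance boots from a fresh volume created from the backup snapshot.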
Delete AMI & Snapshots:
- To delete an AMI, right-click it and select De-register AMI.
Remember, deleting an AMI doesn’t delete its EBS volume snapshots. Click Snapshots in the Navigation pane, search for and select the snapshot(s) to be deleted, then right-click them and select the Delete Snapshot option.
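The same cleanup can be sketched with the AWS CLI (both IDs below are placeholders); note that, as above, the snapshot must be deleted separately:

```shell
# Deregister the AMI, then delete the snapshot(s) that backed it.
aws ec2 deregister-image --image-id ami-12345678
aws ec2 delete-snapshot --snapshot-id snap-12345678
```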
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers. Amazon EC2 enables “compute” in the cloud.
Amazon Elastic Block Store (EBS) provides block level storage volumes for use with Amazon EC2 instances. EBS provides highly available, highly reliable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance. It persists independently from the life of an instance. These EBS volumes are created in a particular Availability Zone and can be from 1 GB to 1 TB in size.
Follow the steps below to create, attach, and mount EBS volumes on launched EC2 instances:
Create the EBS Volume
Log in to the AWS Management Console and follow the steps below for each extra volume to be attached to an instance. For example, let’s create and attach a 6 GB EBS volume (for Oracle alert logs and traces) to the database server.
• Choose “Volumes” in the left-hand control panel.
• In the right-hand pane under EBS Volumes, click ‘Create Volume’.
• In the Create Volume dialog box that appears, enter the size mentioned in the table, keep the Availability Zone the same as that of the database instance, select No Snapshot, and click ‘Create’.
• This will create an EBS volume; once creation is complete, it will be displayed as shown.
• Select the volume and click the Attach Volume button.
• Select the instance to which the EBS volume is to be attached, and specify the mount point for the volume in the Device field.
Here the instance is the database server and the mount device is /dev/sdf.
• Once attached, it will be displayed as shown.
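For completeness, the create-and-attach steps above can also be scripted with the AWS CLI; a sketch with placeholder IDs and Availability Zone:

```shell
# Create a 6 GB volume in the same Availability Zone as the instance,
# then attach it as device /dev/sdf. IDs and zone are placeholders.
aws ec2 create-volume --size 6 --availability-zone us-east-1a
aws ec2 attach-volume \
    --volume-id vol-12345678 \
    --instance-id i-12345678 \
    --device /dev/sdf
```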
Mount the Volume
• Execute the following commands in the EC2 instance’s (database server’s) Linux shell. As this is a new volume (with no data), it must be formatted first:
mkfs -t ext3 /dev/sdf
(Replace /dev/sdf with the mount device used in the previous step.)
• Make a directory on which to mount the device:
mkdir -p /mnt/disk1
• Mount the device on the newly created directory:
mount /dev/sdf /mnt/disk1
(Replace the device and mount directory as required.)
• By default, volumes will not be mounted automatically when the instance reboots. To mount the volume at the given mount point on every reboot, add an entry to /etc/fstab:
echo "/dev/sdf /mnt/disk1 ext3 noatime 0 0" | sudo tee -a /etc/fstab
(Replace the device and mount directory as required.)
Check the attached volume with a command such as df -h.
Unmounting the volume
From the Elastic Block Storage feature guide: a volume must be unmounted inside the instance before being detached. Failure to do so will result in damage to the file system or the data it contains.
Remember to cd out of the volume first; otherwise you will get the error message
“umount: /mnt/disk1: device is busy”
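In practice the unmount looks like this (mount point as used above; run inside the instance):

```shell
# Step out of the mount point first, or umount reports "device is busy".
cd /
sudo umount /mnt/disk1
# The volume can now be detached safely from the console, or with:
#   aws ec2 detach-volume --volume-id vol-12345678   (placeholder ID)
```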
Hope the above steps help you get into action in minutes.
In case you get stuck at any point, do comment below. I will be glad to help. 🙂
FUSE-based file system backed by Amazon S3.
s3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket as a local filesystem. It doesn’t store anything on the Amazon EC2 instance itself; rather, users can access the data on S3 from the EC2 instance as if a network drive were attached to it.
The s3fs-fuse project is written in C++ and backed by Amazon’s Simple Storage Service (S3). Amazon offers an open API for building applications on top of this service, which several companies have done using a variety of interfaces (web, rsync, FUSE, etc.).
These steps are specific to an Ubuntu Server.
1. Launch an Ubuntu server on AWS EC2. (Recommended AMI: ami-4205e72b, username: ubuntu)
2. Log in to the server using WinSCP / PuTTY.
3. Type the command below to update the package lists on the server:
sudo apt-get update
4. Type the following command to upgrade the installed packages. If prompted, answer ‘y’ or ‘OK’ as applicable:
sudo apt-get upgrade
Once the upgrade is complete, install the necessary libraries for FUSE with the following command:
sudo aptitude install build-essential libcurl4-openssl-dev libxml2-dev libfuse-dev comerr-dev libfuse2 libidn11-dev libkadm55 libkrb5-dev libldap2-dev libselinux1-dev libsepol1-dev pkg-config fuse-utils sshfs
If prompted, answer ‘y’ or ‘OK’ as applicable.
5. Once all the packages are installed, download the s3fs source (revision 177 as of this writing) from the Google Code project:
6. Untar, build, and install the s3fs binary (run each command individually):
tar xzvf s3fs-r177-source.tar.gz
cd s3fs
make
sudo make install
7. In order to use the allow_other option (see below), you will need to modify the FUSE configuration:
sudo vi /etc/fuse.conf
Uncomment the following line in the conf file (to uncomment a line, remove the ‘#’ symbol):
user_allow_other
Save the file by pressing ‘Esc’ and typing ‘:wq’.
8. Now you can mount an S3 bucket. Create a directory using the command:
sudo mkdir -p /mnt/s3
Mount the bucket on the created directory:
sudo s3fs bucketname -o accessKeyId=XXX -o secretAccessKey=YYY -o use_cache=/tmp -o allow_other /mnt/s3
Replace XXX above with your real Amazon Access Key and YYY with your real Secret Key.
The command also instructs s3fs to cache the bucket’s files locally (in /tmp) and to allow other users to manipulate files in the mount.
Now any files written to /mnt/s3 will be replicated to your Amazon S3 bucket.
WinSCP – verify the mount directory
Check the wiki documentation for more options available to s3fs, including how to save your Access Key and Secret Key in /etc/passwd-s3fs.
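As a sketch of that credentials-file approach (the key values below are placeholders, and ~/.passwd-s3fs is used here instead of /etc/passwd-s3fs so that no root access is needed):

```shell
# Store the S3 credentials in ~/.passwd-s3fs (format ACCESS_KEY:SECRET_KEY)
# so they need not be passed on the s3fs command line.
echo "AKIAEXAMPLEKEY:exampleSecretKey" > "$HOME/.passwd-s3fs"
# s3fs refuses credential files that other users can read.
chmod 600 "$HOME/.passwd-s3fs"
```

With the file in place, the accessKeyId and secretAccessKey options can be dropped from the mount command.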