1. Introduction

In this tutorial, we’re going to take a look at how we can use the Amazon Web Services (AWS) Command Line Interface (CLI) to work with Amazon’s Simple Storage Service (S3).

2. Amazon S3

S3, as it’s commonly called, is a cloud-hosted storage service offered by AWS that’s extremely popular due to its flexibility, scalability, and durability paired with relatively low costs. S3 uses the term objects to refer to individual items, such as files and images, that are stored in buckets. A bucket is akin to a root folder, and folders within buckets are known as prefixes.

S3 affords us complete control over who can access our data and what permissions they have when it comes to editing that data. Permissions range from buckets being fully publicly accessible and editable to completely private. We can even configure S3 to host websites!

Let’s take a look at some common tasks that we can perform on S3 using the AWS S3 CLI.

3. Creating Buckets in S3

3.1. Creating an S3 Bucket in the Default Region

The very first command that we’re going to look at is the mb (make bucket) command, which is used to create a new bucket in S3. By using mb, we’re able to create a bucket in the region that we selected as the default region when configuring the AWS CLI:

$ aws s3 mb s3://linux-is-cool

Let’s check the output to confirm that the bucket was created successfully:

make_bucket: linux-is-cool

We’ve successfully created a bucket called linux-is-cool in S3. The bucket was created in the AWS default region that we set when configuring the CLI.
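If we can’t remember which default region the CLI is configured with, we can check it before creating the bucket (the us-east-1 value below is just an example of what we might see):

$ aws configure get region
us-east-1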

3.2. Creating an S3 Bucket in a Specific Region

We can create buckets in any AWS region by simply adding the --region parameter to our base mb command:

$ aws s3 mb s3://linux-is-awesome --region eu-central-1

We get confirmation again that the bucket was created successfully:

make_bucket: linux-is-awesome

What we’ve done now is create a bucket called linux-is-awesome in the specified region.
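If we want to double-check where a bucket lives, the s3api get-bucket-location command returns the region a bucket was created in. For our new bucket, we’d expect output along these lines:

$ aws s3api get-bucket-location --bucket linux-is-awesome
{
    "LocationConstraint": "eu-central-1"
}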

4. Listing Buckets

4.1. Listing All Available Buckets

Now that we’ve created a couple of buckets, let’s see how we can use the ls (list) command to get a listing of all our buckets in S3:

$ aws s3 ls

This is the base form of the ls command, and the output we get from its execution is a list of all our buckets in S3 along with the date and time that each bucket was created:

2019-11-16 19:10:17  linux-is-awesome
2019-11-16 19:09:59  linux-is-cool

4.2. Listing Items in the Root of a Specific Bucket

Let’s say we’re only interested in the contents of a specific bucket. We can improve on the base ls command to get the details of that specific bucket by adding the bucket name as a parameter. This gives us a view of the items contained in the root of that bucket:

$ aws s3 ls s3://linux-is-awesome

Let’s see what the output looks like this time:

                    PRE      subFolder/
2019-11-16 19:53:53 232059   README.md
2019-11-16 20:03:22 0        delete-me
2019-11-16 19:54:50 5242880  output.log

The last-modified date and time and the size of each object are returned as well. Prefixes (folders) are represented by “PRE” in the details column and don’t have a date or size returned for them.
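To list the contents of a prefix, we can append the prefix name (including the trailing slash) to the bucket name. For the subFolder prefix above, we’d expect something like:

$ aws s3 ls s3://linux-is-awesome/subFolder/
2019-11-16 19:53:47 232059   README.md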

4.3. Listing All the Items in a Specific Bucket

Adding the bucket name to the ls command returns the contents at the root of the bucket only. Fortunately, we can list all the contents of a bucket recursively when using the ls command:

$ aws s3 ls s3://linux-is-awesome --recursive --human-readable

There’s a bit extra happening in this command, so let’s break it down. First, we added the --recursive flag, which tells the ls command to recursively list all objects in all prefixes (folders) of the linux-is-awesome bucket.

We’ve also included the --human-readable flag, which returns each object’s size in a human-readable format. Let’s see the output of this command:

2019-11-16 19:53:53  226.6 KiB   README.md
2019-11-16 20:03:22  0 Bytes     delete-me
2019-11-16 19:54:50  5.0 MiB     output.log
2019-11-16 19:53:47  226.6 KiB   subFolder/README.md

As expected, we’re presented with a list of every object in the bucket along with the object size in a human-readable format.
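If we also want totals, ls accepts a --summarize flag, which appends an object count and a combined size to the recursive listing:

$ aws s3 ls s3://linux-is-awesome --recursive --human-readable --summarize

For our example bucket, the listing above would be followed by two extra summary lines:

Total Objects: 4
   Total Size: 5.4 MiB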

5. Deleting S3 Buckets

5.1. Deleting an Empty S3 Bucket

There are times when we want to remove an empty bucket. That’s when the rb (remove bucket) command comes into play:

$ aws s3 rb s3://linux-is-cool

We simply pass the name of the bucket that we want to delete to the rb command. The rb command will delete the bucket only if the bucket is empty.

And the output we receive confirms that the bucket has been removed:

remove_bucket: linux-is-cool

5.2. Forcibly Delete an S3 Bucket

We’ve seen how the base form of the rb command is used to delete an empty bucket. What about deleting buckets that aren’t empty?

We can delete buckets that contain objects by adding the --force flag to the rb command. This will force the removal of the bucket:

$ aws s3 rb s3://bucket-with-objects --force

As always, we check the output to confirm that the command was executed successfully. The output we get in this instance will list each individual object that gets deleted before the bucket itself is finally deleted:

delete: s3://bucket-with-objects/console.log
delete: s3://bucket-with-objects/catalina.out
remove_bucket: bucket-with-objects

We must take care when using the --force flag, as we won’t be able to recover our data once the command has executed successfully.
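If we’re unsure what --force is about to delete, one option is to preview the bucket’s contents first with a dry run of the rm command. The --dryrun flag prints what would be deleted without removing anything; for our example bucket, we’d expect:

$ aws s3 rm s3://bucket-with-objects --recursive --dryrun

(dryrun) delete: s3://bucket-with-objects/console.log
(dryrun) delete: s3://bucket-with-objects/catalina.out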

6. Working with Objects and Buckets

We’ve spent some time understanding how to create and manipulate buckets. Now let’s take a look at how we can use the S3 CLI to work with objects within S3.

6.1. Copying Files to a Bucket

We can use the cp (copy) command to copy files from a local directory to an S3 bucket.

We provide the cp command with the name of the local file (source) as well as the name of the S3 bucket (target) that we want to copy the file to:

$ aws s3 cp new.txt s3://linux-is-awesome

Once the command completes, we get confirmation that the file object was uploaded successfully:

upload: .\new.txt to s3://linux-is-awesome/new.txt

We can go further and use this simple command to give the file we’re copying to S3 a new name. We can achieve this by specifying the new file name when performing the copy. Let’s take a look at how this works:

$ aws s3 cp new.txt s3://linux-is-awesome/new-from-local.txt

What we’ve done now is upload our new.txt file to the linux-is-awesome bucket and rename it to new-from-local.txt.

It doesn’t stop there. We can use the cp command to copy objects from one bucket to another. In this instance, we’ll specify a bucket as both the source location and the target location:

$ aws s3 cp s3://linux-is-awesome/new-from-local.txt s3://a-back-up-bucket/file.txt

As before, we check the output to confirm that everything worked correctly:

copy: s3://linux-is-awesome/new-from-local.txt to s3://a-back-up-bucket/file.txt
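The cp command also accepts the --recursive flag, which lets us copy an entire local directory to a bucket in one go. As a quick sketch, assuming a local logs directory containing a hypothetical app.log file, we’d expect output along these lines:

$ aws s3 cp logs s3://linux-is-awesome/logs --recursive

upload: logs\app.log to s3://linux-is-awesome/logs/app.log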

6.2. Copying Objects from a Bucket

The cp command can also be used to retrieve objects from an S3 bucket and store them locally.

We use the cp command again, but this time, we place the bucket name and object key as the source and use our local directory as the target:

$ aws s3 cp s3://linux-is-awesome/new-from-local.txt copied-from-s3.txt

The confirmation output will tell us that the S3 object new-from-local.txt was downloaded to the current folder and renamed to copied-from-s3.txt:

download: s3://linux-is-awesome/new-from-local.txt to .\copied-from-s3.txt
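The --recursive flag works in the download direction, too. For example, to pull down everything under the subFolder prefix we listed earlier, we’d expect output along these lines:

$ aws s3 cp s3://linux-is-awesome/subFolder subFolder --recursive

download: s3://linux-is-awesome/subFolder/README.md to subFolder\README.md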

6.3. Move an Object

Now let’s take a look at the mv (move) command. The mv command is used to move objects around when working with S3 buckets. As with the cp command, we can use the mv command to move items between buckets, or between buckets and a local folder.

The base form of the mv command takes in a source as the first parameter followed by a destination as the second. Let’s look at an example of moving a local file to a bucket:

$ aws s3 mv my-local-file.log s3://linux-is-awesome/moved-from-local.log

And then we check out the output that confirms that our file has been moved successfully:

move: .\my-local-file.log to s3://linux-is-awesome/moved-from-local.log

It’s just as easy to move files from an S3 bucket to our local folders. Let’s move the moved-from-local.log file back to our local folder:

$ aws s3 mv s3://linux-is-awesome/moved-from-local.log .

We’ve chosen to specify only the target directory in the above command, without giving the file a new name. We expect the file to be moved to our local directory under its original name, moved-from-local.log. Our output confirms this:

move: s3://linux-is-awesome/moved-from-local.log to .\moved-from-local.log

We can use the same approach to move objects between buckets, too.
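For example, we could move the new-from-local.txt file into the a-back-up-bucket bucket from earlier (assuming both buckets still exist), and we’d expect confirmation along these lines:

$ aws s3 mv s3://linux-is-awesome/new-from-local.txt s3://a-back-up-bucket/new-from-local.txt

move: s3://linux-is-awesome/new-from-local.txt to s3://a-back-up-bucket/new-from-local.txt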

Remember to use the ls command to view the items in the bucket to confirm that the files have been moved accordingly.

6.4. Delete an Object

Now, we’re going to look at deleting objects from buckets using the rm (remove) command.

This is a simple command, and all that we need to specify is the location of the object to delete:

$ aws s3 rm s3://linux-is-awesome/delete-me

And, as is usual when executing a command via the CLI, we get confirmation that the operation has completed successfully:

delete: s3://linux-is-awesome/delete-me
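The rm command also supports the --recursive flag, and we can combine it with the --exclude and --include filters to remove only objects matching a pattern. Adding --dryrun lets us preview the result safely; in our example bucket, targeting only .log files would look like this:

$ aws s3 rm s3://linux-is-awesome --recursive --exclude "*" --include "*.log" --dryrun

(dryrun) delete: s3://linux-is-awesome/output.log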

7. Hosting a Website on S3

S3 is more than just storage. We can use a unique S3 feature that enables us to host a static website directly from an S3 bucket. We need to configure our bucket in a specific way to enable this feature. Let’s look at the website command:

$ aws s3 website s3://linux-is-cool --index-document index.html --error-document error.html

The website command instructs S3 to configure the bucket to behave like a website. We added two key parameters as well: index-document is important as it tells S3 which page to use as the default landing page for visitors to our site, while error-document tells S3 which page to serve as the default error page. Note that the website command doesn’t change bucket permissions: before anyone can view the site, the bucket’s content must be made publicly readable, for example via a bucket policy.
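As a minimal sketch of that last step, we could save a policy like the following as policy.json (the Sid value is arbitrary, and any Block Public Access settings on the bucket may need to be disabled first):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::linux-is-cool/*"
        }
    ]
}

We then attach it to the bucket with the s3api put-bucket-policy command, which produces no output on success:

$ aws s3api put-bucket-policy --bucket linux-is-cool --policy file://policy.json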

The default URL for an S3-hosted website takes the form:

http://<bucket-name>.s3-website-<region>.amazonaws.com

So if our example bucket linux-is-cool is hosted in the us-east-1 region, the website URL would be:

http://linux-is-cool.s3-website-us-east-1.amazonaws.com

If we’ve configured everything correctly, the index.html file will be loaded when we navigate to the URL. Keep in mind that although this is a nifty feature, it isn’t the recommended way to host a production website. For a secure (HTTPS) site, we would pair the S3 bucket with other AWS services such as CloudFront, Route 53, and Certificate Manager.

8. Conclusion

In this tutorial, we introduced Amazon Web Services’ S3 cloud-based storage service and had a brief look at what it offers us. We worked with a set of basic S3 CLI commands that help us manage buckets and the objects stored in those buckets.

We’ve only just scratched the surface of what’s possible with the S3 CLI, so be sure to check out the S3 developer guide for more detailed information about all the options available via the S3 service.
