
1. Introduction

In this tutorial, we'll learn how to interact with the Amazon S3 (Simple Storage Service) storage system programmatically from Java.

Remember that S3 has a very simple structure; each bucket can store any number of objects, which can be accessed using either a SOAP interface or a REST-style API.

Going forward, we'll use the AWS SDK for Java to create, list, and delete S3 buckets. We'll also upload, list, download, copy, move, rename and delete objects within these buckets.

2. Maven Dependencies

Before we get started, we need to declare the AWS SDK dependency in our project:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.163</version>
</dependency>

To view the latest version, we can check Maven Central.

3. Prerequisites

To use the AWS SDK, we'll need a few things:

  1. AWS Account: we need an Amazon Web Services account. If we don't have one, we can go ahead and create an account.
  2. AWS Security Credentials: These are our access keys that allow us to make programmatic calls to AWS API actions. We can get these credentials in two ways, either by using AWS root account credentials from the access keys section of the Security Credentials page, or by using IAM user credentials from the IAM console.
  3. Choosing AWS Region: We also have to select the AWS region(s) where we want to store our Amazon S3 data. Keep in mind that S3 storage prices vary by region. For more details, head over to the official documentation. In this tutorial, we'll use US East (Ohio, region us-east-2).

4. Creating Client Connection

First, we need to create a client connection to access the Amazon S3 web service. We'll use the AmazonS3 interface for this purpose:

AWSCredentials credentials = new BasicAWSCredentials(
  "<AWS accesskey>", 
  "<AWS secretkey>"
);

Then we'll configure the client:

AmazonS3 s3client = AmazonS3ClientBuilder
  .standard()
  .withCredentials(new AWSStaticCredentialsProvider(credentials))
  .withRegion(Regions.US_EAST_2)
  .build();
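Hard-coding keys like this is convenient for a demo, but in practice we'd usually let the SDK discover them. As a sketch (assuming credentials are already configured in the environment or in the shared ~/.aws/credentials file), we could build the client with the default provider chain instead:

```java
// Alternative sketch: resolve credentials from environment variables, system
// properties, the shared credentials file, or an instance profile - whichever
// the default chain finds first
AmazonS3 s3client = AmazonS3ClientBuilder
  .standard()
  .withCredentials(new DefaultAWSCredentialsProviderChain())
  .withRegion(Regions.US_EAST_2)
  .build();
```

This keeps secrets out of source code, which is generally preferable outside of quick experiments.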

5. Amazon S3 Bucket Operations

5.1. Creating a Bucket

It's important to note that the bucket namespace is shared by all users of the system. So our bucket name must be unique across all existing bucket names in Amazon S3 (we'll find out how to check that in just a moment).

In addition, as specified in the official documentation, bucket names must comply with the following requirements:

  • names shouldn't contain underscores
  • names should be between 3 and 63 characters long
  • names shouldn't end with a dash
  • names can't contain adjacent periods
  • names can't contain dashes next to periods (e.g., “my-.bucket.com” and “my.-bucket” are invalid)
  • names can't contain uppercase characters

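These rules can be encoded in a small client-side check before we ever call the API. The helper below is purely illustrative and only covers the rules listed above (the SDK and the service perform their own, more complete validation):

```java
import java.util.regex.Pattern;

public class BucketNameValidator {

    // Lowercase letters, digits, dots, and dashes only -
    // this rules out underscores and uppercase characters
    private static final Pattern VALID_CHARS = Pattern.compile("[a-z0-9.-]+");

    public static boolean isValid(String name) {
        if (name == null || name.length() < 3 || name.length() > 63) {
            return false;                          // must be 3-63 characters long
        }
        if (!VALID_CHARS.matcher(name).matches()) {
            return false;                          // invalid characters
        }
        if (name.endsWith("-")) {
            return false;                          // must not end with a dash
        }
        if (name.contains("..")) {
            return false;                          // no adjacent periods
        }
        if (name.contains(".-") || name.contains("-.")) {
            return false;                          // no dashes next to periods
        }
        return true;
    }
}
```

A name that passes this check can still be taken by another AWS account, so we still need the availability check shown next.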
Now let's create a bucket:

String bucketName = "baeldung-bucket";

if(s3client.doesBucketExist(bucketName)) {
    LOG.info("Bucket name is not available."
      + " Try again with a different Bucket name.");
    return;
}

s3client.createBucket(bucketName);

Here we're using the s3client that we created in the previous step. Before we create a bucket, we have to check whether our bucket name is available or not by using the doesBucketExist() method. If the name is available, then we'll use the createBucket() method.

5.2. Listing Buckets

Now that we've created a few buckets, let's print a list of all the buckets available in our S3 environment using the listBuckets() method. This method will return a list of all the buckets:

List<Bucket> buckets = s3client.listBuckets();
for(Bucket bucket : buckets) {
    System.out.println(bucket.getName());
}

This will list all the buckets that are present in our S3 environment:

baeldung-bucket
baeldung-bucket-test2
elasticbeanstalk-us-east-2

5.3. Deleting a Bucket

It's important to ensure that our bucket is empty before we delete it. Otherwise, an exception will be thrown. Also, note that only the owner of a bucket can delete it, regardless of its permissions (Access Control Policies):

try {
    s3client.deleteBucket("baeldung-bucket-test2");
} catch (AmazonServiceException e) {
    System.err.println(e.getErrorMessage());
    return;
}

6. Amazon S3 Object Operations

A file or collection of data inside an Amazon S3 bucket is known as an object. We can perform several operations on objects like uploading, listing, downloading, copying, moving, renaming and deleting.

6.1. Uploading Objects

Uploading an object is a pretty straightforward process. We'll use the putObject() method, which accepts three parameters:

  1. bucketName: The bucket name where we want to upload the object
  2. key: The object key, i.e., the full path under which the file will be stored in the bucket
  3. file: The actual file containing the data to be uploaded

s3client.putObject(
  bucketName, 
  "Document/hello.txt", 
  new File("/Users/user/Document/hello.txt")
);
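For small payloads, there's also a convenience overload that takes the content as a plain string instead of a File (a sketch, assuming an SDK version that includes this overload):

```java
// Sketch: upload in-memory string content directly as the object body
s3client.putObject(bucketName, "Document/hello.txt", "Hello from S3!");
```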

6.2. Listing Objects

We'll use the listObjects() method to list all the available objects in our S3 bucket:

ObjectListing objectListing = s3client.listObjects(bucketName);
for(S3ObjectSummary os : objectListing.getObjectSummaries()) {
    LOG.info(os.getKey());
}

Calling the listObjects() method of the s3client object will yield the ObjectListing object, which can be used to get a list of all the object summaries in the specified bucket. We're just printing the key here, but there are also a couple of other options available, like size, owner, last modified, storage class, etc.

This will now print a list of all the objects inside our bucket:

Document/hello.txt
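Note that a single listObjects() call returns at most 1,000 keys. For larger buckets, a sketch of paging through the full listing could look like this:

```java
// Sketch: keep fetching batches until the listing is no longer truncated
ObjectListing listing = s3client.listObjects(bucketName);
while (true) {
    for (S3ObjectSummary os : listing.getObjectSummaries()) {
        LOG.info(os.getKey());
    }
    if (!listing.isTruncated()) {
        break;
    }
    listing = s3client.listNextBatchOfObjects(listing);
}
```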

6.3. Downloading an Object

To download an object, we'll first use the getObject() method on the s3client, which will return an S3Object object. Once we get this, we'll call getObjectContent() on it to get an S3ObjectInputStream object, which behaves like a conventional Java InputStream:

S3Object s3object = s3client.getObject(bucketName, "picture/pic.png");
S3ObjectInputStream inputStream = s3object.getObjectContent();
FileUtils.copyInputStreamToFile(inputStream, new File("/Users/user/Desktop/pic.png"));

Here we're using the FileUtils.copyInputStreamToFile() method by Apache Commons. We can also visit this Baeldung article to explore other ways to convert an InputStream to a File.

6.4. Copying, Renaming, and Moving an Object

We can copy an object by calling the copyObject() method on our s3client, which accepts four parameters:

  1. source bucket name
  2. object key in source bucket
  3. destination bucket name (it can be the same as the source)
  4. object key in destination bucket

s3client.copyObject(
  "baeldung-bucket", 
  "picture/pic.png", 
  "baeldung-bucket2", 
  "document/picture.png"
);

Note: We can use a combination of the copyObject() method and deleteObject() for performing moving and renaming tasks. This will involve copying the object first and then deleting it from its old location.
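As a sketch, a hypothetical moveObject() helper built from those two calls might look like this (renaming is just a move within the same bucket):

```java
// Hypothetical helper: "moves" an object by copying it to the new
// location and then deleting it from the old one
void moveObject(AmazonS3 s3client, String sourceBucket, String sourceKey,
  String destBucket, String destKey) {
    s3client.copyObject(sourceBucket, sourceKey, destBucket, destKey);
    s3client.deleteObject(sourceBucket, sourceKey);
}
```

Keep in mind this is not atomic: if the delete fails, the object will temporarily exist in both locations.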

6.5. Deleting an Object

To delete an Object, we'll call the deleteObject() method on the s3client and pass the bucket name and object key:

s3client.deleteObject("baeldung-bucket","picture/pic.png");

6.6. Deleting Multiple Objects

To delete multiple objects at once, we'll first create the DeleteObjectsRequest object and pass the bucket name to its constructor. Then we'll pass an array of all the object keys that we want to delete.

Once we have this DeleteObjectsRequest object, we can pass it to the deleteObjects() method of our s3client as an argument. If successful, it'll delete all the objects that we supplied:

String[] objkeyArr = {
  "document/hello.txt", 
  "document/pic.png"
};

DeleteObjectsRequest delObjReq = new DeleteObjectsRequest("baeldung-bucket")
  .withKeys(objkeyArr);
s3client.deleteObjects(delObjReq);

7. Conclusion

In this article, we focused on the basics of interacting with the Amazon S3 web service, both at the bucket and object level.

As always, the full implementation of this article can be found over on GitHub.
