
Tips and tricks for Simple Storage Service (S3)

Simple Storage Service (S3) is one of the first services created by Amazon AWS, and it is also one of the
most popular. Much of its success comes from its simplicity.
Beyond its most commonly used features, S3 offers a number of functions that are often little used or
simply unknown.

In this article we share some of the features we use, or have used, and think are worth knowing.

Pre-Signed URLs

In S3 all files, or objects, are private by default. Until they are made public, these objects can only be read
by their owners. However, it often happens that we want to share objects with people outside our
organization. There are several ways to share such a resource; the simplest, most functional and most
secure is through pre-signed URLs.

Amazon S3 generates a URL with the following parameters embedded:
• AWSAccessKeyId
• Expires
• Signature

This URL can be shared outside our infrastructure and remains valid until the deadline we set.
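A minimal sketch (the bucket and object names are placeholders, and this assumes the AWS CLI is installed and configured with valid credentials):

```shell
# Generate a pre-signed URL for my-file.txt that expires in 300 seconds
# (5 minutes). "my-bucket" and "my-file.txt" are placeholder names.
aws s3 presign s3://my-bucket/my-file.txt --expires-in 300
```

The command prints the signed URL to standard output; anyone holding it can download the object until the URL expires.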

Once the deadline passes, the signature associated with our request is no longer valid and it will no
longer be possible to download the object.

Multipart Upload

When large objects need to be uploaded to S3, multipart upload is recommended. Amazon S3 splits the
file and uploads the parts in parallel, which makes the transfer more efficient. Best practices recommend
multipart uploads for objects over 100 MB, and they are required for objects over 5 GB. The high-level
aws s3 commands use multipart upload automatically when necessary.

The following command, instead, lets you manually control how the operation is performed:
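One way to do this is through the CLI configuration; this sketch tunes when and how the aws s3 commands switch to multipart upload (the size values are arbitrary examples):

```shell
# Lower the object size above which "aws s3" commands use multipart upload.
aws configure set default.s3.multipart_threshold 64MB

# Set the size of each individual part.
aws configure set default.s3.multipart_chunksize 16MB
```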

However, there may be cases where it is necessary to proceed differently, for example when the parts of the
file reside on different servers. Let's take an example: with this command we obtain the UploadId:
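With placeholder bucket and key names, starting the multipart upload could look like this:

```shell
# Start a multipart upload; the JSON response contains the UploadId
# that every subsequent part upload must reference.
aws s3api create-multipart-upload \
    --bucket my-bucket \
    --key large-file.bin
```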

At this point, with the following command, we upload the various parts of the file (we can also upload them
in parallel):
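A sketch for one part (the file name, part number and UploadId are placeholders):

```shell
# Upload part number 1; the local file part01 holds the first chunk.
# Repeat with --part-number 2, 3, ... for the remaining chunks.
# Each call returns an ETag, needed later to complete the upload.
aws s3api upload-part \
    --bucket my-bucket \
    --key large-file.bin \
    --part-number 1 \
    --body part01 \
    --upload-id "EXAMPLE_UPLOAD_ID"
```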

Once the individual parts have been uploaded, we can verify that they are present on the server:
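For example (placeholder names and UploadId):

```shell
# List the parts uploaded so far for this multipart upload.
aws s3api list-parts \
    --bucket my-bucket \
    --key large-file.bin \
    --upload-id "EXAMPLE_UPLOAD_ID"
```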

Now we can combine the parts on S3 into a single object:
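A sketch of the final step (placeholder names; parts.json is a hypothetical local file listing the parts):

```shell
# parts.json maps each part number to the ETag returned by upload-part:
# {"Parts": [{"PartNumber": 1, "ETag": "\"etag-of-part-1\""},
#            {"PartNumber": 2, "ETag": "\"etag-of-part-2\""}]}
aws s3api complete-multipart-upload \
    --bucket my-bucket \
    --key large-file.bin \
    --upload-id "EXAMPLE_UPLOAD_ID" \
    --multipart-upload file://parts.json
```

After this call S3 assembles the parts, in the given order, into the final object.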

In this way we can split a large file into smaller parts and upload them in parallel from our PC or from
workstations on different servers.

Range GETs

Another very useful but little-used feature is reading part of a (possibly large) file on S3 without having
to download it completely. AWS makes it possible to download just a portion of an object, for example to
check that its content is what we expect before downloading the whole thing.
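A sketch of such a range GET (the bucket name is a placeholder):

```shell
# Download only bytes 0-1023 (the first 1024 bytes) of test.db
# and save them locally as my_data_range.
aws s3api get-object \
    --bucket my-bucket \
    --key test.db \
    --range bytes=0-1023 \
    my_data_range
```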

The command downloads the first 1024 bytes of the test.db file and saves them in the file my_data_range.

S3 has many other interesting features that can help us in our daily work; here we just wanted to show a
few of them.
