S3 Select, a nifty thing
Sunday, 18 April 2021

Overview

This blog post discusses the AWS S3 Select feature: why you might want to use it, along with sample Java code for working with S3 Select.

A bit of history

Amazon S3 was released in March 2006 and it's one of the top 10 most widely used AWS services. When it comes to storing large files, I can't think of any service I'd use over S3.

Most organizations don't operate solely in the cloud; there's still a lot of data that's on-prem. S3 is a great way to move that data into the cloud. We use S3 to transfer the data we collect each day to AWS and then run various kinds of analytics on it.

Analytics and more analytics

Even if you have great network connectivity, transferring large amounts of data every day adds significantly to the overall processing time. This means you should try to transfer the data just once, so that all the analytics you want to run can refer to the same uploaded data. This may mean transferring some extra data that is not needed by one analytic service but is needed by another.

Collecting related data together, however, can increase the read time for each application. What I mean by this is: let's say we are transferring book information stored in CSV files via S3, and one analytical process calculates the total sales per day. The data it operates on looks like this:

bookid  cost    quantity
------------------------
10034   $10     12
20104   $4.90    9

Let's also assume we have another analytical process which calculates what types of books are popular by analyzing an excerpt of each book sold. Its data looks like this:

bookid   quantity   excerpt
---------------------------
10034          12   It is a group of children who see - and feel - what makes Derry so horr ...
20104           9   First published fifteen years ago, shortly after his death, inside this collection ...

To reduce the size of the data being transferred, it makes sense to combine the two datasets. Having related data together also reduces joining across datasets. The combined set now looks like this:

bookid   cost   quantity   excerpt
----------------------------------
10034    $10     12        It is a group of children who see - and feel - what makes Derry so horr ...
20104    $4.90    9        First published fifteen years ago, shortly after his death, inside this collection ...

There is just a slight problem we have introduced: each process will now also have to read a small bit of data it is not interested in. The sales calculation process will have to ignore the excerpt column, and the topic popularity process will have to ignore the cost column.

AWS introduced a nifty feature in November 2017 called S3 Select. It allows you to query CSV, JSON and Apache Parquet files using simple SQL statements. Using this feature, your services pull out only as much data as they need by applying filters (WHERE clauses) and selecting only the required columns, as the expressions below illustrate.
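
For example, the two processes above could each ask S3 for just the columns they care about. The expressions below are a sketch: they assume the combined CSV is queried with its header row enabled (as the sample code further down does), and S3 Select always exposes the object under the alias S3Object.

// Sketch: SQL expressions each analytical process might pass to S3 Select.
// Column names come from the CSV header row; S3Object is the alias S3 Select expects.
String salesSql      = "SELECT s.bookid, s.cost, s.quantity FROM S3Object s";
String popularitySql = "SELECT s.bookid, s.quantity, s.excerpt FROM S3Object s";

// A WHERE clause filters rows on the server side as well:
String singleBookSql = "SELECT s.cost, s.quantity FROM S3Object s WHERE s.bookid = '10034'";

Each process now receives only the bytes it actually needs, even though the full combined file sits in the bucket.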

Sample Java code

The AmazonS3 class in the AWS SDK for Java has a method called selectObjectContent() which allows running SQL queries to select data from S3 objects. The code below demonstrates running a query on a CSV file stored in an S3 bucket:

private void runQuery(Regions regions, String accessKey, String secretKey, String bucket, String objectKey, String sql) throws Exception {
    // Build an S3 client for the given region using the supplied credentials
    AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);

    AmazonS3 s3client = AmazonS3ClientBuilder
            .standard()
            .withCredentials(new AWSStaticCredentialsProvider(credentials))
            .withRegion(regions)
            .build();

    // The request identifies the object to query and carries the SQL expression
    SelectObjectContentRequest socr = new SelectObjectContentRequest();
    socr.setBucketName(bucket);
    socr.setKey(objectKey);
    socr.setExpressionType("SQL");
    socr.setExpression(sql);

    // Input serialization: an uncompressed, comma-delimited CSV whose header row
    // supplies the column names used in the SQL expression
    CSVInput csvInput = new CSVInput();
    csvInput.setFileHeaderInfo("Use");
    csvInput.setFieldDelimiter(",");

    InputSerialization iser = new InputSerialization();
    iser.setCsv(csvInput);
    iser.setCompressionType("NONE");
    socr.setInputSerialization(iser);

    // Output serialization: return matching records as comma-delimited CSV
    CSVOutput csvOutput = new CSVOutput();
    csvOutput.setFieldDelimiter(",");

    OutputSerialization oser = new OutputSerialization();
    oser.setCsv(csvOutput);
    socr.setOutputSerialization(oser);

    // Run the query and stream the matching records back
    Instant startedAt = Instant.now();
    SelectObjectContentResult result = s3client.selectObjectContent(socr);
    InputStream resultInputStream = result.getPayload().getRecordsInputStream();
    BufferedReader streamReader = new BufferedReader(new InputStreamReader(resultInputStream, "UTF-8"));
    String line;
    int matchedRecords = 0;
    while ((line = streamReader.readLine()) != null) {
        System.out.println(line);
        matchedRecords++;
    }
    Instant endedAt = Instant.now();
    System.out.printf("Got %d records in %d ms\n", matchedRecords, Duration.between(startedAt, endedAt).toMillis());
}
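
A call to this method might look like the following; the region, credentials, bucket name, object key and query below are only placeholders:

// Hypothetical invocation; replace the placeholders with your own values.
runQuery(Regions.US_EAST_1,
        "ACCESS_KEY",
        "SECRET_KEY",
        "my-books-bucket",
        "uploads/books.csv",
        "SELECT s.bookid, s.cost, s.quantity FROM S3Object s WHERE s.bookid = '10034'");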

The full source code for the sample application is available on GitHub.

Performance

How does S3 Select actually perform? I was pleasantly surprised to find that it performs very well indeed. It took S3 Select approximately 8 seconds to find 2 matching records out of a total of 1 million records in a 1.5 GB file. This is quite impressive. The same file takes about a minute just to load in Microsoft Excel, after which it takes another 1 minute 45 seconds to find the two matching records on my laptop, which has 16 GB of RAM and an Intel 8365U processor. GZIPing the 1.5 GB CSV down to a 350 MB compressed file did not reduce the performance, which is great - you'll spend less time transferring large files over the network and the queries stay just as fast.
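
Querying the compressed file needed only a small change to the input serialization from the sample above (a sketch; the SQL expression and output serialization stay the same):

// Same request as before, but the uploaded object is the GZIPed CSV.
InputSerialization iser = new InputSerialization();
iser.setCsv(csvInput);
iser.setCompressionType("GZIP"); // tell S3 Select the input CSV is GZIP-compressed
socr.setInputSerialization(iser);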

Limits

S3 Select does come with some limits; details are available here. Two notable ones:

  • The maximum length of a SQL expression is 256 KB
  • The maximum length of a record in the input or result is 1 MB

Conclusion

S3 Select is a very nifty feature which makes working with CSV, JSON and Apache Parquet files very easy. You now have a way to filter your data both vertically and horizontally right at the source (the S3 bucket) itself. Though the types of operations are limited, it's still a very useful feature to have.

References

  • Sample code on GitHub
  • AWS documentation
  • Querying data without servers or databases using Amazon S3 Select


