
Atlas Data Lake SQL Integration to Form Powerful Data Interactions

Updated: Nov 23, 2021 | Published: Nov 23, 2021

  • Data Lake
  • Atlas
  • JavaScript

By Pavel Duchovny


Modern platforms have a wide variety of data sources. As businesses grow, they must constantly evolve their data management and need sophisticated, scalable, and convenient tools to analyse data from all sources and produce business insights.

MongoDB has developed a rich and powerful query language, including a very robust aggregation framework. These were designed primarily to optimize the way developers work with data and to provide great tools for manipulating and querying MongoDB documents.

Having said that, many developers, analysts, and tools still prefer SQL for interacting with data sources. SQL has a strong foundation around joining data, as joins were a core concept of the relational normalization model, and this gives SQL a notably convenient syntax for describing joins.

Providing MongoDB users the ability to leverage SQL to analyse multi-source documents while having a flexible schema and data store is a compelling solution for businesses.

#Data Sources and the Challenge

Consider a requirement to create a single view to analyse data from different operational systems. For example:

  • Customer data is managed in the user administration systems (REST API).
  • Financial data is managed in a financial cluster (Atlas cluster).
  • End-to-end transactions are stored in files on cold storage gathered from various external providers (S3 store).

How can we best combine and join this data?

MongoDB Atlas Data Lake connects multiple data sources of different types. Once the data sources are mapped, we can create collections that consume their data. Those collections can have a SQL schema generated, allowing us to perform sophisticated joins and run JDBC queries from various BI tools.

In this article, we will showcase the extreme power hidden in the Data Lake SQL interface.

#Setting Up My Data Lake

In the following view, I have created three main data sources:

  • S3 Transaction Store (S3 sample data).
  • Accounts from my Atlas clusters (Sample data sample_analytics.accounts).
  • Customer data from a secure HTTPS source.

Data Lake Configuration

I mapped the stores into three collections under the FinTech database:

  • transactions
  • accounts
  • customersDL
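Behind the UI, this store-to-collection mapping lives in the Data Lake storage configuration. A trimmed, illustrative sketch of what such a configuration can look like (the store names, S3 bucket, path, and HTTPS URL are assumptions, not the exact values used here):

```json
{
  "stores": [
    { "name": "s3store", "provider": "s3", "bucket": "transactions-bucket", "region": "us-east-1" },
    { "name": "atlasCluster", "provider": "atlas", "clusterName": "Cluster0", "projectId": "<PROJECT-ID>" },
    { "name": "httpStore", "provider": "http", "urls": ["https://example.com/customers.json"] }
  ],
  "databases": [
    {
      "name": "FinTech",
      "collections": [
        { "name": "transactions", "dataSources": [{ "storeName": "s3store", "path": "/transactions/*" }] },
        { "name": "accounts", "dataSources": [{ "storeName": "atlasCluster", "database": "sample_analytics", "collection": "accounts" }] },
        { "name": "customersDL", "dataSources": [{ "storeName": "httpStore", "urls": ["https://example.com/customers.json"] }] }
      ]
    }
  ]
}
```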

Now, I can see them through a data lake connection as MongoDB collections.

Let's grab our data lake connection string from the Atlas UI.

Data Lake Connection String

This connection string can be used with our BI tools or client applications to run SQL queries.

#Connecting and Using $sql

Once we connect to the data lake via a mongosh shell, we can generate a SQL schema for our collections. This is required for the JDBC driver or the $sql stage to recognise collections as SQL “tables.”

#Generate SQL schema for each collection:

use admin;

db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: ["FinTech.customersDL"], sampleSize: 1000, setSchemas: true})
{
  ok: 1,
  schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]
}

db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: ["FinTech.accounts"], sampleSize: 1000, setSchemas: true})
{
  ok: 1,
  schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]
}

db.runCommand({sqlGenerateSchema: 1, sampleNamespaces: ["FinTech.transactions"], sampleSize: 1000, setSchemas: true})
{
  ok: 1,
  schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]
}

#Running SQL queries and joins using $sql stage:

use FinTech;

db.aggregate([{
  $sql: {
    statement: "SELECT a.*, t.transaction_count FROM accounts a, transactions t WHERE a.account_id = t.account_id ORDER BY t.transaction_count DESC LIMIT 2",
    format: "jdbc",
    formatVersion: 2,
    dialect: "mysql",
  }
}])

The above query returns the account information together with the transaction count of each account, for the two accounts with the most transactions.
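In application code, the same query can be issued by passing a $sql stage to a database-level aggregate. A minimal Node.js sketch of building that stage as a plain document (connection handling is omitted; buildSqlStage is an illustrative helper, not part of any driver):

```javascript
// Build a $sql aggregation stage for an Atlas Data Lake query.
// The format/formatVersion/dialect fields mirror the shell example above.
function buildSqlStage(statement) {
  return {
    $sql: {
      statement,
      format: "jdbc",
      formatVersion: 2,
      dialect: "mysql",
    },
  };
}

const pipeline = [
  buildSqlStage(
    "SELECT a.*, t.transaction_count " +
      "FROM accounts a, transactions t " +
      "WHERE a.account_id = t.account_id " +
      "ORDER BY t.transaction_count DESC LIMIT 2"
  ),
];

// With a connected driver client, this would run as:
//   const results = await db.aggregate(pipeline).toArray();
```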

#Connecting Via JDBC

Let’s connect a powerful BI tool like Tableau with the JDBC driver.

Download JDBC Driver.

JDBC driver

Set up the connection.properties file:

user=root
password=*******
authSource=admin
database=FinTech
ssl=true
compressors=zlib
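The properties format is plain key=value text, so it is easy to generate or validate in scripts. A small Node.js sketch that parses such text into an options object (parseProperties is an illustrative helper; it skips blank lines and # comments):

```javascript
// Parse Java-style .properties text into a plain object.
// Blank lines and lines starting with '#' are ignored.
function parseProperties(text) {
  const options = {};
  for (const line of text.split(/\r?\n/)) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue;
    const idx = trimmed.indexOf("=");
    if (idx === -1) continue;
    options[trimmed.slice(0, idx).trim()] = trimmed.slice(idx + 1).trim();
  }
  return options;
}

const props = parseProperties(
  ["user=root", "authSource=admin", "database=FinTech", "ssl=true"].join("\n")
);
```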

#Connect to Tableau

Click the “Other Databases (JDBC)” connector and load the connection.properties file pointing to our data lake URI.

Tableau Connector Selection

Once the data is read successfully, the collections will appear on the right side.

#Setting and Joining Data

Setting Tables

We can drag and drop collections from different sources and link them together.

Joining Tables with Drag & Drop

In my case, I joined Transactions to Accounts on the Account Id field, and Accounts to Users on Account Id against the users' Accounts field.

Unified View of Accounts, Transactions and Users data

In this view, we see a unified table of all accounts with their usernames and the starting quarter of their transactions.

#Summary

MongoDB has all the tools to read, transform, and analyse your documents for almost any use case.

Whether your data is in an Atlas operational cluster, in a service, or on cold storage like S3, Atlas Data Lake lets you join the data in real time. With powerful SQL join syntax and SQL-based BI tools like Tableau, you can get value out of the data in no time.

Try Atlas Data Lake with your BI tools and SQL today.

© 2021 MongoDB, Inc.

About

  • Careers
  • Legal Notices
  • Privacy Notices
  • Security Information
  • Trust Center
© 2021 MongoDB, Inc.