
How to use AWS Glue crawlers with Amazon Athena

source link: https://www.pluralsight.com/resources/blog/data/how-to-use-aws-glue-crawlers

Amazon Athena provides a simplified, flexible way to analyze petabytes of data right where it lives. For example, Athena can analyze data in an Amazon Simple Storage Service (Amazon S3) data lake, or build applications that draw on more than 30 data sources, including on-premises data sources and other cloud systems, using SQL or Python.

There are four main Amazon Athena use cases:

  1. Run queries on data in S3, in on-premises data centers, or in other clouds 

  2. Prepare data for machine learning models

  3. Use machine learning models in SQL queries or Python to simplify complex tasks, such as anomaly detection, customer cohort analysis, and sales predictions

  4. Perform multicloud analytics (like querying data in Azure Synapse Analytics and then visualizing the results with Amazon QuickSight)
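As a sketch of the first use case, here is roughly how a query is submitted to Athena with boto3, the AWS SDK for Python. The table, database, and results-bucket names are hypothetical; only the parameter assembly runs as plain Python, since the actual API call needs AWS credentials.

```python
def build_query_request(sql, database, output_s3):
    """Assemble the parameters Athena's StartQueryExecution API expects."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},  # where Athena writes results
    }

params = build_query_request(
    "SELECT region, SUM(amount) FROM sales GROUP BY region",  # hypothetical table
    "analytics_db",                                           # hypothetical database
    "s3://my-athena-results/",                                # hypothetical results bucket
)
print(params["QueryString"])

# To actually run the query (requires boto3 and AWS credentials):
#   import boto3
#   athena = boto3.client("athena")
#   response = athena.start_query_execution(**params)
#   print(response["QueryExecutionId"])
```

Athena runs queries asynchronously, so the response returns a query execution ID you can poll for status and results.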

Now that we’ve covered Amazon Athena, let's talk about AWS Glue. You can do a few different things with AWS Glue. 

First, you can use AWS Glue's data integration engines (AWS Glue for Apache Spark, AWS Glue for Ray, and AWS Glue for Python Shell) to pull data from several sources, including Amazon S3, Amazon DynamoDB, Amazon RDS, and databases running on Amazon EC2. These engines also integrate with AWS Glue Studio for visual job authoring.

Once the data has been extracted and transformed, the list of supported destinations for loading expands to include targets like Amazon Redshift, data lakes, and data warehouses.

You can also use AWS Glue to run your ETL jobs. These jobs allow you to segregate customer data, protect customer data in transit and at rest, and access customer data only as needed in response to customer requests. When provisioning an ETL job, all you need to do is provide input data sources and output data targets in your virtual private cloud.
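A minimal sketch of provisioning such an ETL job with boto3's CreateJob API follows. The job name, IAM role ARN, and script location are hypothetical placeholders; the boto3 call itself is shown in comments because it requires AWS credentials.

```python
def build_etl_job_config(name, role_arn, script_s3, glue_version="4.0"):
    """Assemble the parameters for Glue's CreateJob API (a Spark ETL job)."""
    return {
        "Name": name,
        "Role": role_arn,  # IAM role the job assumes to read sources and write targets
        "Command": {
            "Name": "glueetl",            # the Spark ETL engine
            "ScriptLocation": script_s3,  # S3 path to the ETL script
            "PythonVersion": "3",
        },
        "GlueVersion": glue_version,
    }

config = build_etl_job_config(
    "orders-etl",                                  # hypothetical job name
    "arn:aws:iam::123456789012:role/GlueJobRole",  # hypothetical IAM role
    "s3://my-glue-scripts/orders_etl.py",          # hypothetical script path
)
print(config["Command"]["Name"])

# To actually create the job (requires boto3 and AWS credentials):
#   import boto3
#   glue = boto3.client("glue")
#   glue.create_job(**config)
```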

The final way you can use AWS Glue is through its Data Catalog, which lets you quickly discover and search multiple AWS datasets without moving the data. Once the data is cataloged, it's immediately available for search and query using Amazon Athena, Amazon EMR, and Amazon Redshift Spectrum.
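To illustrate that discovery step, a small sketch of searching cataloged tables by name. The database and table names are hypothetical; the pure-Python filter runs as-is, while the call that pages through a real Data Catalog is shown in comments since it needs AWS credentials.

```python
def match_tables(tables, keyword):
    """Filter Data Catalog table entries whose name contains a keyword."""
    return [t["Name"] for t in tables if keyword in t["Name"]]

# Shape of the entries returned by Glue's GetTables API (hypothetical names)
sample = [{"Name": "sales_2023"}, {"Name": "orders"}, {"Name": "sales_leads"}]
print(match_tables(sample, "sales"))  # ['sales_2023', 'sales_leads']

# Against a real catalog (requires boto3 and AWS credentials):
#   import boto3
#   glue = boto3.client("glue")
#   tables = []
#   for page in glue.get_paginator("get_tables").paginate(DatabaseName="analytics_db"):
#       tables.extend(page["TableList"])
#   print(match_tables(tables, "sales"))
```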

So, how can you get data from AWS Glue into Amazon Athena? Follow these steps:

  1. Start by uploading data to a data source. The most popular option is an S3 bucket, but DynamoDB tables and Amazon Redshift are also options. 

  2. Select your data source and create a classifier if necessary. A classifier reads the data and generates a schema if it recognizes the format. You can create custom classifiers to handle data formats the built-in classifiers don't recognize. 

  3. Create a crawler. 

  4. Set up a name for the crawler, then choose your data sources and add any custom classifiers to make sure AWS Glue recognizes the data correctly.

  5. Set up an Identity and Access Management (IAM) role so the crawler has the permissions it needs to read the data source and write to the Data Catalog.

  6. Create a database that will hold the dataset. Set when and how often the crawler runs to keep your data fresh and up to date.

  7. Run the crawler. This process can take a while depending on how big the dataset is. Once the crawler has successfully run, you’ll see changes to tables in the database.
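The steps above can also be scripted. Here is a hedged sketch of steps 3 through 7 using boto3's CreateCrawler and StartCrawler APIs; the crawler name, IAM role ARN, database, and S3 path are hypothetical, and the actual API calls are shown in comments because they require AWS credentials.

```python
def build_crawler_config(name, role_arn, database, s3_path, schedule=None):
    """Assemble the parameters for Glue's CreateCrawler API (steps 3-6 above)."""
    config = {
        "Name": name,                                    # step 4: crawler name
        "Role": role_arn,                                # step 5: IAM role
        "DatabaseName": database,                        # step 6: catalog database
        "Targets": {"S3Targets": [{"Path": s3_path}]},   # step 4: data source
    }
    if schedule:
        config["Schedule"] = schedule                    # step 6: refresh cadence (cron)
    return config

config = build_crawler_config(
    "sales-crawler",                                   # hypothetical names and paths
    "arn:aws:iam::123456789012:role/GlueCrawlerRole",
    "analytics_db",
    "s3://my-data-lake/sales/",
    schedule="cron(0 * * * ? *)",                      # hourly, for example
)
print(config["Name"])

# To actually create and run the crawler (requires boto3 and AWS credentials):
#   import boto3
#   glue = boto3.client("glue")
#   glue.create_crawler(**config)
#   glue.start_crawler(Name=config["Name"])   # step 7: run the crawler
```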

Now that you’ve completed this process, you can jump over to Amazon Athena and run the queries you need to filter the data and get the results you’re looking for.

