28

In an AWS Glue job, we can write a script and execute it as a job.

In AWS Lambda too, we can write the same script and execute the same logic provided in the above job.

So, my query is not what's the difference between an AWS Glue job and AWS Lambda, BUT I am trying to understand when an AWS Glue job should be preferred over AWS Lambda, especially when both do the same job. If both do the same job, then ideally I would blindly prefer using AWS Lambda itself, right?

Please try to understand my query.

john

4 Answers

27

Additional points:

Per this source, the Lambda FAQ, and the Glue FAQ:

Lambda supports a number of different languages (Node.js, Python, Go, Java, etc.), whereas Glue can only execute jobs written in Scala or Python.

Lambda can execute code from triggers fired by other services (SQS, Kafka, DynamoDB, Kinesis, CloudWatch, etc.), whereas Glue can be triggered by Lambda events, other Glue jobs, manually, or on a schedule (a sketch of a scheduled trigger follows below).
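
For illustration, here is a minimal sketch of the "from a schedule" option using boto3; the trigger and job names are hypothetical placeholders, not anything defined in this answer:

    import boto3

    glue = boto3.client("glue")

    # Create a scheduled trigger that starts a Glue job every night at 02:00 UTC.
    # "nightly-etl-trigger" and "my-etl-job" are placeholder names.
    glue.create_trigger(
        Name="nightly-etl-trigger",
        Type="SCHEDULED",
        Schedule="cron(0 2 * * ? *)",
        Actions=[{"JobName": "my-etl-job"}],
        StartOnCreation=True,
    )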

Lambda runs much faster for smaller tasks, whereas Glue jobs take longer to initialize because they use distributed processing. That being said, Glue leverages its parallel processing to run large workloads faster than Lambda.

Lambda looks to require more complexity/code to integrate with data sources (Redshift, RDS, S3, DBs running on ECS instances, DynamoDB, etc.), while Glue can easily integrate with these. However, with the addition of Step Functions, multiple Lambda functions can be written and ordered sequentially to reduce complexity and improve modularity, where each function could integrate with an AWS service (Redshift, RDS, S3, DBs running on ECS instances, DynamoDB, etc.); a sketch of this follows below.
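
As a rough sketch of that Step Functions approach, the state machine definition below chains two hypothetical Lambda functions (Extract and LoadToRedshift) in sequence; the function ARNs, state machine name, and IAM role are placeholders:

    import json
    import boto3

    # Amazon States Language definition that runs two hypothetical Lambda
    # functions one after the other.
    definition = {
        "StartAt": "Extract",
        "States": {
            "Extract": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Extract",
                "Next": "LoadToRedshift",
            },
            "LoadToRedshift": {
                "Type": "Task",
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:LoadToRedshift",
                "End": True,
            },
        },
    }

    sfn = boto3.client("stepfunctions")
    sfn.create_state_machine(
        name="etl-pipeline",  # placeholder name
        definition=json.dumps(definition),
        roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # placeholder role
    )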

Glue looks to have a number of additional components, such as the Data Catalog, a central metadata repository for viewing your data; a flexible scheduler that handles dependency resolution, job monitoring, and retries; AWS Glue DataBrew for cleaning and normalizing data with a visual interface; AWS Glue Elastic Views for combining and replicating data across multiple data stores; and the AWS Glue Schema Registry for validating streaming data schemas.
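
To make the Data Catalog point concrete, here is roughly what a minimal Glue (PySpark) job script looks like when it reads a table registered in the catalog; the database name, table name, and output path are placeholders:

    import sys

    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext())
    job = Job(glue_context)
    job.init(args["JOB_NAME"], args)

    # Read a table that is already registered in the Data Catalog (placeholder names).
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="raw_orders"
    )

    # Write the result back to S3 as Parquet (placeholder path).
    glue_context.write_dynamic_frame.from_options(
        frame=dyf,
        connection_type="s3",
        connection_options={"path": "s3://my-bucket/curated/orders/"},
        format="parquet",
    )

    job.commit()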

There are other examples I am missing, so feel free to comment and I can update.

deesolie
  • Good list. I would add Step Functions to the list of AWS services that Lambda integrates with as this brings state machine functionality to data processing using Lambda. – Bill Weiner Jun 30 '21 at 14:29
  • Oh wow, I didn't know this integration existed. Pretty cool! So it looks to help with reducing complexity and improving code modularity if a customer's existing processes already use Lambda functions. @BillWeiner, would you say this helps bridge the gap between Lambda and Glue? Reading additional documentation here, and in terms of ETL functionality, that looks to be the case (https://aws.amazon.com/step-functions/) – deesolie Jun 30 '21 at 20:29
  • Absolutely. Step Functions allow for flexibility in the overall execution of serverless workflows and enable cost-effective polling processes for Lambda. It is my preferred method for implementing ETL/ELT workflows (data movement orchestration). Glue, while easy to set up, very often breaks down (wrong data types and incorrect expectations of data formats) and is a quagmire to modify. Classic AWS service issue - it solves 70% of the problem easily, but you're SOL if your problem lands in the other 30%. Lambda with Step Functions is easily understandable and flexible enough to meet all needs. – Bill Weiner Jun 30 '21 at 21:32
  • Good to know, Bill, thanks for sharing this info! So Lambda can integrate with Redshift and other DBs, it just requires a bit more setup but is worth it compared to the complexities of Glue? – deesolie Jun 30 '21 at 22:07
  • The complexities arise with Glue when you need anything outside of "normal". Yes, I'd rather spend some bounded upfront time and have a flexible, extensible solution than start out easy and have to reset later. – Bill Weiner Jun 30 '21 at 22:36
11

Lambda has a maximum execution time of fifteen minutes. It can be used to trigger a Glue job as an event-based activity; that is, when a file lands in S3, for example, an event trigger can run a Glue job. Glue is a managed service for all data processing.

If the data volume is small, you may be able to handle it in Lambda, but if for some reason the process runs beyond fifteen minutes, the data processing would fail.
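
As a rough sketch of that pattern, a Lambda function subscribed to S3 event notifications could hand the heavy lifting to Glue like this (the Glue job name and argument key are hypothetical):

    import boto3

    glue = boto3.client("glue")

    def handler(event, context):
        # Pull the bucket and key of the file that just landed from the S3 event.
        record = event["Records"][0]
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Start the (placeholder) Glue job, passing the object location as a job argument.
        glue.start_job_run(
            JobName="my-etl-job",
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )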

Yuva
3

The answer to this can involve some foundational design decisions. What is this job doing? What kind of data are you dealing with? Is there a decision to be made about whether the task should be executed in a batch- or event-oriented paradigm?

Batch

This may be necessary or desirable because the task:

  • Is being done over large monolithic data (e.g., binary).
  • Relies on context of multiple records in a dataset such that they must be loaded into a single job.
  • Depends on the order of records.

I feel like, just as often, I see batch handling chosen by default because "this is the way we've always done it", but breaking from this approach could be worth considering.

Glue is built for batch operations. That said, with a current maximum execution time of 15 minutes and a maximum memory of 10 GB, Lambda has become capable of processing fairly large datasets in a single execution as well. It can be difficult to pin down a direct cost comparison without specifics of the workload. When it comes to development, I feel that Lambda has the edge as far as tooling to build, test, and deploy.
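
For reference, those Lambda ceilings are per-function settings; a minimal boto3 sketch of raising them (the function name is a placeholder):

    import boto3

    # Raise a function to Lambda's current maximums: 10,240 MB of memory and a
    # 900-second (15-minute) timeout. "csv-batch-processor" is a placeholder name.
    boto3.client("lambda").update_function_configuration(
        FunctionName="csv-batch-processor",
        MemorySize=10240,
        Timeout=900,
    )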

Event

In the case where your data consists of a set of records, it might behoove you to parse and "stream" them into Lambda. Consider a flow like:

  • CSV lands in S3.
  • S3 event triggers Lambda.
  • Lambda reads and parses CSV into discrete events, submits to another Lambda or publishes to SNS for downstream processing. Concurrent instances of this Lambda can be employed to speed up ingest, where each instance is responsible for certain lines of the S3 object.
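
A minimal sketch of that last step, assuming rows are small enough to fan out one by one and using a hypothetical SNS topic ARN:

    import csv
    import io
    import json

    import boto3

    s3 = boto3.client("s3")
    sns = boto3.client("sns")

    # Placeholder topic ARN for downstream processing.
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:record-events"

    def handler(event, context):
        # The S3 event notification carries the bucket and object key of the new CSV.
        record = event["Records"][0]
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        for row in csv.DictReader(io.StringIO(body)):
            # Publish each CSV row as its own event for downstream consumers.
            sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(row))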

This pushes all logic and error handling, as well as the resources required, down to the individual event/record level. Often, mechanisms such as dead-letter queues are employed for remediation. While the context of a given container persists across invocations - assuming the container has not been idle and torn down - Lambda should generally be considered stateless, such that the processing of an event/record is thought of as occurring within its own scope, outside that of others in the dataset.
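
For completeness, a dead-letter queue for events that fail all asynchronous retries can be attached roughly like this; the function name and queue ARN are placeholders:

    import boto3

    # Route events that exhaust their async retries to an SQS dead-letter queue
    # (both the function name and the queue ARN are placeholders).
    boto3.client("lambda").update_function_configuration(
        FunctionName="record-processor",
        DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:record-dlq"},
    )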

ormu5
  • great analysis, but also... I/O is freaking expensive. We do batches not because we've always been doing it this way, but because batching limits the number of I/Os. If I have a million events to process, I can definitely do it much faster and much cheaper by grabbing a few batches rather than invoking a million Lambda instances. Each Lambda invocation is internally an HTTP request, and that takes time. And then what? You put that data into a DB? A million inserts in parallel will just kill your DB, but well-managed batches will be handled very quickly without any hiccups – Kamil Janowski Jul 22 '22 at 09:21
  • Good points, Kamil. A few thoughts: - HTTP: in the era of SOA, this in and of itself is not a negative. Loose coupling brings inefficiencies but also benefits. - "Faster and cheaper": anecdotally, I see mixed opinions online about this. I may run some tests and will report back if I do. - DB: this depends on data domains. If I have a domain for "orders," I may not want an ad hoc Glue job writing directly to my orders table. I would likely push ETL'd records to an SQS queue for the designated service/Lambda that owns the table to insert them (yes, optionally in batches off the queue and into the DB). – ormu5 Sep 14 '22 at 18:22
0

Lambda has some limitations (you can find the Lambda limits here) and Glue has limitations as well (here), but Glue is much more powerful than Lambda. You can compare the limitations and decide when to use Glue.