Wednesday 28 April 2021

AWS Lambda

Concurrency

Your function's concurrency is the number of instances that serve requests at a given time.

For an initial burst of traffic, your functions' cumulative concurrency in a Region can reach between 500 and 3000, depending on the Region.

Burst concurrency quotas:

  • 3000 – US West (Oregon), US East (N. Virginia), Europe (Ireland)
  • 1000 – Asia Pacific (Tokyo), Europe (Frankfurt), US East (Ohio)
  • 500 – Other Regions
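
To see the concurrency quotas that apply to your account in a Region, you can call get-account-settings; a minimal sketch (the JMESPath query is just to narrow the output):

aws lambda get-account-settings --query 'AccountLimit'
# Returns, among other fields, ConcurrentExecutions (the Regional concurrency
# quota, 1000 by default) and UnreservedConcurrentExecutions.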

When requests come in faster than your function can scale, or when your function is at maximum concurrency, additional requests fail with a throttling error (429 status code).

Throttling surfaces as the error “Rate exceeded” and a 429 “TooManyRequestsException”.

If this error occurs, check whether you see throttling messages in Amazon CloudWatch Logs but no corresponding data points in the Lambda Throttles metric.

If the Throttles metric shows no data points, the throttling is happening on API calls made from within your Lambda function code rather than on the function invocations themselves.
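
As a rough sketch, you can pull the Throttles metric for a function with the CLI (myfunction and the time range are placeholders):

aws cloudwatch get-metric-statistics \
    --namespace AWS/Lambda \
    --metric-name Throttles \
    --dimensions Name=FunctionName,Value=myfunction \
    --start-time 2021-04-28T00:00:00Z \
    --end-time 2021-04-28T23:59:59Z \
    --period 300 \
    --statistics Sum
# No data points here, combined with "Rate exceeded" messages in the logs,
# points to throttling of API calls made inside the function code.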

Methods to resolve throttling include:

  • Configure reserved concurrency.
  • Use exponential backoff in your application code.
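
For example, reserved concurrency can be set with put-function-concurrency, and exponential backoff can be sketched as a simple retry loop around an invocation (the function name and values are placeholders):

# Reserve 100 concurrent executions for this function.
aws lambda put-function-concurrency \
    --function-name myfunction \
    --reserved-concurrent-executions 100

# Naive exponential backoff around a synchronous invoke.
delay=1
for attempt in 1 2 3 4 5; do
  aws lambda invoke --function-name myfunction response.json && break
  sleep $delay
  delay=$((delay * 2))
done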

Concurrency metrics:

  • ConcurrentExecutions
  • UnreservedConcurrentExecutions
  • ProvisionedConcurrentExecutions
  • ProvisionedConcurrencyInvocations
  • ProvisionedConcurrencySpilloverInvocations
  • ProvisionedConcurrencyUtilization
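
Several of these metrics only appear once provisioned concurrency has been configured on a version or alias, which can be done as follows (the alias name and count are placeholders):

aws lambda put-provisioned-concurrency-config \
    --function-name myfunction \
    --qualifier myalias \
    --provisioned-concurrent-executions 10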

Invocations

Synchronous:

  • CLI, SDK, API Gateway.
  • Result returned immediately.
  • Error handling happens client side (retries, exponential backoff etc.).
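
A synchronous invocation from the AWS CLI uses the default RequestResponse invocation type and writes the result to a local file (the function name and payload are placeholders; --cli-binary-format raw-in-base64-out is only needed with AWS CLI v2):

aws lambda invoke \
    --function-name myfunction \
    --cli-binary-format raw-in-base64-out \
    --payload '{"key": "value"}' \
    response.json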

Asynchronous:

  • S3, SNS, CloudWatch Events etc.
  • Lambda retries failed events twice by default (up to three attempts in total).
  • Processing must be idempotent (due to retries).
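
An asynchronous invocation uses --invocation-type Event, and the retry behaviour can be tuned with put-function-event-invoke-config; a sketch with placeholder names and values:

# Invoke asynchronously; Lambda queues the event and returns 202 immediately.
aws lambda invoke \
    --function-name myfunction \
    --invocation-type Event \
    --cli-binary-format raw-in-base64-out \
    --payload '{"key": "value"}' \
    response.json

# Optionally cap retries for asynchronous invocations (0-2 attempts).
aws lambda put-function-event-invoke-config \
    --function-name myfunction \
    --maximum-retry-attempts 2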

Event source mapping:

  • SQS, Kinesis Data Streams, DynamoDB Streams.
  • Lambda does the polling (polls the source).
  • Records are processed in order (except for SQS standard).
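
An event source mapping is created on the Lambda side, which then polls the source; for an SQS queue it might look like this (the ARN and batch size are placeholders):

aws lambda create-event-source-mapping \
    --function-name myfunction \
    --event-source-arn arn:aws:sqs:us-east-1:123456789012:myqueue \
    --batch-size 10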

Traffic Shifting

With the introduction of alias traffic shifting, it is now possible to implement canary deployments of Lambda functions with a single CLI call. By updating additional version weights on an alias, invocation traffic is routed to the new function version based on the weight specified.

Detailed CloudWatch metrics for the alias and version can be analyzed during the deployment, or other health checks performed, to ensure that the new version is healthy before proceeding.

The following example AWS CLI command points an alias to a new version, weighted at 5% (original version at 95% of traffic):

aws lambda update-alias --function-name myfunction --name myalias --routing-config '{"AdditionalVersionWeights" : {"2" : 0.05} }'
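
Once the new version proves healthy, the same command can promote it to receive all traffic by making it the alias's primary version and clearing the additional weights (the version number is illustrative):

aws lambda update-alias --function-name myfunction --name myalias --function-version 2 --routing-config '{"AdditionalVersionWeights" : {}}'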

AWS Batch

Batch jobs run as Docker containers (packaged as Docker images).

Dynamically provisions EC2 instances in a VPC.
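
As a minimal sketch, a container-based job definition can be registered with the CLI (the image URI, name, command, and resource values are placeholders):

aws batch register-job-definition \
    --job-definition-name myjobdef \
    --type container \
    --container-properties '{"image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage", "vcpus": 2, "memory": 2048, "command": ["python", "run.py"]}'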

Deployment Options:

  • Managed – AWS provisions and scales the compute environment for you (serverless with Fargate).
  • Unmanaged – you manage the compute resources yourself.

For managed deployments:

  • Choose your pricing model: On-demand or Spot.
  • Choose instance types.
  • Configure VPC/subnets.

Pay for underlying EC2 instances.

Schedule using CloudWatch Events.

Orchestrate with Step Functions.

Can use on-demand or Spot instances.

Multi-node parallel jobs can be used for HPC use cases.
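
Jobs are then submitted to a job queue against a job definition, for example (the names are placeholders, reusing the job definition sketched above):

aws batch submit-job \
    --job-name myjob \
    --job-queue myqueue \
    --job-definition myjobdef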

Comparison with Lambda:

  • No execution time limit (Lambda is 15 minutes)
  • Any runtime (Lambda has limited runtimes)
  • Uses EBS for storage (Lambda has limited scratch space; can use EFS if in VPC)
  • Batch on EC2 is not serverless.
  • Can use Fargate with Batch for a serverless architecture.
  • Lambda is serverless and you pay only for execution time.
  • Batch can be more expensive, since you pay for the underlying instances.

Amazon EC2

Placement Groups

Cluster Placement Groups:

  • A cluster placement group is a logical grouping of instances within a single Availability Zone.
  • A cluster placement group can span peered VPCs in the same Region.
  • Instances in the same cluster placement group enjoy a higher per-flow throughput limit for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.
  • Cluster placement groups are recommended for applications that benefit from low network latency, high network throughput, or both.
  • They are also recommended when the majority of the network traffic is between the instances in the group.
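
As an illustration, a cluster placement group can be created and instances launched into it (the group name, AMI, and instance type are placeholders):

aws ec2 create-placement-group --group-name my-cluster-pg --strategy cluster

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5n.18xlarge \
    --count 2 \
    --placement GroupName=my-cluster-pg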

Network Adapters

An Elastic Fabric Adapter (EFA) is a network device that you can attach to your Amazon EC2 instance to accelerate High Performance Computing (HPC) and machine learning applications.

EFA enables you to achieve the application performance of an on-premises HPC cluster, with the scalability, flexibility, and elasticity provided by the AWS Cloud.
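
A sketch of attaching an EFA at launch time (the AMI, subnet, security group, and placement group are placeholders; the instance type must support EFA, as c5n.18xlarge does):

aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5n.18xlarge \
    --count 1 \
    --network-interfaces DeviceIndex=0,SubnetId=subnet-0123456789abcdef0,Groups=sg-0123456789abcdef0,InterfaceType=efa \
    --placement GroupName=my-cluster-pg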

AWS Elastic Beanstalk

With AWS Elastic Beanstalk you can perform a blue/green deployment, where you deploy the new version to a separate environment, and then swap CNAMEs of the two environments to redirect traffic to the new version instantly.
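
The CNAME swap itself can be done from the CLI once the new environment is ready (the environment names are placeholders):

aws elasticbeanstalk swap-environment-cnames \
    --source-environment-name my-blue-env \
    --destination-environment-name my-green-env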
