Brotli in Lambda Function S3 Upload Event

This is part two of the Serverless-AllTheThings three-part blog series. You can review the code from this series in the Serverless-AllTheThings repo.

In part one of this blog series, I covered what serverless is, what it isn't, and some of my serverless lovers' spats over the years. In part two I'm digging into an example serverless front end and providing you with all the code you need to spin up an environment of your own. In part three I'll explore an example serverless back end.

KISS

The most rudimentary way to host a serverless website on AWS would be to put all of the static files in an S3 bucket and configure the bucket to serve the files as a website. It satisfies the requirements to be serverless (i.e. AWS manages the servers and those servers scale to handle the demand) and is extremely cost effective. For many use cases, I would advise stopping here to enjoy the simplicity. See below for an example AWS CloudFormation S3 bucket template.

S3BucketStatic:
  Properties:
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: "AES256"
    BucketName: "serverless-allthethings"
  Type: "AWS::S3::Bucket"
CloudFormation S3 Bucket

However, it's not as fast (for the end user) as it could be, nor dynamic, nor sexy. So the next step would be to put a CloudFront distribution in front of the S3 bucket. This allows all of the S3 files to be served from a CDN of nearly 200 edge locations around the world. In layman's terms, this means end users will be able to access your website files much faster. See below for an example AWS CloudFormation CloudFront distribution template.

CloudFrontDistribution:
  Properties:
    DistributionConfig:
      Comment: !Ref "AWS::StackName"
      DefaultCacheBehavior:
        AllowedMethods:
          - "GET"
          - "HEAD"
          - "OPTIONS"
        CachedMethods:
          - "GET"
          - "HEAD"
        Compress: false
        DefaultTTL: 31536000
        ForwardedValues:
          Headers:
            - "accept-encoding"
            - "x-uri"
          QueryString: false
        MaxTTL: 31536000
        MinTTL: 0
        SmoothStreaming: false
        TargetOriginId: !Join [
          "",
          [
            "s3:",
            !ImportValue "ServerlessAllTheThingsS3BucketStaticName",
          ],
        ]
        ViewerProtocolPolicy: "redirect-to-https"
      Enabled: true
      HttpVersion: "http2"
      IPV6Enabled: false
      Origins:
        - DomainName: !ImportValue "ServerlessAllTheThingsS3BucketStaticDomainName"
          Id: !Join [
            "",
            [
              "s3:",
              !ImportValue "ServerlessAllTheThingsS3BucketStaticName",
            ],
          ]
          OriginPath: !Sub "/${BranchSlug}"
          S3OriginConfig:
            OriginAccessIdentity:
              !Join [
                "",
                [
                  "origin-access-identity/cloudfront/",
                  !ImportValue "ServerlessAllTheThingsCloudFrontCloudFrontOriginAccessIdentityId",
                ],
              ]
      PriceClass: "PriceClass_All"
  Type: "AWS::CloudFront::Distribution"
CloudFormation CloudFront Distribution

Well, that solved the speed issue, and it is a little sexier, but it's still not dynamic. To solve this problem, I would point you to Lambda@Edge. Lambda@Edge is AWS's serverless compute functionality (i.e. Lambda) that runs on the edge (i.e. CloudFront edge locations). CloudFront provides us with four places we can slide a Lambda@Edge function into its request/response flow (called event types):

  • viewer request
  • origin request
  • origin response
  • viewer response

This affords you the opportunity to dynamically modify the request and/or response to your liking.

 
Request Flow Through CloudFront and Lambda@Edge Event Types
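To make the event types concrete, here is a minimal sketch (in TypeScript, and not taken from the Serverless-AllTheThings repo) of a handler wired to the viewer request event type; the other three event types use the same shape, with the response also available on the event for the response types. The x-example header name is just a placeholder.

// Minimal viewer-request handler sketch (hypothetical, not from the repo).
// CloudFront hands the request to the function in event.Records[0].cf.request;
// returning the request lets it continue through CloudFront, while returning
// an object with a status generates a response directly at the edge.
export const handler = async (event: any) => {
  const request = event.Records[0].cf.request;

  // Example modification: tag the request with a custom header (assumed name).
  request.headers["x-example"] = [{ key: "X-Example", value: "demo" }];

  return request;
};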

Livin' Life On The Edge

To put things in CloudFront terminology, what we have so far is an S3 bucket origin server that contains all of our static files (js, css, images, etc.). What we want is to deliver a server-side rendered (SSR) single page app (SPA) via one of the Lambda@Edge viewer/origin request/response event types. All of the event types would work, but for a number of reasons (viewer request/response limits and sequential cold starts to name a couple) I have found it ideal to run SSR SPAs as an origin response. See below for an example AWS CloudFormation template with Lambda function and version resources and how they are associated with a CloudFront template.

LambdaFunctionApp:
  DependsOn:
    - "IamRoleLambdaApp"
  Properties:
    Code:
      S3Bucket: !ImportValue "ServerlessAllTheThingsS3BucketArtifactsName"
      S3Key: !Sub "${Commit}/app/lambda.zip"
    Description: !Sub "${AWS::StackName}-app"
    FunctionName: !Sub "${AWS::StackName}-app"
    Handler: "index.handler"
    MemorySize: 128
    Role: !GetAtt "IamRoleLambdaApp.Arn"
    Runtime: "nodejs8.10"
    Timeout: 20
  Type: "AWS::Lambda::Function"

LambdaVersionApp:
  DependsOn:
    - "LambdaFunctionApp"
  Properties:
    FunctionName: !Ref "LambdaFunctionApp"
  Type: "AWS::Lambda::Version"

CloudFrontDistribution:
  DependsOn:
    - "LambdaVersionApp"
  Properties:
    DistributionConfig:
      DefaultCacheBehavior:
        ...
        LambdaFunctionAssociations:
          - EventType: "origin-response"
            LambdaFunctionARN: !Ref "LambdaVersionApp"
  Type: "AWS::CloudFront::Distribution"
CloudFormation Lambda Function and Version and Relevant CloudFront Association

As for the other three Lambda@Edge event types, they can be used for a number of other purposes, including:

  • Compression handling
  • HTTP status code tweaking
  • URL redirects and rewrites (sketched below)
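As a hedged illustration of the last item, a viewer request function can return a response directly to handle redirects, or rewrite the URI before CloudFront checks its cache. The paths below are made-up examples, not routes from the repo.

// Hypothetical viewer-request handler: redirect a retired path and normalize
// trailing slashes. Returning an object with a status generates the response
// at the edge; returning the request lets it continue through CloudFront.
export const handler = async (event: any) => {
  const request = event.Records[0].cf.request;

  // 301 redirect for an assumed legacy path.
  if (request.uri === "/old-home") {
    return {
      status: "301",
      statusDescription: "Moved Permanently",
      headers: {
        location: [{ key: "Location", value: "/" }],
      },
    };
  }

  // Rewrite: strip a trailing slash so /about/ and /about share a cache entry.
  if (request.uri.length > 1 && request.uri.endsWith("/")) {
    request.uri = request.uri.slice(0, -1);
  }

  return request;
};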

Putting It All Together

Let's take a look at a few requests to see how they're handled via the origin request and origin response Lambda@Edge functions in the Serverless-AllTheThings GitHub repo.

Static Request – #1

First up is a simple static request for a file with the URI /favicon.ico.

  1. CloudFront checks to see if it is cached (let's assume it isn't)
  2. The origin request Lambda@Edge function determines the ideal compression algorithm for static assets (let's assume it is brotli) and updates the S3 file path (see the sketch after this list)
  3. S3 contains favicon.ico.br and responds with its contents and HTTP status code 200
  4. The origin response Lambda@Edge function passes it through
  5. CloudFront caches the response based on the accept-encoding header
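Here is a minimal sketch of what step 2 could look like. It is an illustrative reimplementation rather than the repo's actual function, and the .br/.gzip suffix convention is an assumption.

// Hypothetical origin-request handler sketch: choose a pre-compressed variant
// of a static asset based on the viewer's accept-encoding header and rewrite
// the S3 key before CloudFront forwards the request to the bucket.
export const handler = async (event: any) => {
  const request = event.Records[0].cf.request;

  // accept-encoding is whitelisted for forwarding in the cache behavior above,
  // so CloudFront keeps one cache entry per encoding value.
  const acceptEncoding = request.headers["accept-encoding"]
    ? request.headers["accept-encoding"][0].value
    : "";

  // Assumed convention: pre-compressed copies are uploaded alongside the
  // originals as <file>.br and <file>.gzip. A real implementation would also
  // respect q-values; this sketch only checks for presence.
  if (acceptEncoding.includes("br")) {
    request.uri = `${request.uri}.br`;
  } else if (acceptEncoding.includes("gzip")) {
    request.uri = `${request.uri}.gzip`;
  }

  return request;
};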

Static Request – #2

Now that favicon.ico.br is cached in CloudFront, let's look at another request with the same URI and accept-encoding header.

  1. CloudFront checks for and returns the cached contents of favicon.ico.br

Static Request – #3

Suppose there is a new request where the accept-encoding header prioritizes gzip. In this case, even if favicon.ico.br is already cached, the first request would follow static flow #1 above and return the contents of favicon.ico.gzip. Subsequent requests with the same gzip-prioritized accept-encoding header would then follow static flow #2 above (where CloudFront would return the contents of favicon.ico.gzip).
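"Prioritizes" comes down to how the accept-encoding header is parsed. The helper below is a hypothetical example (not from the repo) of choosing between brotli, gzip, and identity while respecting q-values.

// Hypothetical helper: pick the preferred encoding from an accept-encoding
// header, respecting q-values and falling back to the listed order.
const pickEncoding = (
  acceptEncoding: string,
  supported: string[] = ["br", "gzip", "identity"],
): string => {
  const candidates = acceptEncoding
    .split(",")
    .map((part, index) => {
      const [name, ...params] = part.trim().split(";");
      const qParam = params.find((p) => p.trim().startsWith("q="));
      const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1;
      return { name: name.trim().toLowerCase(), q, index };
    })
    .filter((c) => supported.includes(c.name) && c.q > 0)
    .sort((a, b) => b.q - a.q || a.index - b.index);

  return candidates.length > 0 ? candidates[0].name : "identity";
};

// A request that prioritizes gzip over brotli via q-values resolves to gzip,
// so it produces a separate CloudFront cache entry (favicon.ico.gzip).
console.log(pickEncoding("br;q=0.8, gzip;q=1.0")); // "gzip"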

Dynamic Request – #1

Now let's take a look at a dynamic request for the URI / (i.e. the Home view).

  1. CloudFront checks to see if it is cached (let's assume it isn't)
  2. The origin request Lambda@Edge function determines the ideal compression algorithm for static assets and updates the S3 file path (let's assume it is brotli)
  3. S3 does not contain a file and responds with HTTP status code 403
  4. The origin response Lambda@Edge function initiates the SSR process, renders the html for the Home view, determines the ideal compression algorithm for dynamic assets (let's assume it is gzip), compresses the html, sets a short-lived cache-control timeout, and converts the HTTP status code to 200 (see the sketch after this list)
  5. CloudFront caches the response based on the accept-encoding header
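A minimal sketch of step 4 follows. The renderApp stub, the gzip-only compression, and the 60-second cache-control value are illustrative assumptions, not the repo's actual implementation.

import { gzipSync } from "zlib";

// Stand-in for the SPA's server-side rendering entry point (hypothetical).
const renderApp = (uri: string): string =>
  `<!doctype html><html><body><div id="app">Rendered ${uri}</div></body></html>`;

// Hypothetical origin-response handler sketch: S3 answers 403 when no file
// exists for the URI, which is treated as "render this route with SSR".
export const handler = async (event: any) => {
  const { request, response } = event.Records[0].cf;

  // Real static files (e.g. favicon.ico.br above) pass straight through.
  if (response.status !== "403") {
    return response;
  }

  const html = renderApp(request.uri);

  // gzip keeps the sketch dependency-free; brotli for dynamic responses would
  // need an extra library on the nodejs8.10 runtime shown in the template.
  const body = gzipSync(html).toString("base64");

  return {
    status: "200",
    statusDescription: "OK",
    bodyEncoding: "base64",
    body,
    headers: {
      "content-type": [{ key: "Content-Type", value: "text/html; charset=utf-8" }],
      "content-encoding": [{ key: "Content-Encoding", value: "gzip" }],
      // Assumed short-lived timeout so CloudFront re-renders periodically.
      "cache-control": [{ key: "Cache-Control", value: "max-age=60" }],
    },
  };
};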

Dynamic Request – #2

Now that the Home view is cached in CloudFront, let's look at another request with the same URI and accept-encoding header within the cache-control timeout.

  1. CloudFront checks for and returns the cached, rendered page

Dynamic Request – #3

Once the cache-control timeout has expired, the next request will follow dynamic request flow #1 above, and then subsequent requests within the new cache-control timeout will follow dynamic request flow #2 above.

Dynamic Request – #4

Suppose there is a new request where the accept-encoding header prioritizes identity (i.e. no encoding). In this case, even if the Home view is already cached, the first request would follow dynamic request flow #1 above and return the rendered view without compression. Subsequent requests with the same identity-prioritized accept-encoding header would then follow dynamic request flow #2 above.

Instantaneous Scaling

The above section covered the basic requests under a minimal volume of users, but it doesn't capture the true value that serverless brings to the table. Let's suppose your website is featured in a prime position on Hacker News and a tsunami of users heads your way. How do you think the website will handle the flood of traffic?

Flood Scenario

Seconds after your website goes viral, let's assume 1,000,000 new users simultaneously request your website home page (/) every minute (i.e. all one million requests happen in the first second of each minute).

Minute 1

  1. One of the requests will follow dynamic request flow #1 above. The other 999,999 requests will pause until the first request completes and then immediately respond with the cached content
    • This assumes all of the accept-encoding headers are the same. If not, there will be one request per unique accept-encoding header

Minute 2+

  1. All requests will receive the cached response until the cache-control timeout expires. At that point they will follow the minute one flow above.

No requests are dropped, and the handling of traffic instantly scales from zero to one million requests per minute. In non-Lambda environments, variations in traffic are traditionally handled by scaling a cluster of servers up and down, but that is far from instantaneous (i.e. during unexpected scale-up events users will experience slower response times and/or requests will be dropped). Furthermore, while those servers are up, you are paying for them the whole time regardless of whether or not they are handling requests. Now what about the costs? Won't one million serverless requests per minute be really, really expensive? Let's do the math.

Cost

Lambda@Edge functions cost $0.60 per one million requests and $0.00000625125 for every 128MB-second used (metered in 50ms increments). In this scenario, let's assume:

  • Origin request functions complete in under 50ms
  • Origin response functions complete in under 150ms
  • This flood of traffic occurs for 8 hours a day and there are zero requests in the other 16 hours each day
  • There are 200 unique requests every 5 minutes (due to accept-encoding headers, expired cache-control and unique URIs)
  • Average transfer is 30KB

8 hours x 60 minutes x 1 million requests = 480 million requests
8 hours x 60 minutes / 5 minutes x 200 requests = 19,200 unique requests
480 million x (30/1024/1024)GB = 13.4 TB transferred

Lambda cost = 0.0192 x $0.60 + 19,200 x (0.05 + 0.15) seconds x $0.00000625125 = $0.01 + $0.02 = $0.03

In other words, the dynamic server cost (i.e. Lambda) for one flood day is about three cents. The supporting infrastructure (CloudFront and Route 53) is likely used in both serverless and non-serverless environments and is where the bulk of the costs will reside, but for completeness the math is below. In a worst-case scenario where nothing is cached by users' browsers:

CloudFront transfer cost = $0.085/GB x 10TB + $0.08/GB x 3.4TB = $1,150
CloudFront request cost = $0.1 x (480 million / 10,000) = $4,800
CloudFront total cost = $5,950

Route 53 cost = $0.40 x 480 = $192

Note: These are back-of-envelope calculations; actual costs will vary.
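For anyone who wants to vary the assumptions, the snippet below reproduces the Lambda portion of the back-of-envelope math; every price and count comes from the figures already stated above.

// Reproduces the back-of-envelope Lambda@Edge figures from this post.
const requestPricePerMillion = 0.6;        // $0.60 per one million requests
const pricePer128MbSecond = 0.00000625125; // $ per 128MB-second

const totalRequests = 8 * 60 * 1_000_000;           // 480,000,000 per flood day
const uniqueRequests = ((8 * 60) / 5) * 200;        // 19,200 reach Lambda@Edge
const transferTb = (totalRequests * (30 / 1024 / 1024)) / 1024; // ~13.4 TB

const lambdaRequestCost = (uniqueRequests / 1_000_000) * requestPricePerMillion;
const lambdaDurationCost = uniqueRequests * (0.05 + 0.15) * pricePer128MbSecond;

// Rounded per component as in the post: ~$0.01 + ~$0.02 ≈ $0.03 per flood day.
console.log(transferTb.toFixed(1), "TB transferred");
console.log(lambdaRequestCost.toFixed(3), lambdaDurationCost.toFixed(3));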

Performance

Serverless is faster than fast. Quicker than quick. It is Lightning! (Ka-Chow!)

For requests where no Lambda functions are warm and nothing has been cached by CloudFront (i.e. the worst-case scenario), it takes about 1.61 seconds for the request to complete and the DOM content to be loaded.

For requests where Lambda functions are warm and everything has been cached by CloudFront, it takes about 119 milliseconds for the request to complete and the DOM content to be loaded.

In Summary

Serverless is awesome and is the perfect pick for both static and dynamic front ends. Stay tuned for part three of this serverless blog series, where we'll explore an example serverless back end and provide you with everything you need to spin up an environment of your own.


Source: https://moduscreate.com/blog/serverless-allthethings-2/
