A Performance Perspective for Graviton-Based Lambda Functions

Written by Oguzhan Ozdemir, Solutions Engineer @Thundra
Published September 2021 · 2 minutes to read

Introduction

AWS Lambda is constantly growing, and over time workloads start to push its boundaries. AWS has been working on new hardware architectures to improve the price-performance ratio. AWS Graviton2 is a custom-silicon, 64-bit Arm processor that is now coming to AWS Lambda, giving serverless customers the same leverage that other Graviton2-backed AWS services already enjoy. According to AWS, Lambda functions powered by Graviton2 offer up to 34% better price performance than x86-based Lambda functions.

At Thundra, we didn’t want to miss the opportunity to be a launch partner for such an enhancement, so we decided to test it ourselves. The goal of this article is to run simple benchmarks comparing AWS Graviton2 (arm64) against the x86_64 architecture.

If you want to learn more about the AWS Graviton2 launch for AWS Lambda, be sure to visit the announcement page.

Benchmarks

Our test case consists of the Python script below, which recursively calculates the nth Fibonacci number to keep the processor busy. It’s a bit cliché, but it’s simple. To keep the test consistent, we settled on the 35th number as the sweet spot for our runs.

```python
import os
import multiprocessing


def handler(event, context):
    # Fibonacci index to compute; 35 is heavy enough to keep a core busy for seconds.
    nth = 35

    # Spawn PROCESS_COUNT worker processes (default: 1), each computing fib(nth).
    processes = list()
    for i in range(int(os.getenv('PROCESS_COUNT', 1))):
        print(f'starting process #{i}.')
        process = multiprocessing.Process(
            target=calculateFib, args=(nth,))
        processes.append(process)
        process.start()

    # Wait for every worker to finish before returning.
    for i, process in enumerate(processes):
        print(f'joining process #{i}.')
        process.join()
        print(f'finished process #{i}.')

    response = {
        "statusCode": 200
    }
    return response


def calculateFib(nth):
    # Naive recursive Fibonacci: intentionally CPU-intensive.
    if nth <= 2:
        return 1
    else:
        return calculateFib(nth - 2) + calculateFib(nth - 1)
```
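
If you want to sanity-check the handler locally before deploying it, a small wrapper like the one below is enough. It assumes the script above is saved as handler.py; the wrapper file name is ours, not part of the original setup:

```python
# local_run.py (hypothetical file name; assumes the handler above is saved as handler.py)
import os
import handler

if __name__ == '__main__':
    os.environ['PROCESS_COUNT'] = '4'
    # Expect {'statusCode': 200} once all worker processes have joined.
    print(handler.handler({}, None))
```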

We ran this script for every combination of (number of processes, RAM, architecture) drawn from the following values; a sketch of a driver that automates these runs follows the list.

  • Number of processes: 1, 2, 4, 8, 16
  • RAM: 256, 512, 1024, 2048, 4096, 8192, 10240 MB
  • Architecture: x86_64, arm64 (AWS Graviton2)
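
We are not publishing our exact benchmark harness here, but a minimal driver along the following lines can sweep the whole matrix with boto3. The function name is a placeholder, and the x86_64 and arm64 variants are assumed to be deployed as two separate functions that are swept one after the other:

```python
import json
import boto3

lambda_client = boto3.client('lambda')

# Hypothetical function name; run the sweep once per architecture variant.
FUNCTION_NAME = 'fib-benchmark-x86'
MEMORY_SIZES = [256, 512, 1024, 2048, 4096, 8192, 10240]
PROCESS_COUNTS = [1, 2, 4, 8, 16]

for memory in MEMORY_SIZES:
    for count in PROCESS_COUNTS:
        # Apply this (RAM, process count) combination to the function.
        lambda_client.update_function_configuration(
            FunctionName=FUNCTION_NAME,
            MemorySize=memory,
            Environment={'Variables': {'PROCESS_COUNT': str(count)}},
        )
        # Wait until the configuration update has finished rolling out.
        lambda_client.get_waiter('function_updated').wait(FunctionName=FUNCTION_NAME)

        # Synchronous invocation; the billed duration itself is reported in the
        # function's CloudWatch Logs REPORT line for each request.
        response = lambda_client.invoke(
            FunctionName=FUNCTION_NAME,
            Payload=json.dumps({}).encode('utf-8'),
        )
        print(memory, count, response['StatusCode'])
```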

After running all the combinations, we noticed some clear trends. The most important is probably the billed duration: there is a significant difference between the two architectures. Even allowing for a margin of error and other factors at play, Graviton2 still comes out clearly ahead.

As you might guess, the duration decreases as the allocated RAM increases, since Lambda allocates CPU in proportion to memory. For a single-process invocation, you don’t get much gain beyond 2048 MB of RAM. For 16-process invocations, the gap keeps widening, as you can see in the tables below.

The raw billed-duration data (in milliseconds) gives you the big picture.

x86_64

| Processes | 256 MB | 512 MB | 1024 MB | 2048 MB | 4096 MB | 8192 MB | 10240 MB |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 18952 | 9571 | 4752 | 2709 | 2746 | 2828 | 2731 |
| 2 | 38428 | 19242 | 9687 | 4803 | 2742 | 2737 | 2790 |
| 4 | 76149 | 37965 | 19007 | 9498 | 4904 | 2777 | 2804 |
| 8 | 155968 | 76986 | 38184 | 18914 | 9578 | 4891 | 4359 |
| 16 | 305743 | 156496 | 76358 | 38699 | 19274 | 9703 | 7723 |

arm64

| Processes | 256 MB | 512 MB | 1024 MB | 2048 MB | 4096 MB | 8192 MB | 10240 MB |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 15570 | 7830 | 4105 | 2481 | 2478 | 2513 | 2494 |
| 2 | 30856 | 15482 | 8259 | 4062 | 2482 | 2482 | 2572 |
| 4 | 61394 | 30588 | 15378 | 7899 | 4270 | 2491 | 2498 |
| 8 | 121505 | 61284 | 30533 | 15398 | 7942 | 4430 | 3697 |
| 16 | 244346 | 121374 | 61116 | 30660 | 15409 | 8124 | 6541 |

In the end, we can clearly see the advantage of AWS Graviton2 on CPU-intensive tasks. Overall, running AWS Lambda on arm64 brings considerable savings with the right setup; a quick calculation from the raw data above illustrates the gap (see the sketch below).
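
As a rough illustration, a few lines of Python turn the 256 MB billed durations from the tables above into percentage improvements (the numbers are copied from the tables; the helper itself is just for illustration):

```python
# Billed durations in milliseconds at 256 MB, copied from the tables above.
x86_256 = {1: 18952, 2: 38428, 4: 76149, 8: 155968, 16: 305743}
arm_256 = {1: 15570, 2: 30856, 4: 61394, 8: 121505, 16: 244346}

for processes in sorted(x86_256):
    x86 = x86_256[processes]
    arm = arm_256[processes]
    # Relative reduction in billed duration when moving from x86_64 to arm64.
    improvement = (x86 - arm) / x86 * 100
    print(f'{processes:>2} processes: {improvement:.1f}% shorter billed duration on arm64')
```

At 256 MB this works out to roughly 18–22% shorter billed durations on arm64, and that is before the lower per-millisecond price of arm64 Lambda is factored in.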

Outro

In conclusion, we can see the advantage of AWS Graviton2 in both price and performance. Of course, real-world workloads are far more complicated and have many more variables at play, but this should give us a rough idea of how much improvement we can expect. Getting started with AWS Graviton2 is easy if you follow this GitHub repository. To quickly debug your applications, go to https://start.thundra.io/ and create a free account.
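
Opting into Graviton2 is mainly a matter of selecting the arm64 architecture at deployment time. As a minimal sketch, assuming a zipped deployment package and an existing execution role (the function name, role ARN, and package path below are placeholders), the architecture can be set with boto3:

```python
import boto3

lambda_client = boto3.client('lambda')

# Hypothetical names and paths: replace with your own package, role, and function.
with open('function.zip', 'rb') as package:
    lambda_client.create_function(
        FunctionName='fib-benchmark-arm64',
        Runtime='python3.9',
        Role='arn:aws:iam::123456789012:role/lambda-execution-role',
        Handler='handler.handler',
        Code={'ZipFile': package.read()},
        Architectures=['arm64'],  # opt in to Graviton2; omit or use ['x86_64'] otherwise
        MemorySize=2048,
        Timeout=300,
    )
```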
