NOTE: This post builds on top of my previous post, and assumes you are familiar with deploying to AWS Lambda from a Docker container.
In this post I will describe how to host a .NET Web API on AWS Lambda – it’s actually really easy! I will also explain, step by step, how to add Terraform support so the required AWS resources can be set up automatically.
Create a Lambda ASP.NET Core Minimal API project
Install the AWS Toolkit for Visual Studio, if you don’t already have it. Then go ahead and create a new Lambda ASP.NET Core Minimal API project and take a look at the Program.cs file. It is identical to Microsoft’s generic Minimal API template, with the exception of the line that adds AWS Lambda hosting.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
// HttpApi matches the aws_apigatewayv2 HTTP API (payload format 2.0) we provision with Terraform below
builder.Services.AddAWSLambdaHosting(LambdaEventSource.HttpApi);
var app = builder.Build();
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.MapGet("/", () => "Welcome to running ASP.NET Core Minimal API on AWS Lambda");
app.Run();
Pretty neat if you ask me! No special syntax, classes, or decoration with custom attributes. This makes the migration of an existing API to AWS Lambda much, much easier.
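The AddAWSLambdaHosting method comes from the Amazon.Lambda.AspNetCoreServer.Hosting NuGet package, which the Lambda template already references for you. If you are migrating an existing API instead, add the package reference yourself (the version number shown here is illustrative; use the latest):

```xml
<ItemGroup>
  <!-- Provides AddAWSLambdaHosting; pick the current version from NuGet -->
  <PackageReference Include="Amazon.Lambda.AspNetCoreServer.Hosting" Version="1.7.0" />
</ItemGroup>
```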
Add a Dockerfile
To be able to host ASP.NET in AWS Lambda, we need to wrap it in a Docker container. Instead of the very simple Dockerfile I used in the last post, let’s add something a bit closer to the Dockerfile we’d want in production.
#if (!ServerLess)
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
#else
FROM public.ecr.aws/lambda/dotnet:8 AS base
#endif
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ENV DOTNET_EnableWriteXorExecute=0
ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["NuGet.Config", "."]
COPY ["src/TrailheadTechnology.API/TrailheadTechnology.API.csproj", "src/TrailheadTechnology.API/"]
# Restore before copying the full source so the restore layer is cached between builds
RUN dotnet restore "src/TrailheadTechnology.API/TrailheadTechnology.API.csproj"
COPY . .
WORKDIR "/src/src/TrailheadTechnology.API"
RUN dotnet build "TrailheadTechnology.API.csproj" -c $BUILD_CONFIGURATION -o /app/build
FROM build AS publish
RUN dotnet publish "TrailheadTechnology.API.csproj" -c $BUILD_CONFIGURATION -o /app/publish /p:UseAppHost=false
FROM base AS final
#if (!ServerLess)
WORKDIR /app
#else
WORKDIR /var/task
#endif
COPY --from=publish /app/publish .
#if (!ServerLess)
CMD ["dotnet", "TrailheadTechnology.API.dll"]
#else
CMD ["TrailheadTechnology.API"]
#endif
The #if, #else, and #endif conditional directives highlight the differences between the Dockerfile for an API hosted as a Lambda and the one for a standard Web API. The primary differences are:
- The serverless API has a different base image
- The serverless CMD doesn’t contain the ‘dotnet’ argument
The serverless Lambda base image already contains the .NET hosting environment (think of it as a bootstrapper), so we only need to provide the assembly name as the CMD argument.
Build and publish the image (if you’re unsure how, take a look at my previous post about deploying to AWS Lambda from a container).
Add the Serverless Terraform Module
Now that we have our API wrapped in a Docker container and ready to deploy to AWS Lambda, let’s create a Terraform module that provisions the AWS resources for it. Of course, we could do this manually, but by scripting it in an infrastructure-as-code style, we can put it into our DevOps process and make sure we’re always deploying to cloud resources that are set up correctly.
Our Terraform module will use the following variables:
variable "workload_name" {
  type = string
}

variable "environment" {
  type = string
}

variable "secret_arn" {
  type = string
}

variable "ecr_name" {
  type    = string
  default = "threg"
}

variable "ecr_image_version" {
  type = string
}
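For reference, this is roughly how the module might be consumed from a root configuration. The module path, the values, and the `aws_secretsmanager_secret.connection_string` resource are placeholders for your own setup:

```hcl
module "serverless_api" {
  # The source path is an assumption; adjust it to your repository layout
  source = "./modules/serverless-api"

  workload_name     = "trailhead-api"
  environment       = "dev"
  secret_arn        = aws_secretsmanager_secret.connection_string.arn
  ecr_name          = "threg"
  ecr_image_version = "1.0.0"
}
```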
I will discuss the module piece by piece, but feel free to concatenate the pieces into a single file; it will work.
This next part sets up the data source for the existing ECR repository (you might consider creating the repository as part of the module instead).
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.14.0"
    }
  }
  required_version = ">= 1.4.6"
}

data "aws_caller_identity" "current" {}

data "aws_ecr_repository" "trailhead-ecr-repo" {
  name = var.ecr_name
}
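If you would rather have the module own the repository instead of looking it up, a resource block along these lines could replace the data source above. This is a sketch; settings like image scanning and lifecycle policies are up to you:

```hcl
resource "aws_ecr_repository" "trailhead-ecr-repo" {
  name = var.ecr_name

  # Optional but useful: scan images for vulnerabilities on push
  image_scanning_configuration {
    scan_on_push = true
  }
}
```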
Next, we define the locals to be used as environment variables.
locals {
  ApplicationName = var.workload_name
  Environment     = var.environment
  Account_id      = data.aws_caller_identity.current.account_id
}
The section below creates the API Gateway HTTP API and the integration. The integration type should be AWS_PROXY and the integration method should be POST. Keep in mind that the integration method DOES NOT imply the HTTP methods the API will accept; it is simply the method API Gateway uses for its internal call to the Lambda (very misleading).
# API Gateway
resource "aws_apigatewayv2_api" "gw" {
  name          = "${var.environment}-${var.workload_name}-gw"
  description   = "description here..."
  version       = "1.0.0"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_integration" "gw_integration" {
  api_id                 = aws_apigatewayv2_api.gw.id
  integration_type       = "AWS_PROXY"
  connection_type        = "INTERNET"
  description            = "test service"
  integration_method     = "POST"
  integration_uri        = aws_lambda_function.api_function.invoke_arn
  payload_format_version = "2.0"
  timeout_milliseconds   = 30000
}
Next, we create a route (feel free to adjust it to your needs) and a stage. Do yourself a favor and don’t forget to add $context.integrationErrorMessage to access_log_settings. That way, when something fails, you will find the server(less) error in the CloudWatch log group.
resource "aws_apigatewayv2_route" "test_service" {
  operation_name = "CatchAll"
  api_id         = aws_apigatewayv2_api.gw.id
  route_key      = "$default"
  target         = "integrations/${aws_apigatewayv2_integration.gw_integration.id}"
}

resource "aws_apigatewayv2_stage" "v1" {
  api_id      = aws_apigatewayv2_api.gw.id
  name        = "v1"
  auto_deploy = true

  default_route_settings {
    throttling_burst_limit = 50
    throttling_rate_limit  = 100
  }

  access_log_settings {
    destination_arn = aws_cloudwatch_log_group.lambda_logging.arn
    format          = "$context.identity.sourceIp,$context.requestTime,$context.httpMethod,$context.routeKey,$context.protocol,$context.status,$context.responseLength,$context.requestId,$context.integrationErrorMessage"
  }
}
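It is also handy to expose the stage’s invoke URL as a module output, so the address of the deployed API surfaces after terraform apply. The output name here is my own choice:

```hcl
output "api_url" {
  description = "Base URL of the deployed HTTP API stage"
  value       = aws_apigatewayv2_stage.v1.invoke_url
}
```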
The next section creates the Lambda function from the image and sets its environment variables.
# Lambda
resource "aws_cloudwatch_log_group" "lambda_logging" {
  name              = "/aws/api/${aws_lambda_function.api_function.function_name}"
  retention_in_days = 14
}

resource "aws_lambda_function" "api_function" {
  function_name = "${var.environment}-${var.workload_name}-lambda"
  timeout       = 30 # seconds
  image_uri     = "${data.aws_ecr_repository.trailhead-ecr-repo.repository_url}:${var.ecr_image_version}"
  package_type  = "Image"
  role          = aws_iam_role.api_function_role.arn

  tracing_config {
    mode = "Active"
  }

  environment {
    variables = {
      ApplicationName        = local.ApplicationName
      EnvironmentName        = local.Environment
      ASPNETCORE_ENVIRONMENT = local.Environment
    }
  }
}
The following allows the function to be invoked from API Gateway.
resource "aws_lambda_permission" "allow_api" {
  statement_id_prefix = "ExecuteByAPI"
  action              = "lambda:InvokeFunction"
  function_name       = aws_lambda_function.api_function.function_name
  principal           = "apigateway.amazonaws.com"
  source_arn          = "${aws_apigatewayv2_api.gw.execution_arn}/*/*"
}
Now we create the assume-role policy and attach it to the new IAM role.
data "aws_iam_policy_document" "policy-document" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "api_function_role" {
  name               = "lambda_iam_role"
  assume_role_policy = data.aws_iam_policy_document.policy-document.json

  tags = {
    Name = "${var.environment}-${var.workload_name}-lambda-iam-role"
  }
}
Now we can attach the basic execution role to the Lambda.
resource "aws_iam_role_policy_attachment" "execution_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
  role       = aws_iam_role.api_function_role.name
}
The following allows us to read the secret (ARN of the connection string secret is passed into this module via the var.secret_arn variable).
data "aws_iam_policy_document" "secrets_policy_document" {
  statement {
    effect    = "Allow"
    actions   = ["secretsmanager:GetSecretValue"]
    resources = [var.secret_arn]
  }
}

resource "aws_iam_policy" "secrets_policy" {
  name   = "secrets-policy"
  policy = data.aws_iam_policy_document.secrets_policy_document.json
}

resource "aws_iam_role_policy_attachment" "secrets_attachment" {
  policy_arn = aws_iam_policy.secrets_policy.arn
  role       = aws_iam_role.api_function_role.name
}
The module also attaches S3 read-only and full RDS access; adjust these managed policies to match the services your API actually uses.
resource "aws_iam_role_policy_attachment" "s3_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
  role       = aws_iam_role.api_function_role.name
}

resource "aws_iam_role_policy_attachment" "rds_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonRDSFullAccess"
  role       = aws_iam_role.api_function_role.name
}
Finally, we allow the function to write to CloudWatch Logs and X-Ray.
data "aws_iam_policy_document" "logs_policy_document" {
  statement {
    actions   = ["logs:*"]
    resources = ["arn:aws:logs:*:*:*"]
  }

  # X-Ray write actions are not scoped to log group ARNs,
  # so they need their own statement with an unrestricted resource
  statement {
    actions = [
      "xray:PutTraceSegments",
      "xray:PutTelemetryRecords"
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "logs_policy" {
  name   = "logs-policy"
  policy = data.aws_iam_policy_document.logs_policy_document.json
}

resource "aws_iam_role_policy_attachment" "logs_attachment" {
  policy_arn = aws_iam_policy.logs_policy.arn
  role       = aws_iam_role.api_function_role.name
}
Summary
As you can see, hosting a .NET Web API on AWS Lambda is a pretty simple process. With just one line of code added to our project, an existing ASP.NET Core Web API can be hosted as an AWS Lambda Serverless API. By combining this with Terraform, you can also get your infrastructure up and running in seconds as part of your deployment or DevOps pipeline.
Contact Trailhead if you’d like to learn more about making your APIs serverless with AWS Lambda, or need help implementing it with your project.