What are Lambda Layers?
Lambda Layers are a way to share code and external dependencies between Lambdas. Layers were introduced at re:Invent 2018 and remain a less talked-about feature, but they deserve a closer look if you're going to be using Lambda.
Layers are code, data, or dependencies packaged separately for use within your Lambda functions. Attached layers are unpacked into the deployed function's /opt directory at runtime. Layers are also versioned by default, which lets you make breaking changes in a new layer version and reference that version only when you're ready.
The benefits of using Lambda Layers are as follows:
1. Smaller deployments. Each Lambda function can contain only the code specific to the action it performs.
2. A single package for all shared dependencies. Instead of bundling shared dependencies into every Lambda function, create a layer once and reuse it across functions.
3. Easier code updates. If common dependencies are managed in layers, updating a dependency is easy: you only need to update the layer in which the dependency is packaged.
Lambda Layers can be used in the following different ways:
1. Create a custom layer: on the Lambda dashboard, select Layers and add a custom layer. This lets you upload a .zip file; alternatively, you can use the CLI to create a layer. If the layer is to contain all your dependencies, ensure that all of them are included in the zip. Refer to this link for directions on packaging.
2. Use an existing layer using ARN: Specify an ARN to use a layer that’s shared by another account, or a layer that doesn’t match your function’s runtime.
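The zip layout matters: for a Python runtime, anything under the python/ prefix inside the archive is unpacked to /opt/python, which Lambda puts on the import path. Here is a minimal sketch of building such an archive with the standard library; the module name shared_utils is a hypothetical placeholder for whatever code you want to share.

```python
import io
import zipfile

# Build a layer archive in memory. For a Python runtime, files under the
# "python/" prefix are unpacked to /opt/python, which is on sys.path, so
# modules placed there can be imported directly by your functions.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    # "shared_utils" is a hypothetical module shared across functions.
    archive.writestr(
        "python/shared_utils.py",
        "def greet():\n    return 'hello from the layer'\n",
    )

# Inspect the archive layout before uploading it.
with zipfile.ZipFile(io.BytesIO(buffer.getvalue())) as archive:
    print(archive.namelist())
```

From here, the zip bytes could be uploaded through the console or the CLI/SDK; the publish step returns a versioned ARN that your functions reference.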
Using Lambda Layers
Let's say that after the dense layer named dense_layer_3 we'd like to do some sort of operation on the tensor, such as adding the value 2 to each element. How can we do that? None of the existing layers does this, so we'll have to build a new layer ourselves. Fortunately, the Lambda layer exists for precisely that purpose. Let's discuss how to use it.
Start by building the function that will do the operation you want. In this case, a function named custom_layer is created as follows. It just accepts the input tensor(s) and returns another tensor as output. If more than one tensor is to be passed to the function, then they will be passed as a list.
In this example just a single tensor is fed as input, and 2 is added to each element in the input tensor.
def custom_layer(tensor):
    return tensor + 2
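Because custom_layer is ordinary Python operating element-wise, we can sanity-check it outside Keras on a NumPy array before wiring it into the network (a quick sketch; broadcasting adds 2 to every element):

```python
import numpy as np

def custom_layer(tensor):
    return tensor + 2

x = np.array([1.0, 5.0, -2.0])
result = custom_layer(x)  # each element increased by 2
print(result)
```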
After building the function that defines the operation, we next create the lambda layer using the Lambda class, as in the following line. In this case, only one tensor is fed to the custom_layer function, because the lambda layer is called on the single tensor returned by the dense layer named dense_layer_3.
lambda_layer = tensorflow.keras.layers.Lambda(custom_layer, name="lambda_layer")(dense_layer_3)
Here is the code that builds the full network after using the lambda layer.
import tensorflow

input_layer = tensorflow.keras.layers.Input(shape=(784,), name="input_layer")
dense_layer_1 = tensorflow.keras.layers.Dense(units=500, name="dense_layer_1")(input_layer)
activ_layer_1 = tensorflow.keras.layers.ReLU(name="relu_layer_1")(dense_layer_1)
dense_layer_2 = tensorflow.keras.layers.Dense(units=250, name="dense_layer_2")(activ_layer_1)
activ_layer_2 = tensorflow.keras.layers.ReLU(name="relu_layer_2")(dense_layer_2)
dense_layer_3 = tensorflow.keras.layers.Dense(units=20, name="dense_layer_3")(activ_layer_2)

def custom_layer(tensor):
    return tensor + 2

lambda_layer = tensorflow.keras.layers.Lambda(custom_layer, name="lambda_layer")(dense_layer_3)
activ_layer_3 = tensorflow.keras.layers.ReLU(name="relu_layer_3")(lambda_layer)
dense_layer_4 = tensorflow.keras.layers.Dense(units=10, name="dense_layer_4")(activ_layer_3)
output_layer = tensorflow.keras.layers.Softmax(name="output_layer")(dense_layer_4)

model = tensorflow.keras.models.Model(input_layer, output_layer, name="model")
In order to see the tensor before and after it passes through the lambda layer, we'll create two new models in addition to the previous one: before_lambda_model and after_lambda_model. Both models use the input layer as their input, but their outputs differ. The before_lambda_model returns the output of dense_layer_3, the layer immediately before the lambda layer, while after_lambda_model returns the output of the lambda layer named lambda_layer. This lets us see the input before and the output after applying the lambda layer.
before_lambda_model = tensorflow.keras.models.Model(input_layer, dense_layer_3, name="before_lambda_model")
after_lambda_model = tensorflow.keras.models.Model(input_layer, lambda_layer, name="after_lambda_model")
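To confirm the effect end to end, we can run both models on the same random batch; the output of after_lambda_model should equal the output of before_lambda_model plus 2. The sketch below rebuilds a minimal version of the graph so it stands on its own, and assumes TensorFlow is installed; the weights are untrained, so the actual values are random.

```python
import numpy
import tensorflow

# Minimal rebuild of the relevant part of the network above.
input_layer = tensorflow.keras.layers.Input(shape=(784,), name="input_layer")
dense_layer_3 = tensorflow.keras.layers.Dense(units=20, name="dense_layer_3")(input_layer)
lambda_layer = tensorflow.keras.layers.Lambda(lambda t: t + 2, name="lambda_layer")(dense_layer_3)

before_lambda_model = tensorflow.keras.models.Model(input_layer, dense_layer_3, name="before_lambda_model")
after_lambda_model = tensorflow.keras.models.Model(input_layer, lambda_layer, name="after_lambda_model")

# Feed the same random batch to both models and compare.
x = numpy.random.random(size=(1, 784)).astype("float32")
before = before_lambda_model.predict(x)
after = after_lambda_model.predict(x)

# The lambda layer simply added 2 to every element of dense_layer_3's output.
print(numpy.allclose(after, before + 2))
```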