Amazon Lambda

Develop a Lambda function on your laptop using

Reference for the Python API
If you create the client this way, you avoid the SSL certificate verification problem:

client = boto3.client('autoscaling',verify=False,region_name='eu-west-1')

but an annoying warning message is printed every time you make a call.
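One stdlib way to silence that message (a sketch: the warning text comes from urllib3 under the hood, so the filter pattern is an assumption about your urllib3 version):

```python
import warnings

# The repeated message is urllib3's InsecureRequestWarning; filtering on its
# text before creating the client hides it (verify=False still applies).
warnings.filterwarnings("ignore", message="Unverified HTTPS request")

# client = boto3.client('autoscaling', verify=False, region_name='eu-west-1')
```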

Some notes

Using an external library

ls  requests/  requests-2.11.1.dist-info/
  • then you need to zip the contents of the directory into a .zip file and upload it
keep in mind: the file name and handler function you specify at this point must match what you upload in the zip file

With pip3

pip3 install --target=. requests
Collecting requests
  Using cached
Installing collected packages: chardet, urllib3, idna, certifi, requests
Successfully installed certifi-2018.11.29 chardet-3.0.4 idna-2.8 requests-2.21.0 urllib3-1.24.1

compiling-dir> ll
total 8
drwxr-xr-x   3 giuseppe  staff    96B Jan 18 17:27 bin
drwxr-xr-x   7 giuseppe  staff   224B Jan 18 17:27 certifi
drwxr-xr-x  10 giuseppe  staff   320B Jan 18 17:27 certifi-2018.11.29.dist-info
drwxr-xr-x  43 giuseppe  staff   1.3K Jan 18 17:27 chardet
drwxr-xr-x  10 giuseppe  staff   320B Jan 18 17:27 chardet-3.0.4.dist-info
drwxr-xr-x  11 giuseppe  staff   352B Jan 18 17:27 idna
drwxr-xr-x   8 giuseppe  staff   256B Jan 18 17:27 idna-2.8.dist-info
-rw-r--r--   1 giuseppe  staff   2.1K Jan 18 17:03
drwxr-xr-x  21 giuseppe  staff   672B Jan 18 17:27 requests
drwxr-xr-x   8 giuseppe  staff   256B Jan 18 17:27 requests-2.21.0.dist-info
drwxr-xr-x  16 giuseppe  staff   512B Jan 18 17:27 urllib3
drwxr-xr-x   8 giuseppe  staff   256B Jan 18 17:27 urllib3-1.24.1.dist-info

Now that your code and all the libraries are in the same directory, you can create the .zip.

I also resolved an error with Homebrew's Python by creating the file ~/.pydistutils.cfg.
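For reference, the usual content of that file on Homebrew Python (an assumption about your setup; an empty prefix works around the distutils "must supply either home or prefix" error that `pip install --target` hits on Homebrew):

```
[install]
prefix=
```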


Using the Environment Variables in a function

This is a simple example in Python:

import os
def lambda_handler(event, context):
    return os.environ['PREFIX']
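You can exercise the same handler locally by populating os.environ yourself (the PREFIX value here is just an illustration; in AWS the variable is injected by Lambda):

```python
import os

os.environ['PREFIX'] = 'dev-'  # locally simulates the variable Lambda injects

def lambda_handler(event, context):
    return os.environ['PREFIX']

print(lambda_handler({}, None))  # dev-
```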


IAM permissions to invoke only the functions

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "lambda:InvokeFunction"
            ],
            "Resource": "*"
        }
    ]
}

LinuxAcademy course

Following the LinuxAcademy course I created my first Lambda function with Node.js.

A useful resource is the SDK for Node.js
Steps to create:

  • have an S3 bucket in the same region as the Lambda function
  • from inside Lambda, create a trigger for any object creation in that bucket
  • use the Node.js example s3-get-object
  • after the upload, you can see the logs of the action and the output in CloudWatch.

the Python environment for Lambda here is Python 2.7
some useful code

print("Memory limit: ", context.memory_limit_in_mb)
print("Request ID: ", context.aws_request_id)
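Since those attributes only exist on the context object Lambda passes in, a stub like this (with made-up values) lets you run the snippet outside AWS:

```python
class DummyContext:
    # stand-ins for the attributes the real Lambda context provides
    memory_limit_in_mb = 128
    aws_request_id = "00000000-0000-0000-0000-000000000000"

def lambda_handler(event, context):
    print("Memory limit: ", context.memory_limit_in_mb)
    print("Request ID: ", context.aws_request_id)
    return context.memory_limit_in_mb

print(lambda_handler({}, DummyContext()))
```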

Grunt, testing Node.js Lambda code locally

The repo for this project is

Emulambda, testing Python Lambda code locally

Using the repository you can test the code locally


cd ~
git clone
sudo /usr/local/bin/pip install -e emulambda

create a simple test that prints a line

  • create a basic python function
import boto3

def first_handler(event, context):
    print("Test Inside the handler")
    return "Ok return"
  • and an empty json file
cat event.json
  • run the code; with -v you will have more info
$ emulambda  firstfunction.first_handler event.json
Test Inside the handler
Ok return
$ emulambda -v  firstfunction.first_handler event.json
Test Inside the handler
Executed firstfunction.first_handler
...execution clock time:                 0ms (100ms billing bucket)
...execution peak RSS memory:            36B (36 bytes)
Ok return


Disable and enable the autoscaling Launch process

import boto3
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    howmanydisable = 0
    regionstoscan = ['us-east-1', 'eu-west-1', 'us-west-2']

    for region in regionstoscan:
        logger.info("region: " + region + "___________")
        client = boto3.client('autoscaling', region_name=region)
        response = client.describe_auto_scaling_groups()
        for group in response['AutoScalingGroups']:
            groupname = group['AutoScalingGroupName']
            # swap suspend/resume depending on whether you are disabling or re-enabling
            # response = client.suspend_processes(AutoScalingGroupName=groupname, ScalingProcesses=['Launch'])
            disableLaunch = True
            for tag in group['Tags']:
                if tag['Key'] == 'EnableDuringNight' and tag['Value'] == 'true':
                    disableLaunch = False

            if disableLaunch:
                howmanydisable = howmanydisable + 1
                response = client.resume_processes(AutoScalingGroupName=groupname, ScalingProcesses=['Launch'])
                logger.info(groupname + " will have a Launch action applied")
            logger.info('--------------------------')
    return "Total reenable: " + str(howmanydisable)
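The tag check in the loop above is pure Python, so it can be tested in isolation (EnableDuringNight is the tag key used by the function):

```python
def should_disable_launch(tags):
    # a group keeps its Launch process only if tagged EnableDuringNight=true
    return not any(t['Key'] == 'EnableDuringNight' and t['Value'] == 'true'
                   for t in tags)

print(should_disable_launch([{'Key': 'EnableDuringNight', 'Value': 'true'}]))  # False
print(should_disable_launch([{'Key': 'Name', 'Value': 'web'}]))                # True
```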

call a function with parameters from the command line (the payload JSON must be quoted for the shell)

aws lambda invoke --invocation-type RequestResponse --function-name myfunctioname --region eu-west-2 --log-type Tail --payload '{"par1":"myfirstparameter", "par2": "mysecondparameter"}' outputfile.txt

inside the function
import time
import requests
import simplejson as json
from requests.auth import HTTPDigestAuth

def lambda_handler(event, context):
    parone = event['par1']
    partwo = event['par2']
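A complete local sketch of that handler (Lambda parses the --payload JSON into the event dict before calling you, so locally you can pass a plain dict):

```python
def lambda_handler(event, context):
    # event already holds the parsed payload keys
    parone = event['par1']
    partwo = event['par2']
    return "got " + parone + " and " + partwo

print(lambda_handler({"par1": "myfirstparameter", "par2": "mysecondparameter"}, None))
```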

CodeCommit triggering Lambda

this is the event

{
  "Records": [
    {
      "eventId": "5a824061-17ca-46a9-bbf9-114edeadbeef",
      "eventVersion": "1.0",
      "eventTime": "2016-01-01T23:59:59.000+0000",
      "eventTriggerName": "my-trigger",
      "eventPartNumber": 1,
      "codecommit": {
        "references": [
          {
            "commit": "5c4ef1049f1d27deadbeeff313e0730018be182b",
            "ref": "refs/heads/master"
          }
        ]
      },
      "eventName": "TriggerEventTest",
      "eventTriggerConfigId": "5a824061-17ca-46a9-bbf9-114edeadbeef",
      "eventSourceARN": "arn:aws:codecommit:us-east-1:0123456789:my-repo",
      "userIdentityARN": "arn:aws:iam::0123456789:root",
      "eventSource": "aws:codecommit",
      "awsRegion": "us-east-1",
      "eventTotalParts": 1
    }
  ]
}
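A small sketch showing how a handler can pull the commit and ref out of that event shape:

```python
def lambda_handler(event, context):
    # the trigger event nests the pushed reference under Records -> codecommit
    reference = event['Records'][0]['codecommit']['references'][0]
    return reference['ref'] + " @ " + reference['commit']

sample = {"Records": [{"codecommit": {"references": [
    {"commit": "5c4ef1049f1d27deadbeeff313e0730018be182b",
     "ref": "refs/heads/master"}]}}]}
print(lambda_handler(sample, None))
```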

IAM permissions so your Lambda function can write to CloudWatch

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:us-east-1:YOUR_ACCOUNT_ID:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:YOUR_ACCOUNT_ID:log-group:/aws/lambda/*:*"
            ]
        }
    ]
}

total volumes size

import boto3
def lambda_handler(event, context):
    client = boto3.client('ec2', region_name="eu-west-1")
    response = client.describe_volumes()
    vol = response['Volumes']
    total = 0
    for v in vol :
        total = total + v['Size']
    return total
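Note that describe_volumes is paginated, so with many volumes a boto3 paginator is safer; the summing itself is a pure function you can check locally (the sample sizes below are made up):

```python
def total_volume_size(volumes):
    # same accumulation as in the handler, factored out for local testing
    total = 0
    for v in volumes:
        total = total + v['Size']
    return total

print(total_volume_size([{'Size': 8}, {'Size': 100}, {'Size': 30}]))  # 138
```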

Run Lambda function regularly like cron
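One common approach (a sketch, assuming the aws CLI and an existing function named myfunctioname; the rule name and the <...> ARNs are placeholders) is a CloudWatch Events schedule rule targeting the function:

```
aws events put-rule --name my-cron-rule --schedule-expression "rate(5 minutes)"
aws lambda add-permission --function-name myfunctioname --statement-id my-cron-rule \
    --action lambda:InvokeFunction --principal --source-arn <rule-arn>
aws events put-targets --rule my-cron-rule --targets "Id"="1","Arn"="<lambda-arn>"
```

Cron-style expressions such as "cron(0 7 * * ? *)" also work in --schedule-expression.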


Keep an EC2 machine always up

  • no external libraries needed
  • increase the timeout
  • 2 environment variables defined (REGION and EC2ID)
import boto3
import time
import logging
import os

logger = logging.getLogger()
logger.setLevel(logging.INFO)

region = os.environ['REGION']
ec2id = os.environ['EC2ID']

def lambda_handler(event, context):

    client = boto3.client('ec2', region_name=region)
    response = client.describe_instances(InstanceIds=[ec2id])

    status = response['Reservations'][0]['Instances'][0]['State']['Name']
    responsestart = "no action performed"
    logger.info("Status is: " + status)
    if status == 'running':
        logger.info("running system, nothing to do")
    if status == 'pending':
        logger.info("pending system, nothing to do")
    if status == 'stopped':
        logger.info("stopped system, run a start action")
        responsestart = client.start_instances(InstanceIds=[ec2id])
    if status == 'stopping':
        logger.info("wait a minute and run a start action")
        time.sleep(60)
        responsestart = client.start_instances(InstanceIds=[ec2id])
    return responsestart
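The status branch above reduces to a small pure function you can check without AWS:

```python
def needs_start(status):
    # only stopped (or stopping, after a wait) instances get start_instances
    return status in ('stopped', 'stopping')

print(needs_start('running'))  # False
print(needs_start('stopped'))  # True
```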

Policy to apply to the role

  • EC2 full access
  • allow CloudWatch logging

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }
    ]
}
Unless otherwise stated, the content of this page is licensed under the Creative Commons Attribution-ShareAlike 3.0 License