The new command line interface is here.

To find all the commands, start from here.


Install on Windows using Chocolatey

You need an administrative PowerShell. If you don't want to reload PowerShell, the last command updates the PATH variable.

iex ((New-Object System.Net.WebClient).DownloadString(''))
choco install awscli -y
$env:path = "C:\Program Files\Amazon\AWSCLI\;$env:path"

Install on Linux

I decided to install using pip:

sudo apt-get install python-pip
sudo pip install awscli
# test if works
aws help

Configure on Linux

Instructions are here.

All the configuration will be in ~/.aws/config. For a quick start, run:

aws configure

After that, I put this in my config file for the default profile:

region = us-west-2
output = text

There is the option to have multiple profiles.
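A sketch of what such a config file looks like with a second, named profile; the profile name "myproject" and its settings are made-up examples (the demo writes to /tmp so it doesn't touch your real ~/.aws/config):

```shell
# Write a demo config with a default and a named profile
# ("myproject" and its region are hypothetical examples):
cat > /tmp/aws_config_demo <<'EOF'
[default]
region = us-west-2
output = text

[profile myproject]
region = eu-west-1
output = json
EOF
# A named profile is then selected per command with --profile, e.g.:
#   aws s3 ls --profile myproject
grep '^\[profile myproject\]' /tmp/aws_config_demo
# prints: [profile myproject]
```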
It is convenient to use auto-completion, so I run this for my bash:

complete -C aws_completer aws


It is very useful for speeding up your AWS CLI usage.
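To make the completion permanent for future shells, the same line can be appended to ~/.bashrc (assuming your bash reads ~/.bashrc on startup); a minimal sketch:

```shell
# Append the completion setup to ~/.bashrc, but only if it's not already there:
grep -qxF 'complete -C aws_completer aws' ~/.bashrc 2>/dev/null ||
  echo 'complete -C aws_completer aws' >> ~/.bashrc
```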

The aws-shell installation doesn't work on the Amazon Linux AMI; it worked well on my Ubuntu Server 15.04:

sudo pip install aws-shell

Proxy problems during installation

If you have a proxy in the middle, you can run into this kind of problem:

# pip install awscli
Downloading/unpacking awscli
  Cannot fetch index base URL
  Could not find any downloads that satisfy the requirement awscli
Cleaning up...
No distributions at all found for awscli
Storing debug log for failure in /root/.pip/pip.log

This is because there is a self-signed certificate in the middle, so you can work around it like this:
pip install --index-url= --trusted-host  awscli

But pip 1.5 doesn't have the --trusted-host option, and updating to the latest pip always tries to access the HTTPS site.

The best solution is to import the certificate into your trusted list:

  • get the full list of certificates with this command:
openssl s_client -showcerts -connect

Extract only the certificate block, which you can find between two marker lines, and copy it into a .crt file in /usr/local/share/ca-certificates, for example newfile.crt. Every certificate needs to start with -----BEGIN CERTIFICATE----- and end with -----END CERTIFICATE-----.
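The extraction between the two markers can be scripted with sed; a sketch that works on a saved copy of the openssl output (the sample input below is a made-up stand-in for the real s_client output):

```shell
# Fake s_client output standing in for the real thing (the cert body is made up):
printf 'depth=0 CN = proxy\n-----BEGIN CERTIFICATE-----\nMIIBexample\n-----END CERTIFICATE-----\nverify return:1\n' > s_client.out
# Keep only the lines between (and including) the BEGIN/END markers:
sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' s_client.out > newfile.crt
cat newfile.crt
# prints the three certificate lines, without the surrounding s_client chatter
```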

Run the command to update the certificates:
# update-ca-certificates
Updating certificates in /etc/ssl/certs...
WARNING: Skipping duplicate certificate py509.pem
WARNING: Skipping duplicate certificate py509.pem
1 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...

The error will disappear. For Red Hat there is a similar procedure.

SSL error problem

If you have this error:

$ aws s3 ls

[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

probably you have a proxy in the middle, and you need to use the option to skip SSL verification, --no-verify-ssl:
aws s3 ls --no-verify-ssl
/usr/local/lib/python2.7/dist-packages/botocore/vendored/requests/packages/urllib3/ InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See:

A warning will be shown, but it will work.
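Instead of disabling verification, the CLI can also be pointed at the imported certificate through the AWS_CA_BUNDLE environment variable; a sketch (the path assumes the newfile.crt placed in /usr/local/share/ca-certificates earlier):

```shell
# Tell the AWS CLI which CA bundle to trust instead of skipping verification
# (the path is an example; use wherever you stored the proxy's certificate):
export AWS_CA_BUNDLE=/usr/local/share/ca-certificates/newfile.crt
# Subsequent aws commands, e.g. `aws s3 ls`, will verify against this bundle.
```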

This guide instead shows how to resolve the problem for Java.





RDS CLI


Advanced CLI use

aws rds describe-db-snapshots --query 'DBSnapshots[*].[SnapshotCreateTime,DBSnapshotIdentifier]' --snapshot-type manual --output text --db-instance-identifier $DBNAME

The AWS CLI has a couple of nifty options that might make it easier to parse the data from describe-db-snapshots and other commands when determining which snapshots to remove based on your needs. For example there is the option:

--output <json, text or table>
which can be used to change the format of the returned data; with text output you can use grep or awk to accomplish the same task. By default the output is JSON, which generally requires a JSON parser to accurately extract the information from the returned output.
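For instance, the tab-separated text output can be filtered locally with awk; a sketch using made-up snapshot rows in place of real describe-db-snapshots output:

```shell
# Two fake rows in the shape of `--output text` (tab-separated columns):
# DBSnapshotIdentifier  SnapshotCreateTime  SnapshotType
printf 'rds:db-2014-07-05-01-10\t2014-07-05T01:10:35.140Z\tautomated\ntest\t2014-06-02T22:59:48.701Z\tmanual\n' |
  awk -F'\t' '$3 == "manual" { print $1 }'
# prints: test
```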

The other cool option is:
--query <JMESPath query>
This option can be used to return only the selected fields from the output (as there may be data returned that you don't need for a specific script); it can also be used to change the ordering of the output, for example:
aws rds describe-db-snapshots --output text --query 'DBSnapshots[*].[DBSnapshotIdentifier,InstanceCreateTime,SnapshotType,DBInstanceIdentifier]'
will return only:
rds:db-2014-07-05-01-10 2014-07-05T01:10:35.140Z automated db
test 2014-06-02T22:59:48.701Z manual db
instead of returning all the data from all the possible fields that you would see if the --query option weren't specified.
The JMESPath query syntax can be a bit tricky, but the documentation at the following link should provide some help if you wish to try additional changes to the data returned:

CloudFormation CLI



List all the stacks in CREATE_COMPLETE state:

aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE

Create a stack:

aws cloudformation create-stack --stack-name mystackname --template-body file:////home//giuseppe//work…

Delete a stack:

 aws cloudformation delete-stack --stack-name mystackname

Info on the stacks using the command line:

aws --region us-west-2 cloudformation list-stacks | grep 'Status":' | sort | uniq -c | sort -rn
    470             "StackStatus": "DELETE_COMPLETE",
     89             "StackStatus": "UPDATE_COMPLETE",
     14             "StackStatus": "CREATE_COMPLETE",
      3             "StackStatus": "DELETE_FAILED",

Route 53 CLI

List the records in a zone:

aws route53 list-resource-record-sets --hosted-zone-id A32XVQKQML6W55

Add a new A record to a zone

Create a JSON file like this (I chose the name used in the documentation, change-resource-record-sets.json):

  "Comment": "optional comment about the changes in this change batch request",
  "Changes": [
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "",
        "Type": "A",
        "TTL": 1800,
        "ResourceRecords": [
            "Value": ""

and run the command:
aws route53 change-resource-record-sets --hosted-zone-id A32XVQKQML6W55 --change-batch file://change-resource-record-sets.json
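Since a malformed batch file makes the command fail, it can help to validate the JSON locally first; a sketch (the file content mirrors the batch above, with the record name and value left empty as placeholders):

```shell
cat > change-resource-record-sets.json <<'EOF'
{
  "Comment": "optional comment about the changes in this change batch request",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "",
        "Type": "A",
        "TTL": 1800,
        "ResourceRecords": [
          { "Value": "" }
        ]
      }
    }
  ]
}
EOF
# python3 ships a JSON validator; it exits non-zero on malformed input:
python3 -m json.tool change-resource-record-sets.json > /dev/null && echo "valid JSON"
# prints: valid JSON
```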

ElasticBeanstalk CLI


List all the environments:

 aws elasticbeanstalk describe-environments --query 'Environments[*].EnvironmentName'

Change the MinSize value of the autoscaling group:

aws elasticbeanstalk update-environment --environment-name Default-Environment --option-settings Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=1

S3 CLI


Too many requests

If during a run of many creation requests at the same time you obtain an error similar to "'Request' limit on EC2 service",
it is because you need to introduce some kind of delay between the requests.

This is caused by the following:
As we quickly launch a large number of EB environments via a number of CFN templates in a loop, which collectively creates many API requests in a short time to various EC2 resources (instances, volumes, IPs/EIPs, ASGs, security groups, etc.), there is a high probability of hitting the API limit (whatever that may be at any given time) for a given service.

These global services (or API endpoints) are shared by all customers, so the load/limit per customer varies at a given time.

There is no fixed limit such as 10 instances in one minute or 100 instances in 10 minutes, but there are shared limits such as max API calls per second or max instance creations per call. This is so that:

a) it prevents customers from accidentally launching too many instances via automated API calls, which may lead to huge unwanted charges;
b) it allows AWS to maintain the performance and availability of AWS services like EC2/EB.
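A minimal way to add that delay is simply to sleep between the creation calls; a sketch where create_one is a hypothetical stand-in for the real creation command (e.g. aws cloudformation create-stack):

```shell
# create_one is a made-up placeholder for your real creation call,
# e.g. `aws cloudformation create-stack ...`:
create_one() { echo "creating $1"; }

for name in stack-1 stack-2 stack-3; do
  create_one "$name"
  sleep 1   # fixed pause between API calls to stay under the shared rate limit
done
```

A fixed pause is the simplest option; exponential backoff on throttling errors is the more robust variant.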


Two commands to accept a VPC peering connection generated by another tool/component:

aws ec2 accept-vpc-peering-connection --region=eu-west-1 --vpc-peering-connection-id $(aws ec2 describe-vpc-peering-connections --region=eu-west-1 --filters Name=status-code,Values=pending-acceptance --query='VpcPeeringConnections[0].VpcPeeringConnectionId' --output=text)

The inner command extracts the ID using the filter for pending requests; the outer one performs the approval.

Extract the account ID

Don't use existing resources to extract it; that approach was fine before AWS created a specific command for this:

aws sts get-caller-identity --output text --query Account

Other, less correct ways:

  • using the default security group that is automatically created:
aws ec2 describe-security-groups --group-names 'Default' --query 'SecurityGroups[0].OwnerId' --output text

In this post there are other ways to do that in an original way.

Download all the RDS logs of one instance

for lib in $(aws rds --region eu-west-1 describe-db-log-files --db-instance-identifier myrdsdb --query 'DescribeDBLogFiles[*].LogFileName' --output text); do
    echo ----
    touch "$lib"
    echo "downloading $lib"
    aws rds --region eu-west-1 download-db-log-file-portion --db-instance-identifier myrdsdb --log-file-name "$lib" --starting-token 0 --output text > "$lib"
done
Unless otherwise stated, the content of this page is licensed under the Creative Commons Attribution-ShareAlike 3.0 License.