AWS Certified Solutions Architect Associate

Amazon Web Services Essentials

Security on AWS

from us

  • password and key rotation
  • Trusted Advisor to get an analysis of the situation
  • hypervisor isolation: machines can't see each other, even if they run on the same hypervisor

from amazon

  • protection against DDoS and spoofed traffic (MAC address checks)

shared

  • port scanning is not permitted, even within your own environment; you need to ask AWS and they grant you permission on a one-time basis

Understanding AWS Global Infrastructure

Edge Locations are the sites used by CloudFront to build the CDN; we can choose a global distribution or one restricted, for example, to the US only.
There are lots of edge locations around the world, external to the regions and availability zones.

CSA Certified Solution Architect

Overview Of Required Service

Introduction To Amazon Web Services Part 1

Terminology


  • Proactive Cyclic Scaling: scaling during a fixed, recurring time period (see the sketch after this list).
  • Proactive Event-Based Scaling: scaling for known events, like elections or football games.
  • Auto Scaling based on demand: CPU, memory, etc.
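As a sketch of proactive cyclic scaling, a scheduled Auto Scaling action can raise capacity on a recurring schedule; the group name, schedule, and sizes below are hypothetical:

# scale the hypothetical group "web-asg" up to 6 instances every day at 08:00 UTC
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name web-asg \
  --scheduled-action-name morning-scale-up \
  --recurrence "0 8 * * *" \
  --min-size 2 --max-size 10 --desired-capacity 6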

S3 lifecycle policies: you can transition a file to Glacier or delete it. You pay for every version of a file, and a file can have unlimited versions.

kinds of instances

  • on demand
  • spot instance: for workloads that can be switched off after the process finishes
  • reserved instance

Introduction To Amazon Web Services Part 2

Route 53

can balance the traffic in 3 ways

  • weighted: e.g. 1/4 of the traffic to www1.mydomain.com and 3/4 to www2.mydomain.com
  • latency: route to the zone closest to the requester
  • failover: don't route to failed zones

DynamoDB

can integrate with services like Elastic MapReduce (Hadoop)

SWF Simple Workflow Service

  • runs and scales background jobs that have parallel or sequential steps. You can think of SWF as a fully-managed state tracker and task coordinator in the Cloud.
  • can reiterate steps

If your app's steps take more than 500 milliseconds to complete, you need to track the state of processing, and you need to recover or retry if a task fails, then SWF is a good fit.

SQS Simple Queue Service

  • new EC2 instances ask the queue what to do.
  • message size up to 256 KB

Amazon S3 (Simple Storage Service)

S3 essentials

  • objects from 1 byte to 5 TB
  • objects contain metadata
  • a bucket name must be unique across all of S3, worldwide

Storage types

  • S3 durability: 99.999999999% (11 nines)
  • RRS (Reduced Redundancy Storage): 99.99% durability, for files that can be regenerated
  • glacier

Versioning

  • stores every version of an object (writes and deletes)
  • once versioning is enabled it can't be disabled, only suspended
  • you can't combine versioning and lifecycle policies on the same bucket

permissions

  • objects are private by default
  • you can use encryption, either Amazon-managed or with your own encryption key
  • multipart upload is recommended for large objects and required for objects over 5 GB

multiregion

  • every object is stored across multiple AZs
  • every region except us-east-1 allows a read immediately after a write (read-after-write consistency for new objects)

Getting Started With S3 And RRS Storage Class

  • US Standard is the only region that does not provide read-after-write consistency
  • when you upload you can select the RRS (Reduced Redundancy Storage) option, and/or encryption
  • you can add metadata (not encrypted) that describes the object; do not store private data in metadata
  • if you use RRS you will see it in the "Storage Class" column and will pay a bit less
  • in the bucket properties we need to add a notification, so that if we lose a file stored in RRS we receive a notification. To do this we need to create an SNS notification
  • we can create a notification by email, HTTP, SMS, or SQS. If we want to automate the process it's best to use SQS: our process takes the message and uploads the object to the bucket again (see the sketch after this list)
  • key names (object names) can be at most 1024 bytes long
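A minimal sketch of wiring that notification by CLI; the bucket name and SNS topic ARN are placeholders, and the event type for RRS object loss is s3:ReducedRedundancyLostObject:

# notify an SNS topic when RRS loses an object (bucket and ARN are placeholders)
aws s3api put-bucket-notification-configuration \
  --bucket mybucketname \
  --notification-configuration '{
    "TopicConfigurations": [{
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:rrs-lost-object",
      "Events": ["s3:ReducedRedundancyLostObject"]
    }]
  }'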

S3 Bucket/Object Versioning And LifeCycle Policies

  • once versioning is enabled we can't disable it, we can only suspend it
  • if we delete the delete marker, the object is restored
  • Amazon Glacier is meant for archiving, not for backups that need to be accessed easily
  • we can archive objects to Glacier and/or delete them as part of the versioning process. For example, we can archive the previous version of a versioned file to Glacier 60 days after creation, and delete all versions older than that (see the sketch below)
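A minimal sketch of such a policy with the CLI; the bucket name, rule ID, and day counts are illustrative:

# lifecycle.json: move noncurrent versions to Glacier after 60 days, expire later
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "archive-then-expire-old-versions",
    "Status": "Enabled",
    "Filter": {},
    "NoncurrentVersionTransitions": [
      { "NoncurrentDays": 60, "StorageClass": "GLACIER" }
    ],
    "NoncurrentVersionExpiration": { "NoncurrentDays": 120 }
  }]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
  --bucket mybucketname --lifecycle-configuration file://lifecycle.json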

command line

pip install awscli

aws s3 ls s3://mybucketname

aws s3 cp nameofafile s3://mybucket

we can exclude or include certain kinds of files with the --exclude and --include options
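For example, copying only the JPEGs (the paths are placeholders):

# copy only .jpg files from the current directory to the bucket
aws s3 cp . s3://mybucket --recursive --exclude "*" --include "*.jpg"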

Architecting Applications With CloudFront

  • if the object is already cached, retrieval by a customer is faster
  • the maximum cache time is 24 hours at an edge location

but this is bad for updates, because an object refreshes only every 24 hours

  • this can be resolved by uploading the object with a different name
  • it also depends on the kind of object: some objects change often, others rarely change
  • you can mix approaches using Route 53 and CloudFront

creation

two kinds
RTMP: for streaming
web: typical web content

  • origin domain name: can be an S3 bucket or an EC2 instance
  • for S3 we should think about where the majority of our customers are
  • origin ID: it is only a label
  • restrict bucket access: if set to yes, only CloudFront can distribute the content
  • origin access identity
  • grant read permissions: updates the permissions in the bucket to allow distribution; it's simpler than changing the bucket by hand
  • with the path pattern we can distribute, for example, only the .jpg files from one origin and the .png files from another origin
  • TTL is expressed in seconds and is the time an object is kept in cache
  • Smooth Streaming is Microsoft's streaming protocol
  • restrict viewer access: you need to add more code to permit access to these resources (signed URLs)
  • CNAMEs: cloudfront.mydomain.com
  • default root object: index.html
  • propagation takes about 15 minutes
  • it's more convenient to use an alias than the name directly
  • we can add more origins, for example a Beanstalk environment, and choose patterns that serve the .jpg files from S3 and the .png files from Beanstalk

Route 53

Route53 and DNS Failover

  • can point to an instance or to an elastic load balancer
  • we can use failover for applications or databases
  • you can also have one instance in AWS and one in your own datacenter

steps

  • import the domains
  • create a health check, for example an HTTP check against an IP address
  • create a record set
  • use a TTL of 60 seconds (one minute), because we don't want the DNS answer cached for a long time; select Primary for the instance
  • the set ID is useful to identify the datacenter; you can put the region or the name of your datacenter
  • the secondary record should not have a health check (see the sketch after this list)
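A minimal sketch of the primary failover record with the CLI; the hosted zone ID, health check ID, and IP address are placeholders:

# primary failover record for www.mydomain.com (all IDs are placeholders)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.mydomain.com",
        "Type": "A",
        "SetIdentifier": "primary-us-east-1",
        "Failover": "PRIMARY",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.10" }],
        "HealthCheckId": "abcd1234-example"
      }
    }]
  }'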

Latency Based Routing

  • create a CloudFormation template to duplicate our infrastructure across multiple regions

Weighted Routing Policies In Route53

  • the weights are relative: if you give both records a weight of 50, half the traffic goes to one instance and the other half to the other
  • for example, you can start testing a new infrastructure with a small share of real users

EC2

EC2-Classic Essentials

Purchase options

  • spot instances: you can bid on unused Amazon EC2 compute capacity, called Spot Instances, and lower your costs; the Spot Price for these instances fluctuates periodically depending on the supply of and demand for Spot Instance capacity
  • reserved instances: guarantee us access to the capacity; the reservation ensures the same amount of resources is available to us even under contention in the AZ
  • on-demand instances: regular hourly price

Service limits by default: 20 EC2 instances and 5 EIPs

EBS Elastic Block Store

For an EC2 instance that hosts a database it is good to add the Provisioned IOPS option. To convert IOPS into throughput, IOPS are measured in 16 KB chunks:
IOPS × 16 KB / 1024 = MB transferred per second
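For example, a volume provisioned with 4,000 IOPS moves 4000 × 16 KB = 64,000 KB/s, that is 64000 / 1024 = 62.5 MB per second.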

  • from 1 GB to 1 TB
  • EBS volumes can be combined with RAID 0 for performance striping (note that RAID 0 by itself adds no redundancy)
  • pre-warm EBS volumes: it is necessary because on the first mount of a volume we can lose from 5 to 50% of performance. On Linux we can do this with the dd command
  • snapshots: are good for moving data to another region and for managing storage. Taking a snapshot decreases performance

Launching An EC2 Instance Into The Default VPC

Selecting And Building EC2 Instances

  • CloudWatch monitors every 5 minutes; if you want every minute you need to pay an extra cost
  • in the Advanced Details I can add a bash script that runs when the instance starts
  • if you lose the key pair of an Amazon machine you need to create an image of that machine and recreate the machine from it
  • the public IP of an instance changes when you stop and start it, and in EC2-Classic the private IP changes too

EBS Volumes And Snapshots

  • the AZ of the volume must be the same as that of the EC2 instance
  • we can pre-warm by touching every single block of the device: dd if=/dev/zero of=/dev/xvdf bs=1M
  • I can change the volume type by taking a snapshot and recreating the volume from it; a snapshot has the durability of S3 (see the sketch below)
  • a new snapshot of the same disk stores only the differences, which saves money on backups
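A sketch of the snapshot-and-recreate flow with the CLI; the IDs, AZ, and sizes are placeholders:

# snapshot the existing volume
aws ec2 create-snapshot --volume-id vol-0abc123 --description "before type change"
# recreate it as a Provisioned IOPS volume in the same AZ as the instance
aws ec2 create-volume --snapshot-id snap-0def456 \
  --availability-zone us-east-1a --volume-type io1 --iops 4000 --size 100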

Cloud-init And User Data

Working With EC2-VPC Security Groups

  • by default, in accounts created from December 2013 onward, the default is to use VPC and not EC2-Classic
  • max 5 security groups per instance (in EC2-Classic it was unlimited)
  • in EC2-Classic you need to recreate the machine to change its security group; in a VPC you can change it on the fly

CloudWatch And EC2

  • to get more detailed monitoring you need to install an agent inside your machine, because the default monitoring sits outside the machine
  • the only thing you can change is the frequency, from 5 minutes down to 1 minute

What Are EC2 Placement Groups?

  • a cluster of instances
  • instance types suited for clustering
  • low latency between the instances

Controlling EC2 With Amazon Command Line Interface

there are two versions of the command line tool; you need to install it on your Mac/Linux machine

apt-get update
apt-get install python-pip
pip install awscli
aws configure

it is not good practice to store your credentials inside the machine (prefer IAM roles)

the default output format is JSON

aws ec2 describe-instances

aws ec2 run-instances --image-id ami-56965jy --count 2
count is the number of instances that I want

aws ec2 terminate-instances --instance-ids id-554jg9 id-8u8tudl

aws ec2 start-instances --instance-ids i-gt59jgtgt

Instance Metadata And User Data

it is possible to access the metadata from inside the instance

curl http://169.254.169.254/latest/meta-data/

we get a list of entries, for example: ami-id, instance-id, etc.

with the command
curl http://169.254.169.254/latest/meta-data/ami-id

I obtain the value of my ami-id

curl http://169.254.169.254/latest/user-data
this returns the script executed at the first boot of the instance (note that user data lives under /latest/user-data, not under meta-data/).
it is possible to run a script via the user data function and let that script execute commands using the machine info obtained with the curl commands above.
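A minimal user-data sketch that uses the metadata service; the web-root path is an assumption:

#!/bin/bash
# hypothetical user-data script: fetch this instance's ID from the
# metadata service and publish it in the web root
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
echo "running on $INSTANCE_ID" > /var/www/html/whoami.txt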

RDS

RDS Essentials

  • min 5 GB, max 3 TB of storage per instance
  • if you enable Multi-AZ and your AZ becomes unavailable, the DNS name switches to the replica instance in the other AZ
  • InnoDB is the only MySQL engine that supports a good (automated) backup system

it is possible to use a read replica to do all the heavy operations, like:

  • data warehousing
  • import/export data
  • sharding
  • rebuilding indexes

we can promote our read replica to primary (see the sketch below)
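Promotion is a single CLI call; the instance identifier is a placeholder:

# promote the read replica to a standalone primary
aws rds promote-read-replica --db-instance-identifier mydb-replica-1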

it is possible to enable many kinds of event notifications, like:

  • snapshots
  • parameter group changes
  • options changes
  • security group changes

CloudWatch can monitor

  • cpu/memory/swap
  • log disk usage
  • read/write iops
  • read replica lag

By default, customers are allowed to have up to a total of 40 Amazon RDS DB instances. Of those 40, up to 10 can be Oracle or SQL Server

IAM (Identity Access Management) Essentials

  • you can set up a password rotation policy
  • integrates with your Active Directory

roles

  • don't pass credentials to an EC2 machine; use a role instead
  • you can elevate a user to a role for a temporary time

VPC

differences between the default VPC and a non-default VPC

  • non-default VPCs give instances private IPs but no public IPs
  • non-default VPCs don't have an internet gateway attached by default

vpc peering

  • a direct network route between different VPCs

vpc scenario

  • VPC with a public subnet only; this is the default VPC setting

Building A Non-Default VPC

Tenancy

  • default: our instances share hardware with others
  • dedicated: our instances run on dedicated hardware. We can select, instance by instance, whether we want shared hardware or not

Subnets

  • name each subnet with its IP range, so we can immediately see the configuration
  • all subnets have routes to all the others
  • when you create a subnet it can't reach the internet or be reached from it; to allow that you need to attach an internet gateway
  • a subnet can have only one route table attached to it
  • a VPC can have only one internet gateway

VPC Security

  • with a network ACL I can deny access from an IP or range that is performing a DDoS attack, for example
  • network ACL rules are evaluated from the lowest number to the highest; it is convenient to number them in increments of 5: 100, 105, 110 (see the sketch below)
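A sketch of a deny rule with the CLI; the ACL ID and CIDR range are placeholders:

# deny all inbound TCP traffic from a hostile range, evaluated as rule 100
aws ec2 create-network-acl-entry --network-acl-id acl-0abc123 \
  --ingress --rule-number 100 --protocol tcp --port-range From=0,To=65535 \
  --cidr-block 198.51.100.0/24 --rule-action deny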

Configuring A NAT Instance

  • choose a private network, with no internet gateway in this net
  • create a security group called, for example, nat
  • permit inbound rules for 80 and 443 with the source being the network chosen before
  • permit outbound 80 and 443 towards all of the internet, 0.0.0.0/0
  • create a new Linux machine; search the AMIs for "amazon-ami-vpc-nat", a NAT instance preconfigured for this purpose; select micro, and this instance needs to stay in a public subnet. Assign the security group created before
  • assign an elastic IP to this new instance. We also need to disable the "Change Source/Dest Check" attribute for this instance to let it translate requests
  • for the private network we need to add the route destination 0.0.0.0/0 with target i-984484tjfgf, that is, the NAT instance identifier (see the sketch after this list)
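A sketch of the last two steps with the CLI; the route table ID is a placeholder:

# disable source/destination checking so the instance can forward traffic
aws ec2 modify-instance-attribute --instance-id i-984484tjfgf --no-source-dest-check
# route the private subnet's internet-bound traffic through the NAT instance
aws ec2 create-route --route-table-id rtb-0abc123 \
  --destination-cidr-block 0.0.0.0/0 --instance-id i-984484tjfgf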

Elastic IP Addresses And Elastic Network Interfaces

  • the number of network interfaces that you can attach to an instance depends on the instance type; for example, the micro can have at most 3 network interfaces
  • if you stop an instance in a VPC, its elastic IP stays associated (in EC2-Classic it would be disassociated)

Concept To Production: Building A Highly Available Wordpress App

Configuring The AMI For Our Web Application

we need a script to sync an EC2 directory with S3

aws s3 sync /var/www/wordpress/wp-content/uploads s3://mybucketname/uploads

the S3 bucket is connected to our CloudFront CDN

we need to put the sync command in crontab to run every minute (see the sketch below) and add a rewrite URL in the Apache config on the EC2 instance
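The crontab entry might look like this, with the same paths and bucket as above:

# sync the uploads directory to S3 every minute
* * * * * aws s3 sync /var/www/wordpress/wp-content/uploads s3://mybucketname/uploads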

a2enmod rewrite

# and at the bottom of the site config file add
RewriteRule ^/wp-content/uploads(.*)$ http://ghufuhr.cloudfront.net/uploads$1 [R=302]

after that we create an AMI, and whenever we launch it we always put this script in the User-data field:

#!/bin/bash
# pull the latest WordPress code and vhost from S3, then reload Apache
mkdir -p /var/www/wordpress
aws s3 cp --recursive s3://wordpressapp/wordpress/ /var/www/wordpress/
aws s3 cp s3://wordpressapp/wordpress_vhost /etc/apache2/sites-enabled/
chown -R www-data:www-data /var/www/wordpress
apachectl graceful

this configuration lets the instance always have the current WordPress code.

to stress a machine we can use the stress command to raise the CPU load
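For example (the worker count and duration are arbitrary):

# spin up 2 CPU workers for 60 seconds
stress --cpu 2 --timeout 60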

Elastic Beanstalk

Using The EB Command Line Tool To Manage Our Code And Beanstalk Environments

apt-get install git python2.7
cd /usr/bin
# and fix the symbolic links so that python and python2 point to it
python --version

download the AWS-ElasticBeanstalk-CLI and fix the PATH to its directory

mkdir myapp 
git init
eb init
# initialize with the credentials, name, etc.
# it takes a bit of time to create the environment
vi index.html
git add index.html
git commit -a -m "first commit"
git aws.push 
# will be necessary some minutes

new branch
git checkout -b prod

  • we are working in the production branch
  • everything we do in git we need to do for Elastic Beanstalk too

eb branch

eb start

  • this starts a new environment
  • the update takes some minutes

git branch
git checkout master
eb branch

eb stop

  • kills the environment

Amazon CloudFormation

  • CloudFormer creates a template for you from an environment you have already built.

Distributed Services

An Overview Of Amazon SWF

  • step-by-step workflow
  • a task coordination and state management service
  • a workflow can consist of human events
  • the queue service guarantees the execution of tasks but not their order; SWF instead guarantees order
  • we have workers and deciders
  • a worker performs an activity
  • a worker polls continuously to see if there is a task for it, and after the execution reports back to SWF
  • for example: run some code on EC2 machines and return the result to SWF, which decides what to do next
  • a part of the workflow can be done by humans, as in an e-commerce process when it is necessary to ship objects
  • an SQS message can live 14 days; an SWF workflow can live for 1 year
  • SWF supports synchronous and asynchronous tasks

Amazon Simple Queue Service (SQS)

  • for decoupling components (loose coupling)
  • it guarantees that a message in the queue is delivered at least once
  • it does not guarantee the absence of duplicate messages
  • multiple readers/writers; locking (the visibility timeout) guarantees a message isn't processed by two readers at once
  • message max 256 KB; attach a pointer to data in S3 or DynamoDB for anything bigger, for example the location of an image
  • they charge per request
  • no ordering guarantee

Message Lifecycle

  • a message put in the queue is stored redundantly (HA) on multiple SQS servers
  • a message taken from the queue becomes unavailable for the "visibility timeout", to avoid it being taken again by another process; after this time the message becomes visible again
  • after the process finishes its work, it deletes the message from the queue (see the sketch after this list)
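The lifecycle as CLI calls; the queue URL and receipt handle are placeholders:

# 1. produce a message
aws sqs send-message --queue-url https://queue.amazonaws.com/123456789012/myqueue \
  --message-body "process image 42"
# 2. consume it; it now becomes invisible for the visibility timeout
aws sqs receive-message --queue-url https://queue.amazonaws.com/123456789012/myqueue
# 3. delete it once the work is done (the receipt handle comes from step 2)
aws sqs delete-message --queue-url https://queue.amazonaws.com/123456789012/myqueue \
  --receipt-handle "AQEB...example"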

polling

  • short polling: a subset of the SQS servers is polled, so it can happen that a message is not found
  • long polling: waits until a message is available or the connection times out; it reduces requests and decreases costs

terminology

  • delay queue: the amount of time to delay the first delivery of every message added to the queue
  • message retention period: the amount of time a message lives in the queue if not deleted, from 1 minute to 14 days
  • receive message wait time: if set to a value greater than 0, long polling is enabled. This is the maximum amount of time that a long-polling call will wait for a message to become available before returning empty

limitations

  • delay queues: 0 seconds to 15 minutes
  • message retention period: 1 minute to 14 days
  • visibility timeout: 0 seconds to 12 hours

if you want ordering, you need to put sequencing/timing information inside the messages themselves, as sketched below
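One way is a numeric message attribute carrying a sequence number; the attribute name "seq" is arbitrary:

# tag each message with its position so the consumer can reorder
aws sqs send-message --queue-url https://queue.amazonaws.com/123456789012/myqueue \
  --message-body "step one" \
  --message-attributes '{"seq":{"DataType":"Number","StringValue":"1"}}'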

DynamoDB (NoSQL Service)

DynamoDB Essentials

  • auto scaling
  • SSD storage devices
  • provisioned throughput
  • horizontal scaling, based on the requests
  • runs across multiple AZs
  • backed up automatically
  • a primary key is mandatory for each table
  • atomic counters; read and write actions are performed inline
  • cost effective: you don't provision instance sizes, you pay for requests and storage; you don't reserve requests
  • integrates with Amazon Redshift and Elastic MapReduce (see the sketch after this list)
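A minimal sketch of creating a table with provisioned throughput via the CLI; the table name, key, and capacity values are illustrative:

# create a table with a hash primary key and provisioned throughput
aws dynamodb create-table --table-name users \
  --attribute-definitions AttributeName=user_id,AttributeType=S \
  --key-schema AttributeName=user_id,KeyType=HASH \
  --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5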

service limits

  • 256 tables per region; beyond that it is necessary to request a limit increase
  • table and secondary index names can be up to 255 characters long
  • the us-east-1 (Virginia) region has limits different from the other regions