Amazon OpsWorks

Notes from the book Learning AWS OpsWorks

CHAP 1 A New Way to Scale

With OpsWorks, AWS services such as EC2, ELB, EBS, Elastic IPs, security groups, Route 53, CloudWatch,
and IAM can all play a part in a stack's configuration.

OpsWorks is organized into four areas:

  • Stacks: the highest-level component; a container for an environment (staging, production, etc.).
  • Layers: blueprints for EC2 instances, EBS volumes, load balancers, and so on.
  • Instances: the EC2 instances that run inside layers.
  • Apps: the code of an application to be deployed.

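As a rough illustration of these four areas, here is a minimal boto3 sketch; the region, ARNs, and names are placeholders rather than values from the book:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Stack: the top-level container (e.g. a "staging" environment).
    stack = opsworks.create_stack(
        Name="staging",
        Region="us-east-1",
        ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
        DefaultInstanceProfileArn="arn:aws:iam::123456789012:instance-profile/aws-opsworks-ec2-role",
    )
    stack_id = stack["StackId"]

    # Layers, instances, and apps all live inside that stack.
    print(opsworks.describe_layers(StackId=stack_id)["Layers"])
    print(opsworks.describe_instances(StackId=stack_id)["Instances"])
    print(opsworks.describe_apps(StackId=stack_id)["Apps"])
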
The importance of OpsWorks:

  1. It integrates well with EC2, ELB, Elastic IPs, and CloudWatch.
  2. You can create full stacks, which can then be cloned into other stacks (see the sketch after this list).
  3. It supports auto scaling, with two options: time-based and load-based.
  4. It deploys application code from source code repositories hosted in Git and Subversion, as well as from S3 and HTTP archives.
  5. It helps with disaster recovery (DR): with OpsWorks, systems can be architected so that they're region agnostic.
  6. When auto healing is enabled, instances can fail and OpsWorks will automatically replace them in their entirety.
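
As referenced in point 2, cloning a full stack maps onto the CloneStack API; a minimal boto3 sketch with placeholder IDs and ARNs:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Clone an existing stack (placeholder source stack ID and service role ARN).
    clone = opsworks.clone_stack(
        SourceStackId="11111111-2222-3333-4444-555555555555",
        ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
        Name="staging-clone",
        ClonePermissions=True,  # copy the stack's user permissions as well
    )
    print(clone["StackId"])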

CHAP 2

Nothing of particular interest here: it covers the creation of an Amazon (AWS) account.

CHAP 3 Stack it Up!

An interesting note about ELB:

Algorithms

ELB provides Round Robin (RR) load balancing and can include sticky session handling. There is currently no ability to balance traffic based on least connections or a pre-determined weight using ELB, but these features could become available in the near future. If you require either of these options, HAProxy will be your best alternative.

Protocols

ELB supports HTTP, secure HTTP (HTTPS), TCP, and secure TCP (SSL), and works with the following ports: 25, 80, 443, and 1024-65535. If for some reason you require ports between 443 and 1024, HAProxy will be your best alternative.

Traffic spikes

ELB is designed to handle a very large number (20K+) of concurrent requests per second with a gradually increasing traffic pattern. ELB is not designed to handle sudden heavy spikes of traffic unless it is pre-warmed first. If frequent sudden heavy spikes of traffic are anticipated for your application, HAProxy will be the better option.

Timeouts

ELB will timeout persistent socket connections after 60 seconds if left idle. This condition could present a problem if you are waiting for large files to be generated by a system that doesn't provide frequent callbacks.

HAProxy

HAProxy is an excellent alternative to ELB. HAProxy is designed for load balancing
and it follows the UNIX philosophy of "do one thing and do it well". The use of
HAProxy will allow you to overcome almost any constraint that ELB presents,
with the exception of SSL termination.
If HAProxy is used, both the financial and administration costs will be significantly higher: m1.large instances are needed at the very least, and scaling an infrastructure over multiple AWS regions is also much more of a challenge
when using HAProxy.

CHAP 4 Layers – the Blueprint for Success

Configuring layers

  • Load Balancer: HAProxy or Elastic Load Balancer
  • App Server: Static Web Server, Rails App Server, PHP App Server, Node.js App Server
  • DB: MySQL
  • Other: Memcached, Ganglia, Custom

Built-in Chef recipes

It's not an absolute requirement that you know Chef in order to work with OpsWorks.
It's totally possible to build an entire infrastructure without any Chef knowledge;
however, if you require any type of custom layer, Chef will more than likely become
a requirement.

The lifecycle of an instance in OpsWorks is driven by events, each of which runs Chef recipes:
  • Setup: runs on a new instance once it has successfully booted.
  • Configure: runs on the stack's instances when any instance enters or leaves the online state. The Configure event is a good time to regenerate configuration files.
  • Deploy: runs when a deploy command is issued; there are two types of deployments, Application and Command.
  • Undeploy: runs when an app is deleted, or to remove an app from application instances.
  • Shutdown: runs after instances are instructed to stop, but before the instance is actually terminated; there is a 45-second time window.

Custom Chef recipes

Custom recipes are used for configuration that the built-in recipes don't cover; they are attached to a layer's lifecycle events (see the sketch below).
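
A minimal boto3 sketch of attaching custom recipes to a layer's lifecycle events; the layer ID and cookbook/recipe names are placeholders:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Map custom recipes from your own cookbook onto the five lifecycle events.
    opsworks.update_layer(
        LayerId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        CustomRecipes={
            "Setup": ["mycookbook::setup"],
            "Configure": ["mycookbook::regenerate_config"],
            "Deploy": ["mycookbook::deploy_hooks"],
            "Undeploy": [],
            "Shutdown": ["mycookbook::drain_connections"],
        },
    )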

EBS volumes

When adding EBS volumes to a layer, you specify where the volume should be mounted, the RAID level, the number of disks, and the size per disk. If EBS volumes are to be assigned to a high-performance SQL database system, it's important to use RAID 10 for speed and redundancy.
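
A boto3 sketch of the same idea; the stack ID, mount point, and sizes are placeholder values:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Database layer with a RAID 10 array of four 100 GB EBS volumes
    # mounted at /vol/mysql.
    opsworks.create_layer(
        StackId="11111111-2222-3333-4444-555555555555",
        Type="db-master",
        Name="MySQL",
        Shortname="db-master",
        VolumeConfigurations=[
            {
                "MountPoint": "/vol/mysql",
                "RaidLevel": 10,
                "NumberOfDisks": 4,
                "Size": 100,  # size per disk, in GB
            }
        ],
    )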

Elastic IPs

For an App Server layer whose instances will sit behind an Elastic Load Balancer, no Elastic IPs are required.

OS Packages

As you start typing the name of a package, you are presented with a list of package names that match the keyword.

IAM instance profile

When the staging stack was first created, a default IAM instance profile called
aws-opsworks-ec2-role was automatically set up by OpsWorks.
If you are developing applications that need to make use of other AWS resources such as S3 buckets, RDS databases, and so on, it is a good idea to construct IAM policies in advance that allow this access.

Auto healing

When auto healing is enabled at the layer level, any attempt to shut down or terminate an instance outside of OpsWorks, such as through the EC2 console or the command line interface, will trigger OpsWorks to automatically heal the instance.
During the auto healing process, OpsWorks will re-attach any EBS volumes that were attached to the unhealthy instance. If you delete an EBS volume using the EC2 console, OpsWorks will replace the EBS volume automatically, but it will not replace the data.
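
Auto healing is a per-layer flag; a minimal boto3 sketch of enabling it, with a placeholder layer ID:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Turn auto healing on for an existing layer.
    opsworks.update_layer(
        LayerId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        EnableAutoHealing=True,
    )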

CHAP 5 In an Instance

Instance types

  • Micro: low-traffic sites, blogs, and small administrative applications.
  • Standard 1st Gen (m1 class): small and mid-size databases, data processing, encoding, and caching.
  • Standard 2nd Gen (m3 class): a greater number of virtual CPUs than the m1 class.
  • High-Memory (m2 class): high-performance databases, as well as caches such as Memcached.
  • High-CPU (c1 class): compute optimized.
  • High-I/O (hi1 class): NoSQL databases such as Cassandra and MongoDB, as well as scale-out transactional databases such as MySQL and PostgreSQL.
  • High-Storage (hs1 class): high network performance and throughput of up to 2.6 GB/s; suited to Hadoop and other clustered file systems.

Instance scaling types

  • 24/7 instances: the default scaling type. If you plan on building an auto scaling array behind a load balancer, at least two 24/7 instances are required.
  • Time-based instances: started and stopped on a schedule you define per hour of the day and day of the week (see the sketch after this list).
  • Load-based instances: started and stopped automatically based on load metrics and thresholds which you can define.
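
As noted in the time-based item above, the schedule can also be set through the API; a boto3 sketch with a placeholder instance ID (the instance must have been created with the time-based scaling type):

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Run the instance only from 09:00 to 17:59 UTC on Mondays;
    # "on" marks the hours during which the instance should be running.
    business_hours = {str(hour): "on" for hour in range(9, 18)}

    opsworks.set_time_based_auto_scaling(
        InstanceId="99999999-8888-7777-6666-555555555555",
        AutoScalingSchedule={"Monday": business_hours},
    )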

Adding instances

The Root device type should be set to EBS backed, which will
provide faster boot times and persistent data when stopped and started again.
When an instance is first added, it's put into a stopped state. This gives you additional control: you have the option of adding several instances and then bringing them all online at the same time.
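
The same flow expressed as a boto3 sketch; the IDs and instance type are placeholders:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Add an EBS-backed instance to a layer; it starts out in the stopped state.
    instance = opsworks.create_instance(
        StackId="11111111-2222-3333-4444-555555555555",
        LayerIds=["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"],
        InstanceType="m1.small",
        RootDeviceType="ebs",
    )

    # Bring it online when ready; several instances can be started together.
    opsworks.start_instance(InstanceId=instance["InstanceId"])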

"After scaling up/down, ignore metrics for" is a very important setting. If it is improperly configured, real-world traffic could lead to an excessive number of instances booting up.
By default, OpsWorks provides 80 percent up and 30 percent down for CPU, with
a server batch of 1, with 5 and 10 minute threshold windows, and 5 and 10 minute
metrics windows. By using the default settings, if a layer experiences 80 percent CPU
utilization across all instances for 5 consecutive minutes, one additional instance will
be started. Once the instance is started, OpsWorks will ignore metrics for a period
of 5 minutes. After the 5 minutes have elapsed, OpsWorks will begin to monitor the
metrics again, and if the CPU threshold still exceeds 80 percent for the layer, another
instance will be launched, and so on.
The reverse action will take place according to
the parameters in the Down row if CPU utilization falls below 30 percent.

One thing to note with automatic load-based scaling is that it does not create new
instances. Automatic load-based scaling only starts and stops those instances which
you have already created, and that are in a stopped state.
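
The same thresholds can be set through the API; a boto3 sketch mirroring the defaults described above, with a placeholder layer ID:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Scale up at 80% CPU, down at 30% CPU, one instance per batch,
    # with 5/10 minute threshold windows and 5/10 minute ignore-metrics windows.
    opsworks.set_load_based_auto_scaling(
        LayerId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        Enable=True,
        UpScaling={
            "InstanceCount": 1,
            "CpuThreshold": 80.0,
            "ThresholdsWaitTime": 5,  # minutes the threshold must be exceeded
            "IgnoreMetricsTime": 5,   # minutes to ignore metrics after scaling
        },
        DownScaling={
            "InstanceCount": 1,
            "CpuThreshold": 30.0,
            "ThresholdsWaitTime": 10,
            "IgnoreMetricsTime": 10,
        },
    )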

CHAP 6 Bring the Apps!

Deploy

Chef JSON: used when you need to pass variables to the stack that should override any preset custom Chef JSON (see the sketch below).
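
A boto3 sketch of a deployment that passes override Chef JSON; the IDs and JSON keys are placeholders:

    import json

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Deploy an app and pass one-off Chef JSON that overrides the stack's
    # preset custom JSON for this deployment only.
    opsworks.create_deployment(
        StackId="11111111-2222-3333-4444-555555555555",
        AppId="bbbbbbbb-cccc-dddd-eeee-ffffffffffff",
        Command={"Name": "deploy"},
        CustomJson=json.dumps({"my_app": {"feature_flag": True}}),
    )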

CHAP 7 Big Brother

OpsWorks monitoring

It's possible to view metrics for an entire stack, a specific layer, or a specific instance.

Stack metrics

The maximum graphing period is 2 weeks. Things to know:

  • Graphs are updated every 120 seconds.
  • The graphs display average values for layers that have more than one instance.
  • Graphing time periods can be specified from 1 hour to 2 weeks.

Metrics:

  • CPU
  • Memory
  • Load averages
  • Number of active processes
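
The OpsWorks console draws these graphs itself; the per-instance metrics are also published to CloudWatch under the AWS/OpsWorks namespace, so a rough programmatic stand-in is sketched below (the namespace, metric name, and dimension name are assumptions to verify against your account, and the instance ID is a placeholder):

    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Pull the 15-minute load average for one instance over the last hour.
    # The dimension value is the OpsWorks instance ID, not the EC2 instance ID.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/OpsWorks",  # assumed namespace
        MetricName="load_15",      # assumed metric name
        Dimensions=[{"Name": "InstanceId",
                     "Value": "99999999-8888-7777-6666-555555555555"}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    print(stats["Datapoints"])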

CHAP 8 Access Control

It is possible to import IAM users into the instances as SSH users; an SSH key pair is required for each user (see the sketch below).
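
A boto3 sketch of granting an IAM user SSH access to a stack; the stack ID and user ARN are placeholders:

    import boto3

    opsworks = boto3.client("opsworks", region_name="us-east-1")

    # Allow the IAM user to SSH (and sudo) on the stack's instances.
    # The user's public SSH key must also be registered in their user profile.
    opsworks.set_permission(
        StackId="11111111-2222-3333-4444-555555555555",
        IamUserArn="arn:aws:iam::123456789012:user/alice",
        AllowSsh=True,
        AllowSudo=True,
    )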

CHAP 9 Instance Agent Command Line Interface

Instance Agent CLI

After logging in to an instance, you can obtain information from the OpsWorks agent on the command line.

Displays a report about the status of the agent

sudo opsworks-agent-cli agent_report

Outputs the configuration of the stack based on a lifecycle event such as setup, configure, or deploy

sudo opsworks-agent-cli get_json [activity] [date]

Displays extended information about the instance

sudo opsworks-agent-cli instance_report

Lists the time of each activity that has been executed on this instance

sudo opsworks-agent-cli list_commands [activity] [date]

Runs a lifecycle event such as setup, configure, deploy, undeploy, start, stop, or restart. You can also pass JSON-formatted configuration options

sudo opsworks-agent-cli run_command [activity] [date] [/path/to/valid/json.file]

Tails the most recent OpsWorks agent log file

sudo opsworks-agent-cli show_log [activity] [date]

Outputs the state and configuration of the stack

sudo opsworks-agent-cli stack_state

CHAP 10 Multi-region Architecture

A region can go down; the idea is to create a copy of the entire environment in another region and use Route 53 to manage traffic between them.

Amazon Route 53

  • Low query and record-update latency
  • Priced at $0.50 per hosted zone per month and $0.50 per million queries
  • Route 53 offers the ability to be integrated with IAM, and by doing this, it's possible to create JSON-formatted policies so that you can control access to zones and records.
  • Route 53 also offers AWS endpoint health checking, which is a great solution when paired with a failover record policy

OpsWorks and Route 53

  • Route 53 can balance traffic across multiple regions

Routing policies:

  • Simple: round robin.
  • Weighted: each region is assigned a weight and the load is statically distributed according to it.
  • Latency: traffic is routed to the region closest (lowest latency) to the client.
  • Failover: create basic HTTP or TCP health checks and choose how traffic is handled, e.g. balance across healthy endpoints, or use one region only when the other is down (see the sketch below).
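
As an example of the failover policy, here is a boto3 sketch that creates a health check and a PRIMARY/SECONDARY record pair; the domain names, IP addresses, and hosted zone ID are placeholders:

    import uuid

    import boto3

    route53 = boto3.client("route53")

    # Health check against the primary region's endpoint.
    health_check = route53.create_health_check(
        CallerReference=str(uuid.uuid4()),
        HealthCheckConfig={
            "Type": "HTTP",
            "FullyQualifiedDomainName": "primary.example.com",
            "Port": 80,
            "ResourcePath": "/healthcheck",
        },
    )

    # Failover record pair: traffic goes to PRIMARY while it is healthy,
    # otherwise Route 53 answers with SECONDARY.
    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": "primary-us-east-1",
                        "Failover": "PRIMARY",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "203.0.113.10"}],
                        "HealthCheckId": health_check["HealthCheck"]["Id"],
                    },
                },
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "www.example.com",
                        "Type": "A",
                        "SetIdentifier": "secondary-eu-west-1",
                        "Failover": "SECONDARY",
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "198.51.100.20"}],
                    },
                },
            ]
        },
    )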