
AWS CloudFormation – Part 6

We started from scratch in Part 1 and have made the infrastructure evolve ever since. If you followed along up to Part 5, we created new stacks, updated existing ones, had errors leading to rollbacks, and so on. By now, I am sure you have a good grasp of how CloudFormation ticks.

For the last stretch, we are just going to add scaling rules and a few conveniences, like choosing the instance type in the CloudFormation interface instead of having it hardcoded in a file. A few final touches.

If we went to the Auto Scaling group interface in the AWS console, we could change the settings manually: the minimum, maximum and desired number of instances. We could add scaling policies as well. Say, if the instance CPU is greater than 80% for 2 consecutive periods of 5 minutes, we add an instance. We remove an instance if the CPU drops below 80% for 2 consecutive periods of 5 minutes. So we need a scale-up and a scale-down policy, to add and remove one instance respectively. And we need triggers: CloudWatch alarms that fire when the CPU thresholds are crossed. We can add all of that to the app.yaml file.

  AppScaleUpPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: !Ref DeployAppASG
      Cooldown: '60'
      ScalingAdjustment: 1

  AppScaleDownPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: !Ref DeployAppASG
      Cooldown: '300'
      ScalingAdjustment: -1
  
  CPUAlarmHigh:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Scale-up if CPU > 80% for 2 consecutive periods of 5 minutes
      MetricName: CPUUtilization
      Namespace: AWS/EC2
      Statistic: Average
      Period: 300
      EvaluationPeriods: 2
      Threshold: 80
      AlarmActions: [!Ref AppScaleUpPolicy]
      Dimensions:
      - Name: AutoScalingGroupName
        Value: !Ref DeployAppASG
      ComparisonOperator: GreaterThanThreshold

  CPUAlarmLow:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Scale-down if CPU < 80% for 2 consecutive periods of 5 minutes
      MetricName: CPUUtilization
      Namespace: AWS/EC2
      Statistic: Average
      Period: 300
      EvaluationPeriods: 2
      Threshold: 80
      AlarmActions: [!Ref AppScaleDownPolicy]
      Dimensions:
      - Name: AutoScalingGroupName
        Value: !Ref DeployAppASG
      ComparisonOperator: LessThanThreshold

Update the master stack and have a look at the Auto Scaling group scaling policies in the AWS console.

If you fancy, you can go stress the one running instance and see if it works. It does! Or just take AWS’ word for it.

We are in good shape, and if we scale our “app” to a minimum of 2 instances and a yet-to-be-defined maximum, we could sleep through the night like babies. Two instances would be load balanced at all times. If we lost one, the Auto Scaling group would launch a new one. If we got a little spike of activity, the Auto Scaling group would add instances to relieve the pressure. This ensures service continuity instead of a crash because the load got too high. Once the pressure is gone, the Auto Scaling group removes one instance at a time until it reaches the desired minimum. That saves us money. During all these changes, the Application Load Balancer automatically finds its targets. Nice, isn’t it?
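
If you want to bake that into the template rather than tweak it in the console, the sizes live on the Auto Scaling group resource itself. A minimal sketch, reusing the DeployAppASG resource from the previous parts and leaving its other properties untouched (the maximum of 4 is only an illustration, pick whatever fits your budget):

  DeployAppASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '2'          # never fewer than two load-balanced instances
      MaxSize: '4'          # illustrative ceiling, choose your own
      DesiredCapacity: '2'  # start at the minimum and let the scaling policies take over
      # LaunchConfigurationName, VPCZoneIdentifier, TargetGroupARNs, ... stay as they were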

We can improve the app stack by removing the hardcoded instance type and AMI ID.

For the first part, it’s just about modifying the parameters to offer a choice of instance types, as sketched below. For the second part, we’ll introduce Mappings that determine which AMI ID should be used under which conditions.
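
For illustration, a ServerInstanceType parameter backing the Launch Configuration could look like this; the default and the allowed values are just an example, list whatever instance types make sense for your workload:

  ServerInstanceType:
    Description: The EC2 instance type for the app servers
    Type: String
    Default: t2.micro
    AllowedValues:
      - t2.micro
      - t2.small
      - t2.medium
    ConstraintDescription: must be one of the allowed EC2 instance types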

Look at the Mappings section. It really is just a simple way to select the AMI ID according to the region and environment we want. We could have a much more complicated Mappings section, with many regions and environments, but for the time being this moves the AMI ID out of the Launch Configuration and into a more fitting place. Plus, it’s going to be easier to maintain.

Mappings:
  RegionMap:
    eu-west-1:
      dev: ami-0172c7e8739ccb954

We now need to select the right AMI ID in the Launch Configuration. The intrinsic function FindInMap is going to help with that.

  DeployAppLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId:
        Fn::FindInMap:
          - RegionMap
          - Ref: AWS::Region
          - Ref: EnvironmentType
      InstanceType: !Ref ServerInstanceType
      KeyName: !Ref KeyPairName
      SecurityGroups:
        - !Ref PrivateHostSecurityGroup

EnvironmentType? Shouldn’t that be “dev”? Yes, it should. But we are going to use a parameter for the environment type. This way, we can reuse the templates to deploy to the testing, security, demo and production environments. Sure, we will have to select the VPC and subnets. But the AMI ID for each environment will be committed to our code repository, and the right one will be mapped for us. A simple extension of the current mapping would be to add test, sec, demo and prod AMI IDs. It could look something like this:

Mappings:
  RegionMap:
    eu-west-1:
      dev: ami-0172c7e8739ccb954
      test: ami-...
      sec: ami-...
      demo: ami-...
      prod: ami-...

For the EnvironmentType parameter, we are going to offer a selection of environments.

  EnvironmentType:
    Description: The environment type
    Type: String
    Default: dev
    AllowedValues:
      - prod
      - dev
    ConstraintDescription: must be prod or dev

We can now update the app stack in the master stack with the new EnvironmentType.
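
In master.yaml, that comes down to declaring EnvironmentType there too and passing it through to the nested app stack. A minimal sketch, assuming the nested stack resource is called AppStack and the template is fetched from an S3 bucket; the resource name and TemplateURL are illustrative, keep whatever your master stack already uses:

  AppStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-template-bucket/app.yaml  # illustrative location
      Parameters:
        EnvironmentType: !Ref EnvironmentType
        ServerInstanceType: !Ref ServerInstanceType
        KeyPairName: !Ref KeyPairName
        # the remaining parameters (VPC, subnets, security group, ...) are passed through as before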

If you update the master stack after these successive modifications, you will notice that nothing really updates. This is because we defined defaults, and those defaults match what is in place right now, so there is nothing to change. But the templates are easier to maintain and use.

Note as well that app.yaml and master.yaml have a lot of parameters in common. This could be seen as useless duplication, and frankly, sometimes it is; it is not always needed. You could strip from app.yaml the parameters in the Parameters and Metadata sections that the master stack provides anyway. But having everything laid out in app.yaml makes the app stack autonomous: you could use it as a standalone stack instead of a nested one. It is also a really good way to see at a glance which parameters are needed, instead of having to analyse the file and pick out the parameter references everywhere.

Check out part6 from the GitHub repository to verify your configuration if you tried it on your own.

If you read the series up to this point: kudos, you really wanted to play along and give it a shot.

I hope you enjoyed this introduction to AWS CloudFormation and to coding and versioning your infrastructure.

Have a good one 🙂

Want to know more? The experts of our Agile Software Factory are here to help you!
