Ansible and AWS ASG, a (really) dynamic inventory

Datetime: 2016-08-23 03:30:25

I found myself searching for a ridiculously long time to achieve what I believed was a simple task: applying an Ansible role to newly created instances… started by an Auto Scaling Group. If you're used to Ansible, you know that it relies on an inventory to apply a playbook, but obviously, when you're firing up EC2 instances from that same playbook, you cannot know in advance what your virtual machines' IP addresses will be, nor can you always use an external inventory script, the recommended method to deal with dynamic inventories.

I read that refreshing the inventory can be achieved using the following instruction:

meta: refresh_inventory
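For context, here is roughly where such an instruction would sit in a playbook — a minimal sketch, not taken from the article's own code:

```yaml
# Hypothetical sketch: force Ansible to re-read its inventory
# sources after new instances have been created.
- name: refresh the inventory so new hosts become visible
  meta: refresh_inventory
```

Note that this only helps when an external inventory source (a script or plugin) can already see the new hosts, which is exactly the dependency I wanted to avoid.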

Yet I wanted to try a fully dynamic and self-contained method, without the need for an external helper.

When starting an EC2 instance with the Ansible ec2 module, you can retrieve that data dynamically via the registered ec2 variable and then add the hosts to the inventory using the add_host module. Strangely enough, the ec2_asg module does not provide information about the created instances; this is where ec2_remote_facts comes into play.
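As a reminder, the classic ec2 + add_host pattern looks roughly like this — a sketch with placeholder variable names, not part of the article's playbook:

```yaml
# Hypothetical sketch: launch instances with the ec2 module and
# push them into the in-memory inventory in one go.
- name: launch instances
  ec2:
    region: "{{ region }}"
    image: "{{ ami_id }}"
    instance_type: t2.micro
    count: 2
    wait: yes
  register: ec2_result

- name: add the new instances to the inventory
  add_host:
    hostname: "{{ item.private_ip }}"
    groups: launched
  with_items: "{{ ec2_result.instances }}"
```

The ec2_asg module registers no such instance list, which is what forces the detour through ec2_remote_facts below.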

Consider the following playbook :

# deploy.yml

- hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - foo

# roles/foo/tasks/main.yml

- name: Create Launch Configuration
  ec2_lc:
    region: "{{ region }}"
    name: "{{ dname }}-lc"
    image_id: "{{ ami_result.results[0].ami_id }}"
    key_name: "{{ keypair }}"
    instance_type: "{{ instance_type }}"
    security_groups: "{{ security_groups }}"
  when: curstate == 'present'

- name: Fire up ASG
  ec2_asg:
    region: "{{ region }}"
    name: sandbox
    launch_config_name: "{{ dname }}-lc"
    availability_zones: "{{ azs }}"
    vpc_zone_identifier: "{{ subnets }}"
    desired_capacity: 2
    min_size: 2
    max_size: 2
    state: "{{ curstate }}"
    tags:
      - env: red
  register: asg_result

I naively thought the asg_result variable would hold the needed information, but it actually doesn't. So I had to add the following task:

- ec2_remote_facts:
    region: "{{ region }}"
    filters:
      "tag:env": "red"
  register: instance_facts

This applies the tag filter and adds the newly created instances' metadata to the instance_facts variable.
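If the tag alone matches too much (terminated instances can keep their tags for a while), the query can be narrowed directly in the API call — a sketch assuming the standard EC2 instance-state-name filter:

```yaml
# Hypothetical refinement: match the tag AND the running state
# in the filters themselves, instead of filtering afterwards.
- ec2_remote_facts:
    region: "{{ region }}"
    filters:
      "tag:env": "red"
      instance-state-name: running
  register: instance_facts
```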

Here’s an example of such gathered data:

ok: [localhost] => {
    "msg": {
        "changed": false,
        "instances": [
            {
                "ami_launch_index": "0",
                "architecture": "x86_64",
                "client_token": "foobarfoobar",
                "ebs_optimized": false,
                "groups": [
                    {
                        "id": "sg-2bd06143",
                        "name": "ICMP+SSH"
                    }
                ],
                "hypervisor": "xen",
                "id": "i-845e1238",
                "image_id": "ami-02724d1f",
                "instance_profile": null,
                "interfaces": [
                    {
                        "id": "eni-0638f67a",
                        "mac_address": "01:1b:11:1f:11:a1"
                    }
                ],
                "kernel": null,
                "key_name": "foofoo",
                "launch_time": "2016-08-05T07:09:59.000Z",
                "monitoring_state": "disabled",
                "persistent": false,
                "placement": {
                    "tenancy": "default",
                    "zone": "eu-central-1b"
                },
                "private_dns_name": "",
                "private_ip_address": "",
                "public_dns_name": "",
                "ramdisk": null,
                "region": "eu-central-1",
                "requester_id": null,
                "root_device_type": "ebs",
                "source_destination_check": "true",
                "spot_instance_request_id": null,
                "state": "running",
                "tags": {
                    "aws:autoscaling:groupName": "sandbox",
                    "env": "red"
                },
                "virtualization_type": "hvm",
                "vpc_id": "vpc-11111111"
            }
        ]
    }
}

Thanks to a very well-written blog post, I learned how to extract the information I needed, namely the instances' private IP addresses:

- name: group hosts
  add_host: hostname={{ item }} groups=launched
  with_items: "{{ instance_facts.instances | selectattr('state', 'equalto', 'running') | map(attribute='private_ip_address') | list }}"

Quite a filter, huh? :) Here we select only the instances in the running state, look up their private_ip_address attribute, and turn the result into a list, which can then be processed as items (more on Jinja2 filters).
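To see what each stage of the chain produces, the expression can be split into intermediate facts — a debugging sketch, not part of the original role:

```yaml
# Hypothetical debugging tasks: apply the Jinja2 filters one step at a time.
- set_fact:
    running_instances: "{{ instance_facts.instances | selectattr('state', 'equalto', 'running') | list }}"

- set_fact:
    private_ips: "{{ running_instances | map(attribute='private_ip_address') | list }}"

- debug:
    var: private_ips
```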

These items are added to the in-memory inventory via the add_host module, in a group named launched. We will use that group name in the main deploy.yml file:

- hosts: launched
  gather_facts: no
  tasks:
    - name: wait for SSH
      wait_for: port=22 host="{{ inventory_hostname }}" search_regex=OpenSSH delay=5

And voilà! The launched group now holds our freshly created instances, with which you can now interact from your playbook.
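Putting both plays together, the whole deploy.yml reads as follows — a sketch reassembled from the snippets earlier in this post, assuming the role layout shown above:

```yaml
# deploy.yml (reassembled sketch)
- hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - foo          # creates the LC/ASG and populates the "launched" group

- hosts: launched
  gather_facts: no
  tasks:
    - name: wait for SSH
      wait_for: port=22 host="{{ inventory_hostname }}" search_regex=OpenSSH delay=5
```

The localhost play builds the in-memory launched group; the second play then targets it as if it had been in the inventory all along.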

Another great read on the subject: Using Ansible's in-memory inventory to create a variable number of instances.