Ansible Dry Run not catching a created resource in a previous task

I'm archiving a file and then transferring it to s3:

- name: Compress a directory
  archive:
    path: "local/usr/directory"
    dest: "local/user/directory.tgz"
  register: registered_directory

- name: Transfer archived directory to s3
  aws_s3:
    bucket: "{{ bucket }}"
    object: "{{ bucket_folder }}/directory.tgz"
    src: "{{ registered_directory.dest }}"
    mode: put
    region: "{{ aws_region }}"

This playbook currently runs only as a dry run (ansible-playbook --check), and there is a requirement that the dry run succeed before the actual build can proceed. But the dry run fails at the second task (the S3 transfer) with the following error:

"msg": "Local object for PUT does not exist"

I'm aware that the physical .tgz file doesn't exist at this point in time, since this is only a dry run and the first task isn't actually executed. I'm looking for some way for the second task to recognize that it uses the output of the first task to grab the tar file. Is there any way to express this dependency?

CodePudding user response:

When working with check mode, you have to decide for yourself how to deal with situations like this one, where a step depends on a previous step having actually run. You have three basic options.

Run the first task even in check mode:

- name: Compress a directory
  archive:
    path: "local/usr/directory"
    dest: "local/user/directory.tgz"
  register: registered_directory
  check_mode: false
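Note that check_mode: false means the archive really is created on the managed host even during a dry run, so registered_directory.dest points at a file that actually exists. The trade-off is that your dry run now has a side effect.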

Skip the second task when in check mode:

- name: Transfer archived directory to s3
  aws_s3:
    bucket: "{{ bucket }}"
    object: "{{ bucket_folder }}/directory.tgz"
    src: "{{ registered_directory.dest }}"
    mode: put
    region: "{{ aws_region }}"
  when: not ansible_check_mode
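ansible_check_mode is a built-in magic variable that is true whenever the play runs with --check. If several tasks share this dependency, one variant (a sketch, assuming Ansible 2.3+ for named blocks) is to group them in a block so the guard is written once:

- name: Steps that only make sense outside of a dry run
  when: not ansible_check_mode
  block:
    - name: Transfer archived directory to s3
      aws_s3:
        bucket: "{{ bucket }}"
        object: "{{ bucket_folder }}/directory.tgz"
        src: "{{ registered_directory.dest }}"
        mode: put
        region: "{{ aws_region }}"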

Or ignore errors from the second task when in check mode:

- name: Transfer archived directory to s3
  aws_s3:
    bucket: "{{ bucket }}"
    object: "{{ bucket_folder }}/directory.tgz"
    src: "{{ registered_directory.dest }}"
    mode: put
    region: "{{ aws_region }}"
  ignore_errors: "{{ ansible_check_mode }}"
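Because ignore_errors is templated from ansible_check_mode, failures are only ignored during a dry run; in a normal run the task fails the play as usual. The failed task still shows up in the dry-run output, just marked as ignored.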

You can also do more complex things, like changing the module arguments, but I think that's getting too far from the spirit of a dry run.

- name: Transfer archived directory to s3
  aws_s3:
    bucket: "{{ bucket }}"
    object: "{{ bucket_folder }}/directory.tgz"
    src: "{{ ansible_check_mode | ternary(registered_directory.dest, '/etc/hostname') }}"
    mode: put
    region: "{{ aws_region }}"
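Note the argument order of the ternary filter: the first value is used when the test is true, so the dummy path /etc/hostname (an arbitrary file that should exist on most Linux hosts) is only substituted while in check mode, and real runs still upload registered_directory.dest.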