Category Archives: Ansible

Testing a newly created execution environment

Based on the previous posts, you have probably created an Ansible execution environment.

After you have created an execution environment, how do you test it?

The new Ansible CLI tool to run an Ansible playbook is called “ansible-navigator”.
It is not just an ansible-playbook execution binary; it has more features, such as:

  • Review and explore available collections
  • Review and explore current Ansible configuration
  • Review and explore Ansible documentation
  • Review execution environment images available locally
  • Review and explore an inventory
  • Run and explore a playbook

But for testing, here is what you need to do with the newly created execution environment.
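A minimal test playbook is enough here; the one below is a hypothetical example (the file name and tasks are assumed) that just confirms the EE can execute against localhost:

```yaml
# test_playbook.yml -- hypothetical minimal smoke test for a new EE
---
- name: Smoke-test the new execution environment
  hosts: localhost
  connection: local
  gather_facts: true
  tasks:
    - name: Show the Python interpreter baked into the EE
      ansible.builtin.debug:
        var: ansible_python_version
```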

$ ansible-navigator run --eei localhost/<newly created EE name> --pp never <test ansible playbook>.yml

From the above, the important option is “--pp never”, which stands for pull policy “never”.
This tells ansible-navigator that, since the image is already on the localhost, it should not try to download it.
If for some reason you forget the option, you will see the following error:

Trying to pull localhost/<new ee name>:latest...
WARN[0000] failed, retrying in 1s ... (1/3). Error: initializing source docker://localhost/<new ee name>:latest: pinging container registry localhost: Get "https://localhost/v2/": dial tcp connect: connection refused
WARN[0001] failed, retrying in 1s ... (2/3). Error: initializing source docker://localhost/<new ee name>:latest: pinging container registry localhost: Get "https://localhost/v2/": dial tcp connect: connection refused
WARN[0002] failed, retrying in 1s ... (3/3). Error: initializing source docker://localhost/<new ee name>:latest: pinging container registry localhost: Get "https://localhost/v2/": dial tcp connect: connection refused
Error: initializing source docker://localhost/<new ee name>:latest: pinging container registry localhost: Get "https://localhost/v2/": dial tcp connect: connection refused
[ERROR]: Execution environment pull failed

Also, ansible-navigator’s default UI mode is “TUI” mode rather than printing output to stdout. If you would like to run ansible-navigator in stdout mode, just add an option:

$ ansible-navigator run --eei localhost/<newly created ee> --pp never <test ansible>.yml --mode stdout
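Rather than passing --eei, --pp and --mode on every run, these can be persisted in an ansible-navigator settings file. Below is a sketch (the image name is a placeholder, and key names follow the ansible-navigator settings schema, so double-check them against your navigator version):

```yaml
# ansible-navigator.yml -- project-local settings file (sketch)
ansible-navigator:
  execution-environment:
    image: localhost/my_ee:latest   # placeholder image name
    pull:
      policy: never                 # same effect as --pp never
  mode: stdout                      # same effect as --mode stdout
```

With this file in the project directory, a plain `ansible-navigator run <playbook>.yml` picks the settings up automatically.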

Ansible Automation Platform – What is Ansible Automation Execution Environment i.e. EE?

With Red Hat’s Ansible Automation Platform 2.x, one of the big changes is the introduction of the Ansible Execution Environment.

Then questions arise:
* What is an Ansible Execution Environment?
* What is it for?

What is an Ansible Automation Execution Environment?

Below is a simple diagram summarising what it is.

High level overview of Automation Execution Environment

It is an optimised container image that contains the required binaries, Python and other libraries, and Ansible collections needed to execute Ansible playbooks.

Business/Technical problems to solve:
– To provide a simplified and consistent execution environment that enhances the automation development experience.

When a developer develops automation in their own environment and shares their Ansible playbooks with other team members, the automation experience can differ greatly depending on each person’s development environment (e.g. developing Ansible playbooks on a Mac vs on Linux).

By creating and using an Ansible Automation Execution Environment, everyone gets the same development and execution experience.

– Multiple Python environments to manage, which creates maintenance overhead.

One of the main struggles was that Ansible Tower users needed multiple Python virtual environments as the number of users or use cases increased, e.g. requirements for Python 2.7 vs Python 3.x, or modules requiring specific versions of Python libraries.

This resulted in Ansible Tower hosting multiple Python virtual environments, and if you had a cluster of Ansible Tower nodes, the administrator had to ensure all nodes had exactly the same virtual environment configuration.

Ansible Automation Platform Execution Environment & tzdata

A colleague and I were testing the migration of a ServiceNow system-provisioning Ansible workflow from Ansible 2.9 to Ansible Automation Platform 2.x and hit an issue:

No such file or directory: '/usr/share/zoneinfo/

My immediate thought was that the “tzdata” package was not installed on the UBI image.

So I added “tzdata” to bindep.txt and rebuilt the EE image, but it didn’t work: the error said there was nothing to install (i.e. the package was already installed; only the files were missing). I verified this with an “append” build step in execution-environment.yml:

[3/3] STEP 6/6: RUN ls -la /usr/share/zoneinfo/
ls: cannot access '/usr/share/zoneinfo/': No such file or directory
Error: error building at STEP "RUN ls -la /usr/share/zoneinfo/": error while running runtime: exit status 2

To get rid of this issue, I had to use an append step to “reinstall” tzdata using microdnf, as below:

$ cat execution-environment.yml
version: 1
dependencies:
  galaxy: requirements.yml
  system: bindep.txt
additional_build_steps:
  append:
    - RUN microdnf reinstall -y tzdata
    - RUN ls -la /usr/share/zoneinfo/

[3/3] STEP 6/7: RUN microdnf reinstall -y tzdata
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Downloading metadata...
Package                                Repository       Size
 tzdata-2021e-1.el8.noarch             ubi-8-baseos 485.0 kB
   replacing tzdata-2021e-1.el8.noarch
Transaction Summary:
 Installing:        0 packages
 Reinstalling:      1 packages
 Upgrading:         0 packages
 Obsoleting:        0 packages
 Removing:          0 packages
 Downgrading:       0 packages
Downloading packages...
Running transaction test...
Reinstalling: tzdata;2021e-1.el8;noarch;ubi-8-baseos
--> 0a9e9e0a1ed
[3/3] STEP 7/7: RUN ls -la /usr/share/zoneinfo/
-rw-r--r--. 1 root root 19419 Sep 20 16:34 /usr/share/zoneinfo/
[3/3] COMMIT servicenow-ee-29

Ansible Automation Platform – developer high-level workflow

For a customer recently, I had to talk about what is required to develop Ansible playbooks with Ansible Automation Platform 2.x.

Here is a high-level workflow diagram that I drew;

Ansible Automation Platform – developer high-level workflow

So, when you are writing a playbook and testing it, you need the following components:

  • Ansible IDE tool – my current favourite is VSCode, because there are so many nice extensions, and Red Hat has recently released an Ansible extension (VSCode Ansible extension)
  • Ansible-Core – the command line tool, the language and framework that makes up the foundational content before you bring in your customized content.
  • Ansible-Builder – to build execution environments
  • Ansible-navigator – to run and test playbooks with execution environments

If you haven’t built an execution environment yet, the very first thing you need to do is build one.

The four files that you need to create are:

  • bindep.txt – Bindep is a tool for checking the presence of binary packages needed to use an application/library; whatever is defined in this file will be installed as a system package.
  • requirements.txt – A Python requirements file, used for pip install -r …
  • requirements.yml – Outlines the Ansible collection requirements for ansible-galaxy to download and include in the execution environment.
  • execution-environment.yml – The definition file that ansible-builder takes as input to produce the build context for creating an Execution Environment image
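As a sketch, a minimal set of these four files might look like the following (package and collection names are illustrative only):

```yaml
# execution-environment.yml (ansible-builder version 1 format)
version: 1
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt

# requirements.yml would list collections, e.g.:
#   collections:
#     - name: community.general
#
# requirements.txt would list pip packages, e.g.:
#   requests
#
# bindep.txt would list system packages, e.g.:
#   unzip [platform:rpm]
```

Running `ansible-builder build -t my_ee .` in the directory holding these files then produces the EE image.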

Detailed examples can be found in:

Once the required execution environment is ready, it can be shared with your colleagues to improve collaboration through consistency.

Now you can also start to develop your Ansible playbooks.

Finally, once you are happy with the playbook and the execution environment, they should be uploaded to and managed in source/artifact management systems:

  • playbooks – source control management systems, e.g. GitHub, GitLab…
  • EE image – e.g. Automation Hub, Artifactory…

Then those can be properly leveraged by Ansible Automation Platform.

Ansible Automation Platform (AAP) 2.1 – released

Last week, AAP 2.1 was finally released.
Here is the release note:
Here is a blog post from Red Hat:

So to recap, some of the highlights are:

What’s included in AAP 2.1

Automation Mesh:
This is the newest addition to Ansible Automation Platform, and replaces the isolated nodes feature in 1.2. By combining automation execution environments in version 2.0 with automation mesh in version 2.1, the automation control plane and execution plane are fully decoupled, making it easier to scale automation across the globe. You can now run your automation as close to the source as possible, without being bound to running automation in a single data center. With automation mesh, you can create execution nodes right next to the source (for example, a branch office in Johannesburg, South Africa) while execution is deployed on our automation controller in Durham, NC.

Automation mesh adds:

  • Dynamic cluster capacity. You can increase the amount of execution capacity as you need it.
  • Global scalability. The execution plane is now resilient to network latency and connection interruptions and improves communications.
  • Secure automation. Bi-directional communication between execution nodes and control nodes, with full TLS authentication and end-to-end encryption.

Uploading an Ansible execution environment to a private automation hub

With Red Hat’s new Ansible Automation Platform 2, you now have to create an execution environment to run your automation.

If you are interested in knowing more about what an Ansible Execution Environment is and how to build one, please read more in:

  • What’s Ansible Execution Environment?
  • How to build the execution environment?
  • How to run Ansible automation with the new ansible-navigator?

Now, to get into the topic.

With the new Private Automation Hub, Red Hat has incorporated a “container registry” into the Ansible Private Automation Hub.

Ansible Automation Private Hub 2.0 – Container Registry

So how do we upload the execution environment that you built locally using “ansible-builder”?

Make sure you have your locally built image ready.

For this example, the image name is “my_first_ee_image”.

(builder) # podman images
REPOSITORY                                                                         TAG         IMAGE ID      CREATED       SIZE
localhost/my_first_ee_image                                                        latest      18ee9d1f8d86  2 weeks ago   747 MB
<none>                                                                             <none>      f64cafba5f7b  2 weeks ago   740 MB
<none>                                                                             <none>      2ee39fed1806  2 weeks ago   747 MB
<none>                                                                             <none>      36ea822bf2b4  2 weeks ago   740 MB
                                                                                   latest      a24b29574c26  2 weeks ago   725 MB
<none>                                                                             <none>      d5790b11bfe2  2 weeks ago   747 MB
<none>                                                                             <none>      20839e67474e  2 weeks ago   647 MB

1. Log in to your automation hub from CLI

(builder) # podman login -u=admin https://<FQDN or IP>/ --tls-verify=0
Login Succeeded!

Because I am currently running my Private Automation Hub locally, I had to add the “--tls-verify=0” or “--tls-verify=false” option.
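As an alternative to passing --tls-verify on every command, podman can also be told to treat the hub as an insecure registry via a registries.conf drop-in. A sketch, assuming a hub at hub.example.com (hypothetical FQDN; the file path is an example):

```toml
# /etc/containers/registries.conf.d/hub.conf (example path)
[[registry]]
location = "hub.example.com"  # your Private Automation Hub FQDN
insecure = true               # skip TLS verification for this registry only
```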

2. Tag the local image with the Private Automation Hub’s repository path.

(builder) # podman tag localhost/my_first_ee_image <FQDN/IP>/my_first_ee_image

(builder) # podman images
REPOSITORY                                                                         TAG         IMAGE ID      CREATED       SIZE
localhost/my_first_ee_image                                                        latest      18ee9d1f8d86  2 weeks ago   747 MB
<IP/FQDN>/my_first_ee_image                                                        latest      18ee9d1f8d86  2 weeks ago   747 MB
<none>                                                                             <none>      f64cafba5f7b  2 weeks ago   740 MB
<none>                                                                             <none>      2ee39fed1806  2 weeks ago   747 MB
<none>                                                                             <none>      36ea822bf2b4  2 weeks ago   740 MB

3. Upload your built EE to the Private Automation Hub.

The format is:

# podman push <Image ID> <URL>/<Image name>:<tag>
(builder) # podman push --tls-verify=false 18ee9d1f8d86 <FQDN/IP>/my_first_ee_image:latest
Getting image source signatures
Copying blob 32e86e0f9e53 done
Copying blob f47aeb60ec80 done
Copying blob c6ecf1ab50fb done
Copying blob ba176c23a887 done
Copying blob bb33f1f5d1b5 done
Copying blob f481c8dd5cb9 done
Copying blob 1ad9df8c4500 done
Copying blob 205c0028fa95 done
Copying blob 21adbb7c8fd5 done
Copying blob 3f69621a2c45 done
Copying blob b121b5075674 done
Copying blob 50de17c442cc done
Copying blob 39ac6dd69b36 done
Copying blob 9c229aeded24 done
Copying blob 2653d992f4ef done
Copying blob 48c89702f240 done
Copying blob 07805e10d0e1 done
Copying blob 5f70bf18a086 done
Copying blob 58e24711c0de done
Copying config 18ee9d1f8d done
Writing manifest to image destination
Storing signatures

4. Check it in the UI.

Now this image can be used in Ansible Automation Platform 2.x, as a preview.

How do I know whether “EE” is properly created and destroyed throughout AAP2’s automation execution?

So, in the previous series of articles, I discussed what an Ansible Execution Environment (EE) is and how it is consumed by the Automation Controller.

But really, how can I tell whether it is being run or not?

This can be validated by running “watch podman ps” on the execution node(s).

Below are three screenshots from “before”, “during”, and “after” a sample automation execution.

Command to run:

# su - awx
# watch podman ps


Before the automation execution


During the automation execution


After the automation execution

As you can see from the above, the execution environment is dynamically spun up as a container and cleaned up right after the execution completes.

Ansible Core & contents

Ansible Core is the command-line tool that is installed from either community repositories or the official Red Hat repositories for Ansible.

With the Ansible Automation Platform 2 release, a few terminology changes were made.
One of those is that Ansible Engine, which as we know included the ansible binaries and modules, is replaced with “Ansible-Core”.

Ansible Core is the foundational part of the Ansible Automation Platform.  It’s the command line tool, the language and framework that makes up the foundational content before you bring in your customized content.

The main difference between ansible-engine and ansible-core is “content”, e.g. modules and plugins.

Ansible Core only comes with a limited amount of content.
(Number of ansible modules comparison between ansible 2.9 vs 2.11 can be found here.)

By moving content out of ansible-core, this provides the following benefits:

  • Agility
    – Ansible content is developed and managed by open source communities, partners and Red Hat. Modules can now be updated and managed on developer-driven schedules independent of ansible releases.
  • A lean, purpose-driven execution environment
    – By only incorporating required plugins and modules, the focus comes back to the user’s automation environment, rather than overloading it with unnecessary content.

As you can guess, this change was another foundation for Ansible Automation Execution Environment a.k.a ansible EE.

Ansible module numbers difference between ansible 2.9 vs ansible 2.11

With Red Hat’s Ansible Automation Platform 2, one of the big changes is that the “ansible engine” has been changed to the “ansible core”.

With this change, the number of ansible modules shipped with ansible has dropped significantly.

This change was made to decouple the ansible binary from the modules.

Ansible module development is mainly driven by:

  • Community
  • Red Hat & partners

This decoupling allows shorter, community-driven release schedules, which usually don’t sync with the actual ansible binary releases.
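In practice, this means the modules you relied on now arrive via collections, which you pull back in through the galaxy requirements file of your execution environment. A hypothetical example (collection names and version pins are illustrative):

```yaml
# requirements.yml -- collections that restore "missing" modules
collections:
  - name: community.general      # many former core utility modules
  - name: ansible.posix          # e.g. firewalld, mount, seboolean
    version: ">=1.3.0"
```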

More details on the Ansible Core change can be found HERE.

However, coming back to the original question, so how many ansible modules are included in ansible 2.9 and ansible 2.11?

N.B. This is based on my current test environments, where I have ansible 2.9.22 and 2.11.1. The two environments’ installation methods are also different.

2.9.22 – RPM -> 3232 modules

2.11.1 – venv/pip -> 71 modules

2.12.0-2 – RPM -> 70 modules

ansible 2.9.22

[root@insights-client2 modules]# rpm -qa |grep ansible
[root@insights-client2 modules]# pwd
[root@insights-client2 modules]# ls -lR |grep ".py" | awk '{print $9}' |grep -v ^_ |grep -v "pyc" |grep -v "pyo" |wc -l
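The counting pipeline above can be sanity-checked on a throwaway directory. Below is a simplified sketch of the same filters (the file names are made up, and `ls -1` stands in for `ls -lR | awk '{print $9}'`):

```shell
# Create a fake modules directory with a mix of files
mkdir -p /tmp/fake_modules
cd /tmp/fake_modules
touch ping.py copy.py _internal.py ping.pyc notes.txt

# Same idea as the pipeline above: count .py files, excluding
# underscore-prefixed helpers and compiled .pyc/.pyo files
ls -1 | grep "\.py" | grep -v '^_' | grep -v 'pyc$' | grep -v 'pyo$' | wc -l
# -> 2  (ping.py and copy.py)
```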

ansible 2.11.1

(py36-venv) [djoo@insights-client2 modules]$ ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.12
(default, Sep 15 2020, 12:49:50) [GCC 4.8.5 20150623 (Red Hat 4.8.5-37)]. This feature will be removed from ansible-core in version 2.12.
Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible [core 2.11.1]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/djoo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/djoo/py36-venv/lib64/python3.6/site-packages/ansible
  ansible collection location = /home/djoo/.ansible/collections:/usr/share/ansible/collections
  executable location = /home/djoo/py36-venv/bin/ansible
  python version = 3.6.12 (default, Sep 15 2020, 12:49:50) [GCC 4.8.5 20150623 (Red Hat 4.8.5-37)]
  jinja version = 3.0.1
  libyaml = True
(py36-venv) [djoo@insights-client2 modules]$ pwd
(py36-venv) [djoo@insights-client2 modules]$ ls -lR |grep ".py" | awk '{print $9}' |grep -v ^_ |grep -v "pyc" |grep -v "pyo" |wc -l


[root@ip-172-31-14-126 modules]# rpm -qa |grep ansible-core
[root@ip-172-31-14-126 modules]# pwd
[root@ip-172-31-14-126 modules]# ls -lR |grep ".py" | awk '{print $9}' |grep -v ^_ |grep -v "pyc" |grep -v "pyo" |wc -l

Ansible Automation Platform 2

Over the last couple of days, Red Hat held AnsibleFest 2021.


The biggest announcement would be on the Ansible Automation Platform 2.

Last July, there was a sneak preview + early access program for Ansible Automation Platform 2.0. (Link)

N.B. This is an “early access” program, which means:

Early access means that any Red Hat Ansible Automation Platform subscriber has the ability to download, install, and file support cases against this newly released 2.0 version of the product. Because there are additional core features and functionality that are slated for the 2.1 release later this year, the formal marketing launch for both 2.0 and 2.1 versions will happen later this year at AnsibleFest. Therefore, many of the typical resources (such as documentation, blogs, etc.) will only be made available on the Red Hat Customer Portal until formal launch at AnsibleFest.

So with the release of Ansible Automation Platform 2.1 in late 2021, Ansible Automation Platform 2 will be properly GA’ed.

However, in this article, I am going to focus on the main announcements:

1. Ansible Tower and Ansible Engine are no more.

=> They are replaced with Red Hat Ansible Automation Platform.

More details to be followed below.

2. Ansible Core – “Batteries are not included”

Ansible Engine is now replaced with a component of Red Hat Ansible Automation Platform called “ansible-core”.

Unlike Ansible Engine, “ansible-core” only includes a limited number of core ansible modules.
(Number of ansible modules comparison between ansible 2.9 vs 2.11 can be found here.)

It seems like the changes were brought in for two reasons:

  • To provide agility in ansible module development.
  • To provide a lean ansible execution environment to end-users/developers.

    More information will be covered in a separate blog HERE.

3. Ansible Tower is split into smaller bits and utilises containers.

Despite the announcement of NO MORE ANSIBLE TOWER, the detail is that Ansible Tower has been split into two separate components.

As the above shows, Ansible Tower was a single monolithic architecture. This works great; however, when there are multiple organisations/teams with multiple Python virtual environment requirements, it gets complicated really quickly.

To address this, Red Hat has replaced the execution/virtual environments with “Execution Environments”.

The execution environment is a container with various required components.

More information can be found HERE.

The rest of the WebUI/API/RBAC/workflow and audit components are grouped into the “Control Plane”/“Automation Controller”.

4. Red Hat Ansible Automation Platform has a lot more features/components

Red Hat Ansible Automation Platform features/components

The colour-boxed ones are new components and features brought into the Red Hat Ansible Automation Platform.

* Ansible Platform Operator
– Red Hat Ansible Automation Platform is available on the OpenShift Container Platform as an Operator. 

This makes installation and operational tasks, such as upgrades, easy. It also provides high-availability capability automatically.

* Automation Controller & Automation execution Environments

As explained above, these two components replaced the old “Ansible Tower”.

* Ansible-Builder

This is a new component that enables an ansible content creator to build a custom/purpose-fit ansible automation execution environment (a.k.a. ansible EE).

* Ansible-Navigator

This is another new command-line component added to enhance the Ansible content creator experience. With the new ansible EE, ansible-navigator should be used as a replacement for the all too familiar “ansible” and “ansible-playbook”.

It is one tool with which you can run, debug, and even introspect ansible EEs.

This will be covered in another article, HERE.

* REST of components

So until now, Red Hat Ansible had two main components, as #1 suggests. On top of that, there are a few additions:
 * Automation Services Catalog – Service Catalog as a SaaS service on
 * Ansible Content Collections – Red Hat- and partner-co-developed and co-supported ansible content; currently 102 partners as well as Red Hat have contributed content.
 * Automation Hub – a single location where public and private ansible content collections are hosted.
 * Red Hat Insights for Ansible Automation Platform – provides an overall view of an organisation’s Ansible Automation Platform usage as a dashboard.

Definitely, with the new version, the above features have become richer.