
Practice Exam for RHCE 9(EX294)

· 20 min read

Lab Environment

| FQDN | Description | IP Addresses | Roles |
| --- | --- | --- | --- |
| control.lab.example.com | control | 172.25.250.254 | ansible control node |
| classroom.lab.example.com | classroom | 172.25.250.254 | materials |
| content.lab.example.com | content | 172.25.250.254 | YUM repo |
| node1.lab.example.com | node1 | 172.25.250.9 | ansible managed node |
| node2.lab.example.com | node2 | 172.25.250.10 | ansible managed node |
| node3.lab.example.com | node3 | 172.25.250.11 | ansible managed node |
| node4.lab.example.com | node4 | 172.25.250.12 | ansible managed node |
| node5.lab.example.com | node5 | 172.25.250.13 | ansible managed node |
| utility.lab.example.com | utility | 172.25.250.220 | utility |

1. Install and configure Ansible

Install and configure Ansible on the control node control.lab.example.com as follows:

  • Install the required packages.
  • Create a static inventory file called /home/greg/ansible/inventory so that:
    • node1 is a member of the dev host group
    • node2 is a member of the test host group
    • node3 and node4 are members of the prod host group
    • node5 is a member of the balancers host group
    • The prod group is a member of the webservers host group
  • Create a configuration file called /home/greg/ansible/ansible.cfg so that:
    • The host inventory file is /home/greg/ansible/inventory
    • The default roles directory is /home/greg/ansible/roles
    • The default content collections directory is /home/greg/ansible/mycollection
ssh greg@control
sudo dnf -y install ansible-automation-platform-common.noarch ansible-navigator

mkdir -p /home/greg/ansible/roles
mkdir /home/greg/ansible/mycollection
cd ansible/

ansible-config init --disabled > /home/greg/ansible/ansible.cfg
vim ansible.cfg

[defaults]
inventory = /home/greg/ansible/inventory
remote_user = greg
host_key_checking = False
roles_path = /home/greg/ansible/roles:/usr/share/ansible/roles
collections_path = /home/greg/ansible/mycollection:/usr/share/ansible/collections
[privilege_escalation]
become=True

ansible --version
ansible-galaxy list

vim /home/greg/ansible/inventory

[dev]
node1
[test]
node2
[prod]
node3
node4
[balancers]
node5
[webservers:children]
prod

ansible-inventory --graph
ansible all -m ping

During the exam, ansible-navigator is available for use. When using ansible-navigator, make sure to log in to Podman beforehand:

podman login utility.lab.example.com -u admin -p redhat
ansible-navigator images
ansible-navigator collections

2. Create yum repositories

As a system administrator, you will need to install software on the managed nodes.

Create a playbook called /home/greg/ansible/yum_repo.yml that creates the following yum repositories on each of the managed nodes:

ansible-doc -l | grep yum
ansible-doc yum_repository

vim /home/greg/ansible/yum_repo.yml

---
- name: Configure YUM repositories
  hosts: all
  tasks:
    - name: Configure EX294_BASE repository
      yum_repository:
        file: EX294_BASE
        name: EX294_BASE
        description: "EX294 base software"
        baseurl: http://content/rhel9.0/x86_64/dvd/BaseOS
        gpgcheck: yes
        gpgkey: http://content/rhel9.0/x86_64/dvd/RPM-GPG-KEY-redhat-release
        enabled: yes

    - name: Configure EX294_STREAM repository
      yum_repository:
        file: EX294_STREAM
        name: EX294_STREAM
        description: "EX294 stream software"
        baseurl: http://content/rhel9.0/x86_64/dvd/AppStream
        gpgcheck: yes
        gpgkey: http://content/rhel9.0/x86_64/dvd/RPM-GPG-KEY-redhat-release
        enabled: yes
ansible-navigator run yum_repo.yml -m stdout

ansible all -a 'yum repoinfo'
ansible all -a 'yum -y install ftp'
ansible all -a 'rpm -q ftp'
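For reference, the first yum_repository task above should leave a file on each managed node roughly like the following (a sketch; exact key order and spacing may differ):

```ini
# /etc/yum.repos.d/EX294_BASE.repo -- generated by the yum_repository module
# (the module's "name" becomes the section id, "description" becomes name =)
[EX294_BASE]
name = EX294 base software
baseurl = http://content/rhel9.0/x86_64/dvd/BaseOS
enabled = 1
gpgcheck = 1
gpgkey = http://content/rhel9.0/x86_64/dvd/RPM-GPG-KEY-redhat-release
```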

3. Install packages

Create a playbook called /home/greg/ansible/packages.yml that:

  • Installs the php and mariadb packages on hosts in the dev, test and prod host groups
  • Installs the RPM Development Tools package group on hosts in the dev host group
  • Updates all packages to the latest version on hosts in the dev host group
ansible-doc yum
vim /home/greg/ansible/packages.yml
---
- name: Install php and mariadb
  hosts: dev,test,prod
  tasks:
    - name: Install required packages
      yum:
        name:
          - php
          - mariadb
        state: present

- name: Install RPM Development Tools and upgrade packages
  hosts: dev
  tasks:
    - name: Install RPM Development Tools group
      yum:
        name: "@RPM Development Tools"
        state: present

    - name: Upgrade all packages to the latest version
      yum:
        name: "*"
        state: latest
ansible-navigator run packages.yml -m stdout

ansible dev,test,prod -a 'rpm -q php mariadb'
ansible dev -a 'yum grouplist'
ansible dev -a 'yum update'

4. Use a role

Create a playbook called /home/greg/ansible/selinux.yml that:

  • Runs on all managed nodes
  • Uses a selinux role
  • Configures the SELinux policy as targeted
  • Sets the SELinux state to enforcing
yum search role
sudo yum -y install rhel-system-roles
ansible-galaxy list
cp /usr/share/doc/rhel-system-roles/selinux/example-selinux-playbook.yml /home/greg/ansible/selinux.yml
vim selinux.yml
# Enable line numbers and delete unnecessary content (since line numbers may vary depending on the version, verify and delete manually).
:set nu
:43,51d
:11,39d

final content:

---
- hosts: all
  become: true
  become_method: sudo
  become_user: root
  vars:
    # Use "targeted" SELinux policy type
    selinux_policy: targeted
    # Set "enforcing" mode
    selinux_state: enforcing

  # Prepare the prerequisites required for this playbook
  tasks:
    - name: execute the role and catch errors
      block:
        - name: Include selinux role
          include_role:
            name: rhel-system-roles.selinux
      rescue:
        # Fail if failed for a different reason than selinux_reboot_required.
        - name: handle errors
          fail:
            msg: "role failed"
          when: not selinux_reboot_required

        - name: restart managed host
          reboot:

        - name: wait for managed host to come back
          wait_for_connection:
            delay: 10
            timeout: 300

        - name: reapply the role
          include_role:
            name: rhel-system-roles.selinux
# Roles installed via RPM packages should be executed using ansible-playbook, while roles installed as collections should be executed using ansible-navigator.
ansible-playbook selinux.yml

ansible all -m shell -a 'grep ^SELINUX= /etc/selinux/config; getenforce'

node3 | CHANGED | rc=0 >>
SELINUX=enforcing
Enforcing
node2 | CHANGED | rc=0 >>
SELINUX=enforcing
Enforcing
node5 | CHANGED | rc=0 >>
SELINUX=enforcing
Enforcing
node1 | CHANGED | rc=0 >>
SELINUX=enforcing
Enforcing
node4 | CHANGED | rc=0 >>
SELINUX=enforcing
Enforcing

5. Install a Collection

  • Install the following collection artifacts, available from http://classroom/materials/, on the control node:
    • redhat-insights-1.0.7.tar.gz
    • community-general-5.5.0.tar.gz
    • redhat-rhel_system_roles-1.19.3.tar.gz
  • The collections should be installed into the default collections directory /home/greg/ansible/mycollection
vim requirements.yml
---
collections:
  - name: http://classroom/materials/redhat-insights-1.0.7.tar.gz
  - name: http://classroom/materials/community-general-5.5.0.tar.gz
  - name: http://classroom/materials/redhat-rhel_system_roles-1.19.3.tar.gz
ansible-galaxy collection install -r requirements.yml -p /home/greg/ansible/mycollection

ansible-navigator collections
ansible-navigator doc community.general.filesystem -m stdout

6. Install roles using Ansible Galaxy

Use Ansible Galaxy with a requirements file called /home/greg/ansible/roles/requirements.yml to download and install roles to the default /home/greg/ansible/roles from the following URLs:

vim /home/greg/ansible/roles/requirements.yml
---
- src: http://classroom/materials/haproxy.tar
  name: balancer
- src: http://classroom/materials/phpinfo.tar
  name: phpinfo
ansible-galaxy install -r /home/greg/ansible/roles/requirements.yml
ansible-galaxy list

7. Create and use a role

Create a role called apache in /home/greg/ansible/roles with the following requirements:

  • The httpd package is installed, enabled on boot, and started
  • The firewall is enabled and running with a rule to allow access to the web server
  • A template file index.html.j2 exists and is used to create the file /var/www/html/index.html with the following output: Welcome to HOSTNAME on IPADDRESS
    where HOSTNAME is the fully qualified domain name of the managed node and IPADDRESS is the IP address of the managed node.

Create a playbook called /home/greg/ansible/apache.yml that uses this role as follows:

  • The playbook runs on hosts in the webservers host group
ansible-galaxy role init --init-path /home/greg/ansible/roles apache
vim /home/greg/ansible/roles/apache/tasks/main.yml
---
- name: Install Apache
  yum:
    name: httpd
    state: latest

- name: Start and enable Apache service
  systemd:
    name: httpd
    state: started
    enabled: yes

- name: Start and enable firewalld
  systemd:
    name: firewalld
    state: started
    enabled: yes

- name: Configure firewalld to allow HTTP
  firewalld:
    service: http
    permanent: yes
    state: enabled
    immediate: yes

- name: Deploy index.html template
  template:
    src: index.html.j2
    dest: /var/www/html/index.html
vim /home/greg/ansible/roles/apache/templates/index.html.j2
Welcome to {{ ansible_fqdn }} on {{ ansible_default_ipv4.address }}

vim /home/greg/ansible/apache.yml
---
- name: Deploy Apache role
  hosts: webservers
  roles:
    - apache
ansible-navigator run apache.yml -m stdout

ansible webservers -a 'systemctl status httpd'
ansible webservers -a 'firewall-cmd --list-all'
ansible webservers --list-hosts
curl http://node3
curl http://node4

8. Use roles from Ansible Galaxy

Create a playbook called /home/greg/ansible/roles.yml with the following requirements:

  • The playbook contains a play that runs on hosts in the balancers host group and uses the balancer role
    • This role configures a service to load balance web server requests between hosts in the webservers host group.
    • Browsing to hosts in the balancers host group (for example http://172.25.250.13) produces the following output:
      Welcome to node3.lab.example.com on 172.25.250.11
      Reloading the browser produces output from the alternate web server:
      Welcome to node4.lab.example.com on 172.25.250.12
  • The playbook contains a play that runs on hosts in the webservers host group and uses the phpinfo role
    • Browsing to hosts in the webservers host group with the URL /hello.php produces the following output:
      Hello PHP World from FQDN
      where FQDN is the fully qualified domain name of the host. For example, browsing to http://172.25.250.11/hello.php produces the following output:
      Hello PHP World from node3.lab.example.com
      along with various details of the PHP configuration including the version of PHP that is installed.
      Similarly, browsing to http://172.25.250.12/hello.php produces the following output:
      Hello PHP World from node4.lab.example.com
      along with various details of the PHP configuration including the version of PHP that is installed
vim /home/greg/ansible/roles.yml
---
- name: Use phpinfo role
hosts: webservers
roles:
- phpinfo

- name: Use balancer role
hosts: balancers
roles:
- balancer
ansible-navigator run /home/greg/ansible/roles.yml -m stdout

curl http://172.25.250.13
curl http://node3/hello.php
curl http://node4/hello.php

9. Create and use a logical volume

Create a playbook called /home/greg/ansible/lv.yml that runs on all managed nodes and does the following:

  • Creates a logical volume with these requirements:
    • The logical volume is created in the research volume group
    • The logical volume name is data
    • The logical volume size is 1500 MiB
  • Formats the logical volume with the ext4 filesystem
  • If the requested logical volume size cannot be created, the error message 'Could not create logical volume of that size' should be displayed and the size 800 MiB should be used instead
  • If the volume group research does not exist, the error message 'Volume group does not exist' should be displayed
  • Does NOT mount the logical volume in any way
ansible-doc community.general.lvol
ansible-doc community.general.filesystem
ansible-doc debug
ansible-doc stat
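The error handling required here maps onto Ansible's block/rescue/when keywords. As a plain-Python analogy of the intended control flow (the helper, sizes, and inputs are hypothetical stand-ins for community.general.lvol and the gathered facts):

```python
# Plain-Python analogy of the block/rescue/when flow for the lv.yml task.
def create_lv(vg_free_mib, size_mib):
    """Hypothetical stand-in for community.general.lvol."""
    if size_mib > vg_free_mib:
        raise RuntimeError("insufficient free space")
    return size_mib

def provision(vg_exists, vg_free_mib, messages):
    if not vg_exists:                 # when: research VG is not defined
        messages.append("Volume group does not exist")
        return None
    try:                              # block: attempt the requested size
        return create_lv(vg_free_mib, 1500)
    except RuntimeError:              # rescue: report, then fall back to 800 MiB
        messages.append("Could not create logical volume of that size")
        return create_lv(vg_free_mib, 800)

msgs = []
size = provision(True, 1000, msgs)   # like node3: VG exists but is too small
provision(False, 0, msgs)            # like node1: VG missing
```
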

vim /home/greg/ansible/lv.yml
---
- name: Create LVM
  hosts: all
  tasks:
    - block:
        - name: lv 1500M
          community.general.lvol:
            vg: research
            lv: data
            size: 1500
        - name: Create ext4
          community.general.filesystem:
            fstype: ext4
            dev: /dev/research/data
      rescue:
        - name: Could not create lvm
          ansible.builtin.debug:
            msg: Could not create logical volume of that size
        - name: lv 800M
          community.general.lvol:
            vg: research
            lv: data
            size: 800
        - name: Create ext4
          community.general.filesystem:
            fstype: ext4
            dev: /dev/research/data
      when: ansible_lvm.vgs.research is defined
    - debug:
        msg: Volume group does not exist
      when: ansible_lvm.vgs.research is not defined
ansible-navigator run /home/greg/ansible/lv.yml -m stdout

# check the execution process to ensure it proceeds as expected.
PLAY [Create LVM] ***************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************************************************************************************************************************
ok: [node3]
ok: [node5]
ok: [node4]
ok: [node2]
ok: [node1]

TASK [lv 1500M] *****************************************************************************************************************************************************************************************************************************************************************
skipping: [node1]
fatal: [node3]: FAILED! => {"changed": false, "err": " Volume group \"research\" has insufficient free space (31 extents): 47 required.\n", "msg": "Creating logical volume 'data' failed", "rc": 5}
changed: [node2]
changed: [node5]
changed: [node4]

TASK [Create ext4] **************************************************************************************************************************************************************************************************************************************************************
skipping: [node1]
changed: [node5]
changed: [node4]
changed: [node2]

TASK [Could not create lvm] *****************************************************************************************************************************************************************************************************************************************************
ok: [node3] => {
"msg": "Could not create logical volume of that size"
}

TASK [lv 800M] ******************************************************************************************************************************************************************************************************************************************************************
changed: [node3]

TASK [Create ext4] **************************************************************************************************************************************************************************************************************************************************************
changed: [node3]

TASK [debug] ********************************************************************************************************************************************************************************************************************************************************************
skipping: [node2]
skipping: [node5]
ok: [node1] => {
"msg": "Volume group does not exist"
}
skipping: [node3]
skipping: [node4]

PLAY RECAP **********************************************************************************************************************************************************************************************************************************************************************
node1 : ok=2 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
node2 : ok=3 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
node3 : ok=4 changed=2 unreachable=0 failed=0 skipped=1 rescued=1 ignored=0
node4 : ok=3 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
node5 : ok=3 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0

# Check LVs:
# - node3: the 800M fallback LV was created due to insufficient VG free space
# - node1: no LV was created because the research VG does not exist
ansible all -m shell -a 'lvs'

node3 | CHANGED | rc=0 >>
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data research -wi-a----- 800.00m
node5 | CHANGED | rc=0 >>
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data research -wi-a----- <1.47g
node2 | CHANGED | rc=0 >>
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data research -wi-a----- <1.47g
node4 | CHANGED | rc=0 >>
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data research -wi-a----- <1.47g
node1 | CHANGED | rc=0 >>

# check fstype
ansible all -m shell -a 'blkid /dev/research/data'

node2 | CHANGED | rc=0 >>
/dev/research/data: UUID="83229b4c-dcba-4dcb-aab3-fe8601d3c75a" BLOCK_SIZE="4096" TYPE="ext4"
node3 | CHANGED | rc=0 >>
/dev/research/data: UUID="cdd60647-19a7-48e7-969f-9bd685fcc718" BLOCK_SIZE="4096" TYPE="ext4"
node5 | CHANGED | rc=0 >>
/dev/research/data: UUID="1ee8f698-7b79-410f-bc48-0a1ab781a542" BLOCK_SIZE="4096" TYPE="ext4"
node4 | CHANGED | rc=0 >>
/dev/research/data: UUID="71b9cf21-efae-4926-aef2-80ef3c74b8d2" BLOCK_SIZE="4096" TYPE="ext4"
node1 | FAILED | rc=2 >>
non-zero return code

10. Generate a hosts file

  • Download an initial template file from http://classroom/materials/hosts.j2 to /home/greg/ansible
  • Complete the template so that it can be used to generate a file with a line for each inventory host in the same format as /etc/hosts
  • Download the file from http://classroom/materials/hosts.yml to /home/greg/ansible. This playbook will use the template to generate the file /etc/myhosts on hosts in the dev host group.

Do not make any changes to the playbook.

When the playbook is run, the file /etc/myhosts on hosts in the dev host group should have a line for each managed host:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.250.9 node1.lab.example.com node1
172.25.250.10 node2.lab.example.com node2
172.25.250.11 node3.lab.example.com node3
172.25.250.12 node4.lab.example.com node4
172.25.250.13 node5.lab.example.com node5

NOTE: The order in which the inventory host names appear is not important.

wget http://classroom/materials/hosts.j2
vim hosts.j2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
{% for i in groups.all %}
{{ hostvars[i].ansible_facts.default_ipv4.address }} {{ hostvars[i].ansible_facts.fqdn }} {{ i }}
{% endfor %}
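The for loop above can be sanity-checked by simulating it in plain Python. The fact data below is hard-coded from the lab table purely for illustration; at run time Ansible populates hostvars from gathered facts:

```python
# Simulating the hosts.j2 loop with two sample hosts from the lab inventory.
groups = {"all": ["node1", "node2"]}
hostvars = {
    "node1": {"ansible_facts": {"default_ipv4": {"address": "172.25.250.9"},
                                "fqdn": "node1.lab.example.com"}},
    "node2": {"ansible_facts": {"default_ipv4": {"address": "172.25.250.10"},
                                "fqdn": "node2.lab.example.com"}},
}
# One "IP FQDN shortname" line per inventory host, as in /etc/hosts.
lines = [
    "{} {} {}".format(
        hostvars[i]["ansible_facts"]["default_ipv4"]["address"],
        hostvars[i]["ansible_facts"]["fqdn"],
        i,
    )
    for i in groups["all"]
]
```
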
ansible-navigator run hosts.yml -m stdout

ansible dev -m shell -a 'cat /etc/myhosts'

node1 | CHANGED | rc=0 >>
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.250.9 node1.lab.example.com node1
172.25.250.10 node2.lab.example.com node2
172.25.250.13 node5.lab.example.com node5
172.25.250.11 node3.lab.example.com node3
172.25.250.12 node4.lab.example.com node4

11. Modify file content

Create a playbook called /home/greg/ansible/issue.yml as follows:

  • The playbook runs on all inventory hosts
  • The playbook replaces the contents of /etc/issue with a single line of text as follows:
    • On hosts in the dev hosts group, the line reads: 'Development'
    • On hosts in the test host group, the line reads: 'Test'
    • On hosts in the prod host group, the line reads: 'Production'
ansible-doc copy
ansible-doc stat

vim /home/greg/ansible/issue.yml
---
- name: Modify /etc/issue file content
  hosts: all
  tasks:
    - name: Update content for dev
      ansible.builtin.copy:
        content: 'Development'
        dest: /etc/issue
      when: inventory_hostname in groups.dev

    - name: Update content for test
      ansible.builtin.copy:
        content: 'Test'
        dest: /etc/issue
      when: inventory_hostname in groups.test

    - name: Update content for prod
      ansible.builtin.copy:
        content: 'Production'
        dest: /etc/issue
      when: inventory_hostname in groups.prod
ansible-navigator run issue.yml -m stdout

ansible dev -a 'cat /etc/issue'
ansible test -a 'cat /etc/issue'
ansible prod -a 'cat /etc/issue'
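The three conditional copy tasks implement a simple group-to-banner mapping. A Python sketch of that logic (group membership hard-coded from the lab inventory for illustration):

```python
# Group -> /etc/issue banner mapping used by the issue.yml tasks above.
banner_by_group = {"dev": "Development", "test": "Test", "prod": "Production"}
host_group = {"node1": "dev", "node2": "test", "node3": "prod", "node4": "prod"}

def issue_content(host):
    """Return the single line of text a host's /etc/issue should contain."""
    return banner_by_group[host_group[host]]
```
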

12. Create a web content directory

Create a playbook called /home/greg/ansible/webcontent.yml as follows:

  • The playbook runs on managed nodes in the dev host group
  • Create the directory /webdev with the following requirements:
    • It is owned by the webdev group
    • It has regular permissions: owner=read+write+execute, group=read+write+execute, other=read+execute
    • It has special permissions: set group ID
  • Symbolically link /var/www/html/webdev to /webdev
  • Create the file /webdev/index.html with a single line of text that reads: 'Development'
  • Browsing this directory on hosts in the dev host group (for example http://172.25.250.9/webdev/) produces the following output: 'Development'
ansible-doc file
ansible-doc copy

ansible dev -a 'ls -ldZ /var/www/html'

vim /home/greg/ansible/webcontent.yml
---
- name: Create Web Content Directory
  hosts: dev
  tasks:
    - name: Create /webdev directory
      ansible.builtin.file:
        path: /webdev
        state: directory
        group: webdev
        mode: '2775'

    - name: Create symbolic link for /webdev
      ansible.builtin.file:
        src: /webdev
        dest: /var/www/html/webdev
        state: link

    - name: Create /webdev/index.html file
      ansible.builtin.copy:
        content: |
          Development
        dest: /webdev/index.html
        setype: httpd_sys_content_t
ansible-navigator run webcontent.yml -m stdout

curl http://172.25.250.9/webdev/
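The mode '2775' used above combines the setgid bit (2) with rwxrwxr-x (775); with setgid on a directory, files created inside it inherit the webdev group. This can be double-checked with Python's stat module (the directory type bit is added only for display purposes):

```python
import stat

# mode: '2775' from the webcontent.yml task above.
mode = 0o2775
assert mode & stat.S_ISGID  # set-group-ID bit is present

# Render the symbolic form as ls -ld would show it for a directory.
symbolic = stat.filemode(stat.S_IFDIR | mode)  # "drwxrwsr-x"
```
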

13. Generate a hardware report

Create a playbook called /home/greg/ansible/hwreport.yml that produces an output file called /root/hwreport.txt on all managed nodes with the following information:

  • Inventory host name
  • Total memory in MB
  • BIOS version
  • Size of disk device vda
  • Size of disk device vdb
  • Each line of the output file contains a single key = value pair.

Your Playbook should:

  • Download the file from http://classroom/materials/hwreport.empty and save it as /root/hwreport.txt
  • Modify /root/hwreport.txt with the correct values
  • If a hardware item does not exist, the associated value should be set to NONE
ansible all -m setup | grep mem
ansible all -m setup | grep bios
ansible all -m setup -a 'filter=*device*'

curl http://classroom/materials/hwreport.empty

vim /home/greg/ansible/hwreport.yml
---
- name: Generate hardware report
  hosts: all
  tasks:
    - name: Download empty report template
      ansible.builtin.get_url:
        url: http://classroom/materials/hwreport.empty
        dest: /root/hwreport.txt

    - name: Add hostname to the report
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^HOST='
        line: "HOST={{ inventory_hostname }}"

    - name: Add memory size to the report
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^MEMORY='
        line: "MEMORY={{ ansible_memtotal_mb | default('NONE', true) }}"

    - name: Add BIOS version to the report
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^BIOS='
        line: "BIOS={{ ansible_bios_version | default('NONE', true) }}"

    - name: Add vda disk size to the report
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^DISK_SIZE_VDA='
        line: "DISK_SIZE_VDA={{ ansible_devices.vda.size | default('NONE', true) }}"

    - name: Add vdb disk size to the report
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^DISK_SIZE_VDB='
        line: "DISK_SIZE_VDB={{ ansible_devices.vdb.size | default('NONE', true) }}"
ansible-navigator run hwreport.yml -m stdout

ansible all -a 'cat /root/hwreport.txt'

node4 | CHANGED | rc=0 >>
# Hardware report
HOST=node4
MEMORY=960
BIOS=1.15.0-1.el9
DISK_SIZE_VDA=10.00 GB
DISK_SIZE_VDB=1.00 GB
node2 | CHANGED | rc=0 >>
# Hardware report
HOST=node2
MEMORY=960
BIOS=1.15.0-1.el9
DISK_SIZE_VDA=10.00 GB
DISK_SIZE_VDB=1.00 GB
node5 | CHANGED | rc=0 >>
# Hardware report
HOST=node5
MEMORY=960
BIOS=1.15.0-1.el9
DISK_SIZE_VDA=10.00 GB
DISK_SIZE_VDB=1.00 GB
node3 | CHANGED | rc=0 >>
# Hardware report
HOST=node3
MEMORY=960
BIOS=1.15.0-1.el9
DISK_SIZE_VDA=10.00 GB
DISK_SIZE_VDB=1.00 GB
node1 | CHANGED | rc=0 >>
# Hardware report
HOST=node1
MEMORY=5668
BIOS=1.15.0-1.el9
DISK_SIZE_VDA=20.00 GB
DISK_SIZE_VDB=NONE
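The default('NONE', true) filter used in hwreport.yml is what turns the missing vdb device on node1 into NONE: with the second argument set to true, Jinja2's default falls back not only for undefined values but also for empty or false ones. A small Python model of that behavior (None stands in for Jinja2's "undefined"):

```python
# Model of Jinja2's default(fallback, boolean) filter semantics.
def jinja_default(value, fallback, boolean=False):
    if value is None:          # undefined variable -> always use the fallback
        return fallback
    if boolean and not value:  # boolean=True also replaces empty/false values
        return fallback
    return value

vda = jinja_default("10.00 GB", "NONE", True)  # present fact passes through
vdb = jinja_default(None, "NONE", True)        # missing device -> NONE
```
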

14. Create a password vault

Create an Ansible vault to store user passwords as follows:

  • The name of the vault is /home/greg/ansible/locker.yml
  • The vault contains two variables with names:
    • pw_developer with value Imadev
    • pw_manager with value Imamgr
    • The password to encrypt and decrypt the vault is whenyouwishuponastar
  • The password is stored in the file /home/greg/ansible/secret.txt
echo "whenyouwishuponastar" > /home/greg/ansible/secret.txt

vim ansible.cfg
vault_password_file=/home/greg/ansible/secret.txt

ansible-vault create /home/greg/ansible/locker.yml
---
pw_developer: Imadev
pw_manager: Imamgr

cat /home/greg/ansible/locker.yml

$ANSIBLE_VAULT;1.1;AES256
32316462663839316261653164376664376432313863333238383462396230663138323362363132
3361363734323065373531343431303234616232333135380a396530626436383566356337633966
64393365623237303333373037366461646638376164376130613637646434383537383636336265
3061666131656238320a303337366163633337313533376632646631316434323765326135396562
32393031383338386533643865653965366264653034633132396666666331663064626337333734
6136653065306631643466356531393031666339346165316637

15. Create user accounts

  • Download a list of users to be created from http://classroom/materials/user_list.yml and save it to /home/greg/ansible
  • Using the password vault /home/greg/ansible/locker.yml created elsewhere in this exam, create a playbook called /home/greg/ansible/users.yml that creates user accounts as follows:
    • Users with a job description of developer should be:
      • created on managed nodes in the dev and test host groups
      • assigned the password from the pw_developer variable, with passwords that expire after 30 days
      • a member of the supplementary group devops
    • Users with a job description of manager should be:
      • created on managed nodes in the prod host group
      • assigned the password from the pw_manager variable, with passwords that expire after 30 days
      • a member of the supplementary group opsmgr
  • Passwords should use the SHA512 hash format
  • Your playbook should work using the vault password file /home/greg/ansible/secret.txt created elsewhere in this exam.
wget http://classroom/materials/user_list.yml
cat user_list.yml

vim /home/greg/ansible/users.yml
---
- name: Create User1
  hosts: dev,test
  vars_files:
    - /home/greg/ansible/locker.yml
    - /home/greg/ansible/user_list.yml
  tasks:
    - name: Add group1
      group:
        name: devops
        state: present

    - name: Add user1
      user:
        name: "{{ item.name }}"
        groups: devops
        password: "{{ pw_developer | password_hash('sha512') }}"
        password_expire_max: "{{ item.password_expire_max }}"
      loop: "{{ users }}"
      when: item.job == 'developer'

- name: Create User2
  hosts: prod
  vars_files:
    - /home/greg/ansible/locker.yml
    - /home/greg/ansible/user_list.yml
  tasks:
    - name: Add group2
      group:
        name: opsmgr
        state: present

    - name: Add user2
      user:
        name: "{{ item.name }}"
        groups: opsmgr
        password: "{{ pw_manager | password_hash('sha512') }}"
        password_expire_max: "{{ item.password_expire_max }}"
      loop: "{{ users }}"
      when: item.job == 'manager'
ansible-navigator run users.yml -m stdout

PLAY [Create User1] *************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************************************************************************************************************************
ok: [node2]
ok: [node1]

TASK [Add group1] ***************************************************************************************************************************************************************************************************************************************************************
ok: [node2]
ok: [node1]

TASK [Add user1] ****************************************************************************************************************************************************************************************************************************************************************
changed: [node2] => (item={'name': 'bob', 'job': 'developer', 'password_expire_max': 10, 'uid': 3000})
skipping: [node2] => (item={'name': 'sally', 'job': 'manager', 'password_expire_max': 20, 'uid': 3001})
changed: [node1] => (item={'name': 'bob', 'job': 'developer', 'password_expire_max': 10, 'uid': 3000})
skipping: [node1] => (item={'name': 'sally', 'job': 'manager', 'password_expire_max': 20, 'uid': 3001})
changed: [node2] => (item={'name': 'fred', 'job': 'developer', 'password_expire_max': 30, 'uid': 3002})
changed: [node1] => (item={'name': 'fred', 'job': 'developer', 'password_expire_max': 30, 'uid': 3002})

PLAY [Create User2] *************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************************************************************************************************************************
ok: [node3]
ok: [node4]

TASK [Add group2] ***************************************************************************************************************************************************************************************************************************************************************
changed: [node3]
changed: [node4]

TASK [Add user2] ****************************************************************************************************************************************************************************************************************************************************************
skipping: [node3] => (item={'name': 'bob', 'job': 'developer', 'password_expire_max': 10, 'uid': 3000})
skipping: [node4] => (item={'name': 'bob', 'job': 'developer', 'password_expire_max': 10, 'uid': 3000})
changed: [node4] => (item={'name': 'sally', 'job': 'manager', 'password_expire_max': 20, 'uid': 3001})
skipping: [node4] => (item={'name': 'fred', 'job': 'developer', 'password_expire_max': 30, 'uid': 3002})
changed: [node3] => (item={'name': 'sally', 'job': 'manager', 'password_expire_max': 20, 'uid': 3001})
skipping: [node3] => (item={'name': 'fred', 'job': 'developer', 'password_expire_max': 30, 'uid': 3002})

PLAY RECAP **********************************************************************************************************************************************************************************************************************************************************************
node1 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node2 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node3 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node4 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansible dev,test -m shell -a 'id bob; id fred'
ansible prod -m shell -a 'id sally'
ssh bob@node1
bob@node1's password: Imadev
ssh sally@node3
sally@node3's password: Imamgr

16. Rekey an Ansible vault

Rekey an existing ansible vault as follows:

  • Download the Ansible vault from http://classroom/materials/salaries.yml to /home/greg/ansible
  • The current vault password is insecure4sure
  • The new vault password is bbe2de98389b
  • The vault remains in an encrypted state with the new password
wget http://classroom/materials/salaries.yml

ansible-vault rekey --ask-vault-pass /home/greg/ansible/salaries.yml

Vault password: insecure4sure
New Vault password: bbe2de98389b
Confirm New Vault password: bbe2de98389b
Rekey successful

ansible-vault view --ask-vault-pass salaries.yml

Vault password: bbe2de98389b
haha

17. Configure a cron job

Create a playbook called /home/greg/ansible/cron.yml that runs on hosts in the dev host group and creates a cron job for user natasha as follows:

  • The user natasha must configure a cron job that runs every 2 minutes and executes logger "EX294 in progress"
ansible-doc cron

vim /home/greg/ansible/cron.yml
---
- name: cron
  hosts: dev
  tasks:
    - name: cron job
      cron:
        name: "cron job1"
        minute: "*/2"
        job: 'logger "EX294 in progress"'
        user: natasha
ansible-navigator run cron.yml -m stdout
ansible dev -a 'crontab -l -u natasha'