Using Ansible Automation Platform to maintain a directory | Cloud Resolve LLC

Maintaining an up-to-date directory listing of files and folders can be a daunting task, especially when you have to keep track of permissions, ownership, and changes. Automating the process with Ansible can make it simpler and less error-prone. In this post, we'll show you how to use Ansible to maintain a directory on a CentOS 7 server. We'll also discuss some benefits of using Ansible for this purpose. Stay tuned!

What is Ansible and what are its key features?

Ansible is an open-source software provisioning, configuration management, and application-deployment tool. It runs on many Unix-like systems, and can configure both Unix-like systems as well as Microsoft Windows. Ansible was written by Michael DeHaan and acquired by Red Hat in 2015.

Ansible is agentless, which means that it does not need to install any software on the nodes that it manages. Ansible uses SSH for communication with managed hosts and does not require a centralized management server. It also has a low overhead because it runs in a push mode, meaning that it will only send the necessary commands to the hosts.

Ansible is idempotent, which means that it can run multiple times on the same host without changing the system beyond the desired state. This is because Ansible checks the current state of the system first and only makes changes where that state differs from the desired one.
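For example, here is a minimal sketch (assuming a hypothetical inventory host named myhost) showing the same ad-hoc file task run twice; the first run creates the directory and reports "changed": true, while the second finds it already present and reports "changed": false:

ansible myhost -m file -a "path=/tmp/demo state=directory"
ansible myhost -m file -a "path=/tmp/demo state=directory"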

How can Ansible be used to maintain a directory?

Ansible can be used to maintain a directory by ensuring that the files and folders are present, have the correct permissions, and are owned by the correct user. Ansible can also be used to monitor changes to files and directories, and to revert to previous versions if necessary.

What are the benefits of using Ansible for directory maintenance?

Some benefits of using Ansible for directory maintenance include:

– Reduced chance of human error: Since Ansible automates the process, there is less chance of human error compared with performing the same tasks manually.

– Idempotency: As mentioned earlier, Ansible is idempotent, meaning it can run multiple times on the same host without changing the system beyond the desired state. This is important when maintaining a directory because it ensures the changes you make will be consistent across all hosts.

– Agentless: Ansible does not require any agent software to be installed on the nodes it manages, which means it has a lower overhead.

How can you get started with Ansible for directory maintenance?

If you want to get started with Ansible for directory maintenance, we recommend checking out the Ansible documentation. The documentation is comprehensive and will give you all the information you need to get started. You can also find many resources and tutorials online. Once you have a basic understanding of how Ansible works, you can start writing your own playbooks.

In this example we will maintain a directory on a fictional server called grumpy, where we need to keep some dated but useful bash scripts used for monitoring. While we fully understand that leaving these kinds of things sitting around in production is bad practice in general, we also know that change is hard.

First, a few assumptions: Ansible is already installed on your client, and our remote server allows SSH connectivity using only the key. This means your public key must be present in the remote server's authorized_keys file. Additionally, you will need a text editor like vim or nano installed on your client machine. While this guide will not go into the configuration of individual editors, I will list the plugins used to configure my Vim editor.

Vim Plugins:

indentLine

vim-yaml-folds

ALE

Finally, I highly recommend installing yamllint to validate your YAML files.
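Once installed, you can lint any playbook before running it. A quick sketch, assuming a placeholder playbook filename (yamllint is available via pip or most distribution package managers):

pip install yamllint
yamllint playbook.yml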

With these dependencies out of the way, let’s get started!

Example using Ansible playbook to maintain a directory on a remote server

The first thing we need to do is create our YAML file in our project directory. Since playbooks should ideally be self-documenting, I tend to use filenames that make identification easy for me and for anyone else who will work on the same files later.

My chosen format for this example is as follows:

Grumpy_UAT_orauat_MonitorCopy.yml

The name is mostly irrelevant, but I like to add a little extra information in there. The important thing is the .yml extension, as this is what Ansible will be looking for. Keep in mind this will not be a detailed document, but it should provide an example of the most basic requirements for a playbook.

The next thing we need to do is open our new file in our text editor and start populating it with some content. The first line is the YAML document start marker (---), followed by the play header, which names the play, selects the target hosts, and declares our variables. For our purposes, it will look something like this:


---
- name: Testing file copy to grumpy     # Should be run with "-u mariadb"
  become: false
  hosts: grumpy
  vars:
    src_path: /home/delliott/ansible/scripts/grumpy/mariadb/
    dest_path: /home/mariadb/.crllc/monitor/
    valid_path: /home/mariadb/.crllc/monitor/logs
    owner: mariadb
    group: dba

Inside this file, we notice that hosts is defined. This tells Ansible which host or group to run against. Next we notice the vars section containing variables such as src_path, dest_path and valid_path. These define the source, the destination, and the directory we validate for our file copy. The last two entries in the vars section are owner and group. These variables will be used when we need to set the ownership of the files being copied over.
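For reference, here is a minimal inventory sketch that would make the grumpy host resolvable; the FQDN below matches the play recap shown later in this post, so adjust it for your environment:

[grumpy]
grumpy.cloud-resolve.com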

The next section is where we get into the 'meat' of our playbook: the tasks section. Here we define the name of each task and the actions it takes.

The first task is a simple directory check. It verifies whether the directory exists at the defined location on the remote server. If it does not exist, we want Ansible to create it for us using the parameters listed in the vars section above. To do this, we use the 'file' module and set the state to 'directory', along with the path, owner, group and mode for the permissions.


  tasks:
    - name: Creates the installation directory if it does not exist
      file:
        path: '{{ valid_path }}'
        owner: '{{ owner }}'
        group: '{{ group }}'
        mode: '0755'
        state: directory

With our directory task in place, we can now copy the files from their local directory over to the remote server. This is done using Ansible's 'copy' module, with the backup option enabled so that we don't accidentally overwrite any existing data. We also specify where these copies should go using the previously configured dest_path from vars.


    - name: Copies the scripts
      copy:
        src: '{{ src_path }}'
        dest: '{{ dest_path }}'
        owner: '{{ owner }}'
        group: '{{ group }}'
        backup: true
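The play recap below also shows a third task named 'Just a dummy example'. Reconstructed from its output, it is a simple debug task appended to the end of the tasks list:

    - name: Just a dummy example
      debug:
        msg: "I'm a dummy task"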

To copy any files you wish to maintain, simply place them in the src_path directory and run ansible-playbook using the below example.

Example: ansible-playbook Grumpy_UAT_orauat_MonitorCopy.yml -u mariadb
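Optionally, you can preview what a run would change without touching the remote server by adding Ansible's built-in check mode flag:

ansible-playbook Grumpy_UAT_orauat_MonitorCopy.yml -u mariadb --check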

You will see a summary screen of the actions taken.


PLAY [Testing file copy to grumpy] *************************************************************************************

TASK [Gathering Facts] *************************************************************************************************
ok: [grumpy.cloud-resolve.com]

TASK [Creates the installation directory if it does not exist] *********************************************************
ok: [grumpy.cloud-resolve.com]

TASK [Copies the scripts] **********************************************************************************************
ok: [grumpy.cloud-resolve.com]

TASK [Just a dummy example] ********************************************************************************************
ok: [grumpy.cloud-resolve.com] => {
    "msg": "I'm a dummy task"
}

PLAY RECAP *************************************************************************************************************
grumpy.cloud-resolve.com : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

As mentioned, this was not meant to be an all-inclusive tutorial but an overview of the capabilities of Ansible. I hope this provided some clarity into how you can use it to maintain files and directories. Stay tuned for more tips and tricks! Thanks for reading! – Cloud Resolve LLC.

Finally, I must give credit squarely to the following site for providing the blueprint to make Vim YAML-capable:

https://www.arthurkoziel.com/setting-up-vim-for-yaml/

How to enable Amazon SSM for Group access and EC2 management | Cloud Resolve LLC

If you're running a group of Amazon EC2 instances, or just a single instance, you may want to take advantage of Amazon SSM (Systems Manager). This service can help automate the management of your instances, and it's available at no extra cost to all AWS customers. In this article, I'll show you how to enable Amazon SSM for your group or individual EC2 instances. Let's get started!

Login

Log in to your AWS account

Access AWS IAM

Proceed to Services -> All Services.

Alternatively, just type 'IAM' in the search field, then hit Enter.

Click on 'Access management' within the Identity and Access Management (IAM) menu.
From there, click on Policies to view a list of all available policies.

Create a Group that allows Amazon Systems Manager access

Select Access management -> Policies

You should now see a list of policies. Instead of parsing this entire list, we can search for the required policy.
Enter the policy name 'AmazonSSMManagedInstanceCore' in the search field, then hit Enter.

Select the 'AmazonSSMManagedInstanceCore' policy in the search results.

Located on the right side of your screen, you will notice a button labeled 'Actions'.
Select the 'Actions' button and a drop-down list appears. From the drop-down list, select the 'Attach' option.

Select Actions -> Attach

The previous 'Attach' action will open a new page allowing you to attach your policy to users, groups, or roles in your account. From this list, select the group or user you wish to attach your policy to.
Note: best practice suggests that policies should be attached to groups instead of individual users.

From the list, select the group or user of your choice.

Once you have made your selection, select the "Attach Policy" button.

You will then see a message stating the policy attachment was successful.

You can now use this group to control who has access to manage Amazon Systems Manager.
Place IAM users into this newly created group. They will have access to AWS Systems Manager unless explicitly denied in another policy.
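If you prefer the command line, the same attachment can be performed with the AWS CLI. A sketch, assuming a hypothetical group named SSMAdmins:

aws iam attach-group-policy \
    --group-name SSMAdmins \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore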


Create a Role for your EC2 instances

If you are not already there, return to the Identity and Access Management (IAM) screen.
This time we will select the 'Roles' option from the Access management menu.

Select Access management -> Roles

Located on the right side of your screen, you will notice a button labeled 'Create role'.

Select the Create role button. A new page will load with the following options:

  1. AWS Service
  2. AWS Account
  3. Web Identity
  4. SAML 2.0 federation
  5. Custom trust policy

The next step requires selecting the appropriate trusted entity type. For our purposes, we will choose 'AWS service', then select 'EC2' from the list of options in the 'Common use cases' section.

Select AWS service, then EC2 from the Use case, Common use cases section.

Select the Next button located on the bottom-right side of your screen to apply the selections.

After the previous step, a new "Add permissions" page loads.
From the list of permission policies, search for the policy named "AmazonSSMManagedInstanceCore".

Once found, select the "AmazonSSMManagedInstanceCore" policy.

Select the Next button located on the right side of your screen to apply the selections.

Supply a name for the new Role in the Role details, Role name field.
Note: The JSON content in 'Select trusted entities' should not be modified.
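For reference, the trusted-entity JSON for an EC2 service role generally looks like the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}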


Find the "Create role" button in the lower-right portion of the page.
Select Create role to apply your changes.

The new Role is now created and may be added to your EC2 instances for management in AWS Systems Manager.
From the EC2 console, select Actions -> Security -> Modify IAM role.

From the "Modify IAM role" page, select your IAM role, then select Save.

If the Amazon SSM Agent has been properly configured, instances will be accessible from AWS Systems Manager.
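The role attachment can also be scripted with the AWS CLI. A sketch, using placeholder instance and role names:

aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=MySSMRole

Once the SSM Agent checks in, the instance should appear in the output of:

aws ssm describe-instance-information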

Set up a one-way SFTP connection | Cloud Resolve

The following instructions show how to set up a one-way SFTP (secure FTP) connection from your computer. The process is simple and only takes about three minutes!

Actions performed on the Local Host

1. Select the user account that will initiate the connection, then create the ssh key
       a. ssh-keygen
       b. When prompted, enter a filename such as /home/$USER/.ssh/sftpTransfer

2. Two files will be created in the following locations
      a. /home/$USER/.ssh/sftpTransfer
      b. /home/$USER/.ssh/sftpTransfer.pub

3. Copy the public key to the recipient account on the remote host that will receive the connection
      a. /home/$USER/.ssh/sftpTransfer.pub

Actions performed on the Remote Host

Please ensure you have already received the public key from the client machine

4. Create the .ssh directory and authorized_keys file on the remote server. (this server will receive the connection)
      a. mkdir -p /home/$USER/.ssh
      b. touch /home/$USER/.ssh/authorized_keys

5. Create a backup, then append the contents of the public key to authorized_keys.
      a. cp /home/$USER/.ssh/authorized_keys /home/$USER/.ssh/authorized_keys.ORG
      b. cat /home/$USER/.ssh/sftpTransfer.pub >> /home/$USER/.ssh/authorized_keys

6. Secure the authorized_keys file
      a. chmod 400 /home/$USER/.ssh/authorized_keys

TEST the connection

1. Attempt to establish a connection using the following command
      a. sftp -o IdentityFile=/home/$USER/.ssh/sftpTransfer user@remotehostname
      b. Once the connection is established, you should be able to securely browse files and directories.
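For unattended transfers, sftp also supports a batch mode. A sketch, where transfer.batch is a hypothetical file containing one sftp command per line (e.g. get or put statements):

sftp -o IdentityFile=/home/$USER/.ssh/sftpTransfer -b transfer.batch user@remotehostname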

How do I create a Pre-signed S3 URL? | Cloud Resolve LLC

In this article, we show you how to generate a pre-signed URL using the AWS CLI. This is an important step in providing secure access to your S3 bucket contents.

Use the AWS CLI to get a list of your buckets:

aws s3 ls

Locate the correct bucket and list its contents:

aws s3 ls s3://bucketname

Find the correct object within that bucket:

aws s3 ls s3://bucketname/objectname

Share that object using the presign command, giving a timeout in seconds (e.g. --expires-in 60):

aws s3 presign s3://bucketname/objectname --expires-in 60

Share the generated URL with your client.
Note: you may want to shorten the URL using a service such as tinyurl or bit.ly.
Also note that the object must be downloaded within the specified time; after that, the URL expires and access is denied.
The AWS CLI reference document will provide more information for those who are interested in learning more about this process.
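For reference, a generated presigned URL will resemble the following (hypothetical bucket, object, and region; the exact query parameters vary by CLI version and signature settings):

https://bucketname.s3.us-west-2.amazonaws.com/objectname?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Date=20220101T000000Z&X-Amz-Expires=60&X-Amz-SignedHeaders=host&X-Amz-Signature=...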

AWS CLI Installation Linux | Cloud Resolve LLC

The AWS CLI client is a tool that makes it easy to work with AWS services using the command line. It allows you to automate tasks, create output in different formats, and much more!

You will find information about downloading, installing, configuring and using the AWS Command Line Interface in this article.

  • To extract the AWS CLI package, you need to use either a built-in unzip command on your operating system or an equivalent option. Check for the presence of unzip using the following command:

    $ which -a unzip
    /usr/bin/unzip
    /bin/unzip

  • The AWS CLI uses a combination of glibc, groff and less. These software packages are included by default in most major Linux distributions such as Red Hat, CentOS, Ubuntu, Amazon Linux, etc.

  • The following information is not required to install. However, in order to configure the AWS CLI you should create an IAM user with the proper permissions to access the Amazon services in question. When you create this IAM user you will be presented with the connectivity information below, which may be used to configure the AWS CLI.

    AWS Access Key ID
    AWS Secret Access Key
    Region name
    Desired output format

Installation instructions

Follow these steps from the command line to install the AWS CLI on Linux.

Download the installation file to your host

You may download the AWS CLI using the web browser of your choice from the following URL:

https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip

Alternatively, you may prefer using the command line utility curl to perform the download. When using curl, we can add the -o option to the command to specify a name for the downloaded package. See the example below.

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

Extract the installer

Once the download has completed, we will need to extract the contents. By default, unzip extracts the installer into a locally created folder named aws under your current directory.

Begin the extraction using the following command:

unzip awscliv2.zip

Continue the installation

After the files have been extracted you will need to execute the installation using the following command:

Note: sudo may or may not be required depending on your user account privileges.

sudo ./aws/install

Once the install has completed, you should see a prompt stating "You can now run: /usr/local/bin/aws --version"

At this point the AWS CLI has been installed. However, keep in mind it will still need to be configured. You may check your AWS CLI version using the following command:

aws --version

aws-cli/2.5.3 Python/3.9.11 Linux/4.1.12-124.48.6.el6uek.x86_64 exe/x86_64.oracle.6 prompt/off

Installation completed

Although the CLI is not yet configured, you may now search help topics using CLI commands.

For general help use:

aws help

 If you wish to get help with accessing a specific service such as Amazon S3 you could type:

aws s3 help

AWS CLI Configuration

Begin the configuration process using the aws configure command:

aws configure

During configuration of the AWS CLI, you need to provide a few pieces of security information that were provided when your IAM user was set up.
The first is your AWS Access Key ID.

It then asks for the AWS Secret Access Key, which is used to provide authentication when logging into Amazon Web Services via this tool.

The remaining information is optional. You may hit Enter on each field to skip it, or choose to populate them.

Provide your default region name: us-west-2 for example.

Finally, enter the default output format, such as json or text.
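Put together, the exchange looks something like this (the key values below are AWS's documentation placeholders):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json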

By default, your keys are stored in ~/.aws/credentials, while the region and output format are stored in ~/.aws/config. Please note that if you provide incorrect information, you may run configure again to make changes, or add profiles using the --profile profilename option.
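After a successful run, the two files contain entries along these lines (values are placeholders):

~/.aws/credentials:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

~/.aws/config:

[default]
region = us-west-2
output = json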


Optionally, the aws configure import command lets you bring in credentials from .csv files. This can be useful if you have multiple sets of credentials that need loading, or you just want an easy way to update them all at once!
Configuration using a .csv file is beyond the scope of this documentation. Please see the AWS CLI User Guide for more details.

Once the configuration has been completed, you are ready to start using the AWS CLI.
You can use the aws configure list command to validate the settings in place for your account. It reports each configured value along with its type and the location it was read from.

aws configure list
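Output will resemble the following, showing each value alongside its type and source:

      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************MPLE shared-credentials-file
secret_key     ****************EKEY shared-credentials-file
    region                us-west-2      config-file    ~/.aws/config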