I've been playing around with HashiCorp Vault for a few weeks now. I wanted to create something repeatable that I could use to test different ideas around using it with Chef in an environment that mimics the one I mostly find myself working in: one where I'm using Chef to do server configuration management and application deployment. While Vault is fairly straightforward, it has some pretty complex ideas that require some head wrapping, at least for me, and having a place where I can spin up new ideas in a quick, repeatable way is pretty important. So here is my attempt at that.
The Basics
I think that, for a testbed to be successful, it should at least attempt to mimic something that I'd use in real life. To that end, I start off with a few givens, such as:
- I need something that will be the stand-in for a server deployment. Think Terraform provisioning with VMWare.
- I'm using Chef cookbooks as an incredibly simple Terraform stand-in
- Vagrant will be doing the "virtual machine deployments"
- I need a configuration manager
- Here, Chef will be doing the configuration management. So it's being itself.
- I need secrets storage that is not specific to any one platform (in the way Chef Vault is pretty much Chef specific)
- Vault will be playing the role of itself also, providing secrets
That's pretty much it in a nutshell. So here's how I went about my attempt at creating it.
The Mechanics
Since I need this to be repeatable, I made it code. Here is the repository where I keep it. There are three directories: files, scripts, and cookbooks. Super high level, here is what exists in the base repo.
files
├── app1-nodes.json
├── hcls
│ ├── app1-arstart.hcl
│ ├── app1-ro.hcl
│ ├── app1-secret1-rw.hcl
│ ├── approle-maintain.hcl
│ └── infra-ro.hcl
├── notes.txt
└── secrets
├── app1-config.json
├── app1-secret1.json
├── infra-dns.json
├── infra-ntp.json
└── secrets.map
app1-nodes.json
- Initial configuration information about the environment that I am provisioning. Defaults to a two-server application, called 'app1'
hcls
- HashiCorp Configuration Language formatted files for defining policies
secrets
- A few secrets to play around with
scripts
├── basic_setup
├── basic_setup_undo
└── vagrant_undo
basic_setup
- Does the work of making sure Vault has the right stuff, and the app1-nodes.json file gets updated with some important information
basic_setup_undo
- Undoes those things for cleanup
vagrant_undo
- Stops Vagrant boxes, and removes everything when I'm done
cookbooks
├── app1_stack
├── vagrant_node
└── vaultron
vagrant_node
- This is the one that does the initial lifting of creating Vagrant configurations, and a few things within Vault
app1_stack
- This is the cookbook that will interact with the running Vagrant boxes via the chef-zero provisioner
vaultron
- This is a cookbook that I've been working on as a provider for Vault things
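For reference, the hcls files are plain Vault policies. A read-only policy like app1-ro.hcl typically looks something like this (my sketch of the shape, not necessarily the repo's exact file):

```hcl
# Read-only access to everything under the app1/ mount
path "app1/*" {
  capabilities = ["read", "list"]
}
```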
The Basic Execution
In its most basic form, the files, scripts, and cookbooks in this repo will configure two Vagrant boxes (bento/centos-7.4): app1node1.mustach.io and app1node2.mustach.io. These virtual servers are intended to represent a multi-server application. App1node1 plays the role of a web server, and will create an index page that displays some secrets from Vault. App1node2 will act as the application server, and update a value in the Vault. The web server has read-only access to all of the app1 secrets, while the application server cannot read any of the secrets, but has update rights to one, app1/secret1. There are several steps that get the pieces in place and running, so let's step through them.
Setup
- Confirm that the prereqs are met. Details can be found in the README of the repository, but they are basically: Vault is configured and the current user has admin-level rights to create things therein, Vagrant is configured and available, and a fairly recent Chef Client is installed on the machine from which things are being run.
- Clone the repository to a directory of your choosing. All further steps are assumed to be completed using that location as the base directory, and so paths are relative thereto.
- If necessary, modify the base configuration file, ./files/app1-nodes.json, to meet the specific needs of your environment. More details about that can be seen in the README of the repository, but may include: Vagrant box type, IP scheme, Vault access (from the local machine and the Vagrant boxes), domain, etc.
- Now comes the actual executing of things. Start by running ./scripts/basic_setup. This does several things, again documented in the repository's README, but they are:
  - Mount a Vault kv backend called app1
  - Write secrets to app1/config and app1/secret1
  - Create a policy called app1-ro, which allows read-only access to all app1 secrets
  - Create a policy called app1-secret1-rw, which allows updating of app1/secret1 in Vault
  - Create a policy called approle-maintain with an associated token. This will be used to create AppRoles later from a cookbook. The token is saved to app1-nodes.json
  - Create a policy called app1-arstart with an associated token. This will be used by the Vagrant boxes to accomplish the AppRole authentication that will let them do their Vault things. The token is saved to app1-nodes.json
  - Update the current working directory in app1-nodes.json so that the cookbook can write stuff out to the expected locations
scripts/basic_setup
=== Setup for Vault ===
+ Mount kv called app1, error ok if exists
Successfully mounted 'kv' at 'app1/'!
+ Secerets setup
+ Write app1/config secret
Success! Data written to: app1/config
+ Write app1/secret1 secret
Success! Data written to: app1/secret1
+ Policy to read app1 secrets: app1-ro
Policy 'app1-ro' written.
+ Policy to allow upate to app1/secret1: app1-secret1-rw
Policy 'app1-secret1-rw' written.
+ AppRole setup
+ Enable approle, error ok if already enabled
Error: Error making API request.
URL: POST http://127.0.0.1:8200/v1/sys/auth/approle
Code: 400. Errors:
* path is already in use
+ Policy for AppRole create/update: approle-maintain
Policy 'approle-maintain' written.
+ Create token for approle-maintain. Will be placed in files/app1-nodes.json
8d2898e2-9a2f-cca2-e6c2-fc0c7458d52f
+ Create policy for starting AppRole authentication for app1 nodes
Policy 'app1-arstart' written.
+ Create token for app1-arstart policy. Will be placed in files/app1-nodes.json
3d68a08d-6e58-0dd6-f48b-ebddd34cfe2a
+ Place working directory into files/app1-nodes.json
/Users/alan/projects/create-nodes-with-approles
Now that things are set up, we have what I would consider the pre-existing infrastructure upon which servers and applications would be deployed and configured.
Configure virtual servers, including Vault access
Since our "infrastructure" is now in place, it's time to deploy our application. As mentioned before, the basic setting in the repo is a two-server application, known as app1. So to get those deployed, run the vagrant_node cookbook with the app1-nodes.json, which was completed by the basic_setup script above:
chef-client -z -o vagrant_node -j files/app1-nodes.json
[2017-12-30T21:09:31-05:00] WARN: No config file found or specified on command line, using command line options.
Starting Chef Client, version 13.4.19
[2017-12-30T21:09:37-05:00] WARN: Run List override has been provided.
[2017-12-30T21:09:37-05:00] WARN: Run List override has been provided.
[2017-12-30T21:09:37-05:00] WARN: Original Run List: []
[2017-12-30T21:09:37-05:00] WARN: Original Run List: []
[2017-12-30T21:09:37-05:00] WARN: Overridden Run List: [recipe[vagrant_node]]
[2017-12-30T21:09:37-05:00] WARN: Overridden Run List: [recipe[vagrant_node]]
resolving cookbooks for run list: ["vagrant_node"]
Synchronizing Cookbooks:
- vagrant_node (0.1.0)
- vaultron (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 5 resources
Recipe: vagrant_node::default
* template[/Users/alan/projects/create-nodes-with-approles/vagrant/Vagrantfile] action create
- create new file /Users/alan/projects/create-nodes-with-approles/vagrant/Vagrantfile
- update content in file /Users/alan/projects/create-nodes-with-approles/vagrant/Vagrantfile from none to 005424
--- /Users/alan/projects/create-nodes-with-approles/vagrant/Vagrantfile 2017-12-30 21:09:41.000000000 -0500
+++ /Users/alan/projects/create-nodes-with-approles/vagrant/.chef-Vagrantfile20171230-18048-177vri6 2017-12-30 21:09:41.000000000 -0500
@@ -1 +1,24 @@
+# -*- mode: ruby -*-
+# # vi: set ft=ruby :
+
+Vagrant.configure("2") do |config|
+ config.vm.define "app1node1" do |n|
+ n.vm.box = "bento/centos-7.4"
+ n.vm.hostname = "app1node1.mustach.io"
+ n.vm.network "private_network", ip: "10.1.1.70"
+ config.vm.synced_folder ".", "/vagrant", disabled: true
+ end
+ config.vm.define "app1node2" do |n|
+ n.vm.box = "bento/centos-7.4"
+ n.vm.hostname = "app1node2.mustach.io"
+ n.vm.network "private_network", ip: "10.1.1.71"
+ config.vm.synced_folder ".", "/vagrant", disabled: true
+ end
+
+ config.vm.provision "chef_zero" do |chef|
+ chef.cookbooks_path = "../cookbooks"
+ chef.nodes_path = "../nodes"
+ chef.add_recipe "app1_stack"
+ end
+end
* vault_approle[Vault AppRole: app1node1.mustach.io] action create
* template[/Users/alan/projects/create-nodes-with-approles/nodes/app1node1.mustach.io.json] action create
- create new file /Users/alan/projects/create-nodes-with-approles/nodes/app1node1.mustach.io.json
- update content in file /Users/alan/projects/create-nodes-with-approles/nodes/app1node1.mustach.io.json from none to eae628
--- /Users/alan/projects/create-nodes-with-approles/nodes/app1node1.mustach.io.json 2017-12-30 21:09:41.000000000 -0500
+++ /Users/alan/projects/create-nodes-with-approles/nodes/.chef-app1node120171230-18048-1jqrbq9.mustach.io.json 2017-12-30 21:09:41.000000000 -0500
@@ -1 +1,14 @@
+{
+ "name": "app1node1.mustach.io",
+ "normal": {
+ "vault_addr": "http://10.1.1.1:8200",
+ "app1": {
+ "node_role": "web",
+ "arstart_token": "3d68a08d-6e58-0dd6-f48b-ebddd34cfe2a"
+ },
+ "tags": [
+
+ ]
+ }
+}
* vault_approle[Vault AppRole: app1node2.mustach.io] action create
* template[/Users/alan/projects/create-nodes-with-approles/nodes/app1node2.mustach.io.json] action create
- create new file /Users/alan/projects/create-nodes-with-approles/nodes/app1node2.mustach.io.json
- update content in file /Users/alan/projects/create-nodes-with-approles/nodes/app1node2.mustach.io.json from none to bbd9f4
--- /Users/alan/projects/create-nodes-with-approles/nodes/app1node2.mustach.io.json 2017-12-30 21:09:41.000000000 -0500
+++ /Users/alan/projects/create-nodes-with-approles/nodes/.chef-app1node220171230-18048-1xz97a4.mustach.io.json 2017-12-30 21:09:41.000000000 -0500
@@ -1 +1,14 @@
+{
+ "name": "app1node2.mustach.io",
+ "normal": {
+ "vault_addr": "http://10.1.1.1:8200",
+ "app1": {
+ "node_role": "app",
+ "arstart_token": "3d68a08d-6e58-0dd6-f48b-ebddd34cfe2a"
+ },
+ "tags": [
+
+ ]
+ }
+}
[2017-12-30T21:09:41-05:00] WARN: Skipping final node save because override_runlist was given
[2017-12-30T21:09:41-05:00] WARN: Skipping final node save because override_runlist was given
Running handlers:
Running handlers complete
Chef Client finished, 5/5 resources updated in 10 seconds
[2017-12-30T21:09:41-05:00] WARN: No config file found or specified on command line, using command line options.
[2017-12-30T21:09:41-05:00] WARN: No config file found or specified on command line, using command line options.
[2017-12-30T21:09:41-05:00] FATAL: Cannot load configuration from files/app1-nodes.json
[2017-12-30T21:09:41-05:00] FATAL: Cannot load configuration from files/app1-nodes.json
Here is a rundown of what just happened:
- If the Vault gem was not already installed into the chef-client gems, it is installed by the vaultron cookbook
- ./vagrant/Vagrantfile is configured for the two nodes, including the hostnames, IPs, box types, and setup for chef-zero provisioning. By default, that file looks like this:
# -*- mode: ruby -*-
# # vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.define "app1node1" do |n|
n.vm.box = "bento/centos-7.4"
n.vm.hostname = "app1node1.mustach.io"
n.vm.network "private_network", ip: "10.1.1.70"
config.vm.synced_folder ".", "/vagrant", disabled: true
end
config.vm.define "app1node2" do |n|
n.vm.box = "bento/centos-7.4"
n.vm.hostname = "app1node2.mustach.io"
n.vm.network "private_network", ip: "10.1.1.71"
config.vm.synced_folder ".", "/vagrant", disabled: true
end
config.vm.provision "chef_zero" do |chef|
chef.cookbooks_path = "../cookbooks"
chef.nodes_path = "../nodes"
chef.add_recipe "app1_stack"
end
end
- Chef node files are created, which will be used by the cookbooks in the Vagrant chef-zero provisioning. They are in the ./nodes directory and are named app1node1.mustach.io.json and app1node2.mustach.io.json. The contents configure the role the server will play in the application, and the Vault settings. These are also derived from the app1-nodes.json file in the previous steps.
{
"name": "app1node1.mustach.io",
"normal": {
"vault_addr": "http://10.1.1.1:8200",
"app1": {
"node_role": "web",
"arstart_token": "3d68a08d-6e58-0dd6-f48b-ebddd34cfe2a"
},
"tags": [
]
}
}
{
"name": "app1node2.mustach.io",
"normal": {
"vault_addr": "http://10.1.1.1:8200",
"app1": {
"node_role": "app",
"arstart_token": "3d68a08d-6e58-0dd6-f48b-ebddd34cfe2a"
},
"tags": [
]
}
}
- Parallel to creating the files above is a little Vault magic. (TL;DR - a CIDR-bound AppRole, with policies from app1-nodes.json, is created for each node; see the code inserts below.)

I had some previous musings about the AppRole authentication mechanism available in Vault. I'm not saying it's a great piece as far as AppRole authentication information goes, but if that term is new to you, it's probably worth a read before continuing. At any rate, I truly think this type of authentication is tops, at least when it comes to usage with applications such as Chef, which use the Vault on a regular but relatively infrequent basis.

What happens during this step is that an AppRole is created for each node in the app1 configuration file. I created a custom provider called vault_approle in the vaultron cookbook specifically for the management of AppRoles. Technically, creating, updating, and deleting an AppRole is just a simple write or delete action to the proper path. However, it seemed cleaner to have a provider for something that is as specific as AppRoles, since they have a well-defined set of parameters.

Anyway, like I was saying, each node gets its own AppRole. The policies each node requires are defined in the app1-nodes.json configuration file, and so granular access is pretty easily set up. Each node's AppRole is also CIDR bound to its specific IP. To me, this is really a great aspect of that type of authentication: without any added complexity in Vault configuration, or extra setup requirements, the policies defined in the AppRole are only available to that one IP address. That is, of course, assuming the IP can be known in advance. In the environments I deal with, that's not an issue, and I'd think it's something accessible in most setups that are deploying virtual or physical machines.

The node attribute app1['arstart_token'] is set in each node's Chef node JSON file. It is going to be the same for all nodes. Since we have the luxury of specific AppRoles that can only be used from a specific IP, we can make a more generic token that can start that authentication process. The arstart_token has no rights to read or update any secrets, except to start the authentication, and that specifically for only the app1 nodes defined above. If control is kept over the Vault environment, meaning AppRoles are CIDR bound, or otherwise hidden, then this is a fairly secure method, I think. Here are the AppRoles that are defined based on the default setup:
$ vault read auth/approle/role/app1node1.mustach.io
Key Value
--- -----
bind_secret_id true
bound_cidr_list 10.1.1.70/32
period 0
policies [app1-ro]
secret_id_num_uses 1
secret_id_ttl 5
token_max_ttl 30
token_num_uses 0
token_ttl 30
$ vault read auth/approle/role/app1node2.mustach.io
Key Value
--- -----
bind_secret_id true
bound_cidr_list 10.1.1.71/32
period 0
policies [app1-secret1-rw]
secret_id_num_uses 1
secret_id_ttl 5
token_max_ttl 30
token_num_uses 0
token_ttl 30
The important parts of the AppRole settings:
* bound_cidr_list
- This is the part that keeps other nodes from using each other's AppRoles, which would be rude (and insecure)
* policies
- The access, defined in the app1-nodes.json setup file. Node1 can read all the secrets, while Node2 can only write to the one secret, but not read any of the others
* secret_id stuff - The secret_id, which is generated as part of the AppRole authentication chain, can only be used once, and must be so used within 5 seconds
* token stuff - The end game of the AppRole authentication is a token. Said token is usable for only 30 seconds (remember how Chef runs work; token lifetime can be short)
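To make that chain concrete, here is roughly what a node does to turn the arstart token into a working, policy-bearing token. The endpoints are Vault's standard AppRole API; everything else here (the environment variable names, the wiring) is illustrative, not the cookbook's actual code:

```ruby
# Sketch of the three-step AppRole authentication chain against the Vault
# HTTP API. Endpoints are the standard AppRole ones; the env var names and
# surrounding wiring are my assumptions, not the vaultron cookbook's code.
require 'net/http'
require 'json'
require 'uri'

# Pure helpers naming each step of the chain.
def role_id_path(role)
  "/v1/auth/approle/role/#{role}/role-id"
end

def secret_id_path(role)
  "/v1/auth/approle/role/#{role}/secret-id"
end

def login_body(role_id, secret_id)
  { role_id: role_id, secret_id: secret_id }.to_json
end

# Only talk to Vault when a token is provided (e.g. the app1-arstart token
# that basic_setup placed in app1-nodes.json).
if ENV['ARSTART_TOKEN']
  role  = 'app1node1.mustach.io'
  vault = URI(ENV.fetch('VAULT_ADDR', 'http://10.1.1.1:8200'))
  http  = Net::HTTP.new(vault.host, vault.port)
  token = ENV['ARSTART_TOKEN']

  # 1. Fetch the role_id; readable with the arstart token.
  res = http.get(role_id_path(role), 'X-Vault-Token' => token)
  role_id = JSON.parse(res.body).dig('data', 'role_id')

  # 2. Generate a secret_id; single use, and it expires in 5 seconds.
  res = http.post(secret_id_path(role), '', 'X-Vault-Token' => token)
  secret_id = JSON.parse(res.body).dig('data', 'secret_id')

  # 3. Log in; the returned client_token carries the role's policies
  #    (and is itself only good for 30 seconds with the settings above).
  res = http.post('/v1/auth/approle/login', login_body(role_id, secret_id))
  puts JSON.parse(res.body).dig('auth', 'client_token')
end
```

Because the role is CIDR bound, steps 2 and 3 only work from the node's own IP, which is what makes sharing one arstart token across nodes tolerable.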
Run the virtual servers, including Chef cookbook
As mentioned a few times, the default configuration creates a super simple two-node application analog, where there is a web server with a single page displaying the secrets it has access to, and an app server that can update a single secret. This is just to demonstrate that things are working, and be a good starting point for testing and developing new stuff. I purposefully put the web server first in the list so that it would come up with the app1/secret1 default value; when the app server comes up and updates the value, it can be updated on the web page, proving that the whole thing is working as designed. First, we validate that app1/config and app1/secret1 are at their default values:
vault read app1/config
Key Value
--- -----
refresh_interval 768h0m0s
password p@ssw0rd!
username admin_user
vault read app1/secret1
Key Value
--- -----
refresh_interval 768h0m0s
value foo
And now we vagrant up the nodes. There is a lot of output, naturally; the important parts for our current concern are what Chef does. Learning even the basics of Vagrant and its chef-zero provisioner is probably a whole blog post on its own. From within the project, get into the vagrant directory and run vagrant up. This will bring up app1node1.mustach.io first, then app1node2.mustach.io second.
Bringing machine 'app1node1' up with 'virtualbox' provider...
Bringing machine 'app1node2' up with 'virtualbox' provider...
...
## CHEF STUFF FOR APP1NODE1
==> app1node1: Recipe: app1_stack::default
==> app1node1: * yum_package[httpd] action install
==> app1node1: [2018-01-03T01:34:53+00:00] INFO: yum_package[httpd] installing httpd-2.4.6-67.el7.centos.6 from updates repository
==> app1node1: [2018-01-03T01:35:02+00:00] INFO: yum_package[httpd] installed httpd at 2.4.6-67.el7.centos.6
==> app1node1:
==> app1node1: - install version 2.4.6-67.el7.centos.6 of package httpd
==> app1node1: * template[/etc/httpd/conf/httpd.conf] action create
==> app1node1: [2018-01-03T01:35:02+00:00] INFO: template[/etc/httpd/conf/httpd.conf] backed up to /var/chef/backup/etc/httpd/conf/httpd.conf.chef-20180103013502.715140
==> app1node1: [2018-01-03T01:35:02+00:00] INFO: template[/etc/httpd/conf/httpd.conf] updated file contents /etc/httpd/conf/httpd.conf
==> app1node1:
==> app1node1: - update content in file /etc/httpd/conf/httpd.conf from 3f002b to f9dcf4
==> app1node1: --- /etc/httpd/conf/httpd.conf 2017-10-19 16:44:27.000000000 +0000
==> app1node1: +++ /etc/httpd/conf/.chef-httpd20180103-4459-ndca5u.conf 2018-01-03 01:35:02.698501241 +0000
==> app1node1: @@ -39,7 +39,7 @@
==> app1node1: # prevent Apache from glomming onto all bound IP addresses.
==> app1node1: #
==> app1node1: #Listen 12.34.56.78:80
==> app1node1: -Listen 80
==> app1node1: +Listen 0.0.0.0:80
==> app1node1:
==> app1node1: #
==> app1node1: # Dynamic Shared Object (DSO) Support
==> app1node1:
==> app1node1: - restore selinux security context
==> app1node1: * service[httpd] action start
==> app1node1: [2018-01-03T01:35:02+00:00] INFO: service[httpd] started
==> app1node1:
==> app1node1: - start service service[httpd]
==> app1node1: * service[httpd] action enable
==> app1node1: [2018-01-03T01:35:03+00:00] INFO: service[httpd] enabled
==> app1node1:
==> app1node1: - enable service service[httpd]
==> app1node1: * template[/var/www/html/index.html] action create[2018-01-03T01:35:03+00:00] INFO: template[/var/www/html/index.html] created file /var/www/html/index.html
==> app1node1:
==> app1node1: - create new file /var/www/html/index.html[2018-01-03T01:35:03+00:00] INFO: template[/var/www/html/index.html] updated file contents /var/www/html/index.html
==> app1node1:
==> app1node1: - update content in file /var/www/html/index.html from none to 21b77f
==> app1node1: --- /var/www/html/index.html 2018-01-03 01:35:03.129269829 +0000
==> app1node1: +++ /var/www/html/.chef-index20180103-4459-cfymbi.html 2018-01-03 01:35:03.125271977 +0000
==> app1node1: @@ -1 +1,9 @@
==> app1node1: +Updated from Chef run:
==> app1node1: +
==> app1node1: +app1/config Values
==> app1node1: + password: p@ssw0rd!
==> app1node1: + username: admin_user
==> app1node1: +
==> app1node1: +app1/secret1 Value
==> app1node1: + value: foo
==> app1node1:
==> app1node1: - restore selinux security context
==> app1node1: [2018-01-03T01:35:03+00:00] INFO: template[/etc/httpd/conf/httpd.conf] sending restart action to service[httpd] (delayed)
==> app1node1: * service[httpd] action restart
==> app1node1: [2018-01-03T01:35:04+00:00] INFO: service[httpd] restarted
==> app1node1:
==> app1node1: - restart service service[httpd]
==> app1node1:
==> app1node1: [2018-01-03T01:35:04+00:00] INFO: Chef Run complete in 47.592553216 seconds
## CHEF STUFF FOR APP1NODE2
==> app1node2: Converging 1 resources
==> app1node2: Recipe: app1_stack::default
==> app1node2: * vault[update app1/secret1] action write
==> app1node2:
==> app1node2: [2018-01-03T01:39:46+00:00] INFO: Chef Run complete in 21.64854999 seconds
Of course, that output was highly truncated. Here's a quick TL;DR rundown.
- app1node1.mustach.io
- lines 7-11 - httpd package is installed
- lines 12-29 - httpd.conf updated
- lines 30-37 - httpd is started and enabled
- lines 38-55 - index.html file created to display the default secret values
- app1node2.mustach.io
- line 67 - app1/secret1 value updated
When we curl index.html from the web server, we can validate that it was able to retrieve the Vault secrets:
curl 10.1.1.70
Updated from Chef run:
app1/config Values
password: p@ssw0rd!
username: admin_user
app1/secret1 Value
value: foo
Now we can confirm from Vault that the app1/secret1 value was updated by node2:
vault read app1/secret1
Key Value
--- -----
refresh_interval 768h0m0s
value cookbook update: 20180103013946
It was. And finally, we can rerun chef-zero on node1, and update index.html with the new value:
==> app1node1: Compiling Cookbooks...
==> app1node1: Converging 4 resources
==> app1node1: Recipe: app1_stack::default
==> app1node1: * yum_package[httpd] action install
==> app1node1: (up to date)
==> app1node1: * template[/etc/httpd/conf/httpd.conf] action create
==> app1node1: (up to date)
==> app1node1: * service[httpd] action start
==> app1node1: (up to date)
==> app1node1: * service[httpd] action enable
==> app1node1: (up to date)
==> app1node1: * template[/var/www/html/index.html] action create
==> app1node1: [2018-01-03T02:27:09+00:00] INFO: template[/var/www/html/index.html] backed up to /var/chef/backup/var/www/html/index.html.chef-20180103022709.857395
==> app1node1: [2018-01-03T02:27:09+00:00] INFO: template[/var/www/html/index.html] updated file contents /var/www/html/index.html
==> app1node1:
==> app1node1: - update content in file /var/www/html/index.html from 21b77f to 16f581
==> app1node1: --- /var/www/html/index.html 2018-01-03 01:35:03.125271977 +0000
==> app1node1: +++ /var/www/html/.chef-index20180103-4152-1p3aetu.html 2018-01-03 02:27:09.853602424 +0000
==> app1node1: @@ -5,5 +5,5 @@
==> app1node1: username: admin_user
==> app1node1:
==> app1node1: app1/secret1 Value
==> app1node1: - value: foo
==> app1node1: + value: cookbook update: 20180103013946
==> app1node1:
==> app1node1: - restore selinux security context
==> app1node1:
==> app1node1: [2018-01-03T02:27:10+00:00] INFO: Chef Run complete in 6.175986911 seconds
And on lines 23-24, we see that it was indeed updated, so the curl will show us:
curl 10.1.1.70
Updated from Chef run:
app1/config Values
password: p@ssw0rd!
username: admin_user
app1/secret1 Value
value: cookbook update: 20180103013946
That's it for the basic usage.
So what next
Now that a repeatable testing scenario is available, imagination is the limit. I can think of a few things off the top of my head that I haven't worked much on:
- Adding new nodes
- Decommissioning nodes
- Adding policies to the AppRoles for nodes
- Changing IPs
- Creating tokens to store on the node, for when AppRoles may not make sense
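Most of those come down to writing a different set of parameters to a role's path. As a rough sketch (the field names match the vault read output earlier; the helper itself is hypothetical, not part of vaultron):

```ruby
require 'json'

# Hypothetical helper: the parameters a vault_approle-style resource would
# write to auth/approle/role/<name>. Adding a node, changing its IP, or
# adding policies is just another call with different arguments.
def approle_payload(ip:, policies:)
  {
    'policies'           => policies.join(','),
    'bind_secret_id'     => true,
    'bound_cidr_list'    => "#{ip}/32", # lock the role to one node's IP
    'secret_id_num_uses' => 1,          # single-use secret_id...
    'secret_id_ttl'      => 5,          # ...valid for 5 seconds
    'token_ttl'          => 30,         # short-lived tokens suit Chef runs
    'token_max_ttl'      => 30
  }
end

puts JSON.pretty_generate(approle_payload(ip: '10.1.1.72', policies: ['app1-ro']))
```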
I'm sure there are many more things as well. Happy hunting.