Main Terraform Workflow Repo Structure

Description

Continuing from my previous post, I want to go into more detail about my repo structure because it is highly versatile and, in my opinion, worth explaining.

Note: You can see the code for this post in my GitHub repo.

Steps

  1. This is the folder structure as it exists in GitHub and on the GitHub runner right at the checkout stage, before any modifications.

    C:.
    │   README.md
    │
    ├───.github
    │   └───workflows                               => This is where all pipelines are created. Github calls these "workflows"
    │           main_protector.yaml                 => These are workflow files covered in other posts.
    │           main.yml
    │
    ├───config                                      => This folder holds environment-specific changes keyed off the folder path. We call these "configs" because each directory gets its own state file based on its path.
    │   ├───nonprd                                  => Environment based: nonprd or prd. You could also add 'uat', 'maint', or some other stage by copying/pasting and changing the variables.tf and/or terraform.tfvars values underneath.
    │   │   └───spoke                               => Create one folder for each Azure subscription you want to switch on.
    │   │       └───scus                            => Create one folder for each region you want to deploy to.
    │   │           └───stage1                      => Create one folder for each stage. For example, it's common to create an AKV in stage1 and then read a cert from that AKV in stage2.
    │   │               ├───blue                    => Last, create a subfolder for blue, green, or none. This is really only for deploying services that will do "blue green cutovers".
    │   │               │       backend.tf          => State file location, plus providers built from the Key Vault secrets the workflow passes to Terraform.
    │   │               │       terraform.tfvars    => Any environment-specific vars.
    │   │               │       variables.tf        => Global variable definitions for this stage.
    │   │               │
    │   │               └───green
    │   │                       backend.tf
    │   │                       terraform.tfvars
    │   │                       variables.tf
    │   │
    │   └───prd                                     => Same tree as above but for prod instead of non-prod.
    │       └───spoke
    │           └───eus
    │               └───stage1
    │                   ├───blue
    │                   │       backend.tf
    │                   │       terraform.tfvars
    │                   │       variables.tf
    │                   │
    │                   └───none
    │                           backend.tf
    │                           terraform.tfvars
    │                           variables.tf
    │
    └───source                                      => Static files that behave like modules, where almost all values are vars passed in from above. Avoid hard coding anything here as it will apply to all environments!
        ├───common
        │   └───stage1                              => These define resources that go in this stage for all subscriptions, regions, and environments.
        │           rg.tf
        │
        └───modules                                 => Local module calls you can make at run time. During execution, our workflow copies these files recursively to the `./live/*` folder.
            └───rand
                    random_string.tf
                    variables.tf
    
  2. Now that we have these files in the repo and have explained their purpose, let’s examine what happens at run time. There is a critical step where we copy the files in a specific way:

    • First, we create a folder called ./live that exists on the GitHub runner only during execution; it does not exist anywhere in our stored repo.
    • Next, since we are using a matrix workflow, this specific GitHub run will execute in parallel for all the changes you made in this pull request.

    • This means we could be running one, two, or twenty parallel executions, but each one with a specific ${/{ matrix.directories }} value that correlates to something like config/nonprd/hub/east/stage2/none, config/prd/hub/east/stage2/none, or config/prd/hub/scus/stage2/none, for example.

    • NOTE: Jekyll Liquid filters clash with GitHub variables, so replace all instances of ${/{ by removing the forward slash :)

    • Next, we have a parse script, a simple bash script that looks at those paths and creates outputs dynamically, as explained in my post here. In this case we are just getting the stage number.

    • Next, we recursively copy all files under ./source/modules so that all local module calls resolve to ./modules/$moduleName, as seen in rg.tf
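    The bullet above mentions rg.tf calling a local module after the copy. A hedged HCL sketch of what such a call might look like (the module output name, resource, and variable are illustrative assumptions, not taken from the repo):

```hcl
# Hypothetical snippet from rg.tf: at run time the rand module sits at
# ./live/modules/rand, so a relative source path resolves correctly.
module "rand" {
  source = "./modules/rand"
}

resource "azurerm_resource_group" "this" {
  # Made-up example: suffix the RG name with the module's random string.
  # Assumes var.location is defined in the copied-in variables.tf.
  name     = "rg-example-${module.rand.result}"
  location = var.location
}
```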
  3. Here is the task in the pipeline that modifies file placement so that Terraform can run in a single directory:
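    The actual pipeline task isn't reproduced here, so the following is a minimal bash sketch of what such a copy step might do. The variable names (CONFIG_DIR, STAGE) and the fixture setup are assumptions; the paths mirror the trees in this post:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "Copy Files" task; not the post's actual step.
set -euo pipefail
cd "$(mktemp -d)"

# Demo fixture: a minimal copy of the repo tree so this sketch runs standalone.
mkdir -p config/nonprd/spoke/eus/stage1/none source/common/stage1 source/modules/rand
touch config/nonprd/spoke/eus/stage1/none/{backend.tf,terraform.tfvars,variables.tf}
touch source/common/stage1/rg.tf source/modules/rand/{random_string.tf,variables.tf}

# In the real workflow this value comes from ${/{ matrix.directories }}.
CONFIG_DIR="config/nonprd/spoke/eus/stage1/none"
STAGE="$(echo "$CONFIG_DIR" | cut -d'/' -f5)"    # stage segment, e.g. "stage1"

mkdir -p live
cp "$CONFIG_DIR"/* live/              # backend.tf, terraform.tfvars, variables.tf
cp source/common/"$STAGE"/*.tf live/  # shared stage resources, e.g. rg.tf
cp -r source/modules live/modules     # local module calls resolve as ./modules/<name>

ls -R live
```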

  4. This is the folder as it exists during a workflow execution AFTER the Copy Files task:

    C:.
    │   README.md
    │
    ├───.github
    │   └───workflows
    │           main_protector.yaml
    │           main.yml
    ├───config
    │   └───nonprd
    │       └───spoke
    │           └───eus
    │               └───stage1
    │                   ├───none
    │                   │       backend.tf
    │                   │       terraform.tfvars
    │                   │       variables.tf
    ├───live                                              => This is a brand new directory created at run time that contains all the files Terraform needs to run.
    │   │   backend.tf                                    => This came from config/nonprd/spoke/eus/stage1/none
    │   │   rg.tf                                         => This came from source/common/stage1
    │   │   terraform.tfvars                              => This came from config/nonprd/spoke/eus/stage1/none
    │   │   variables.tf                                  => This came from config/nonprd/spoke/eus/stage1/none
    │   │
    │   └───modules                                       => This was copied from source/modules. Files like `./rg.tf` above reference these locally, like `./modules/<moduleName>`
    │       └───rand
    │               random_string.tf
    │               variables.tf
    │
    └───source
       └───common
          └───stage1
    
  5. You then see in subsequent steps that we continue to `cd $GITHUB_WORKSPACE/live` before running any Terraform commands.
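    A hedged sketch of what such a subsequent step might look like in main.yml (the step name and terraform flags are assumptions, not the post's actual workflow):

```yaml
- name: Terraform Init and Plan
  run: |
    cd "$GITHUB_WORKSPACE/live"
    terraform init -input=false
    terraform plan -input=false -out=tfplan
```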

  6. That’s it! This workflow solves many problems and allows a large enterprise to use Terraform effectively:

    • You can deploy to multiple Azure subscriptions in a single run. Example: “I need to deploy a key vault to prd-hub in the southcentralus region and I need to do the same thing to nonprd-spoke in the eastus region.” Done, just:
      • Add the variable to variables.tf in the config/prd/hub/scus/stage1/none folder and then ensure that you are creating a Key Vault in ./source/common/stage1/akv.tf, for example.
      • Add the variable to variables.tf in the config/nonprd/spoke/eus/stage1/none folder and then ensure that you are creating a Key Vault in ./source/common/stage1/akv.tf, for example.
    • Each deployment will have a separate state file, as defined in each folder’s backend.tf
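    For illustration, each config folder’s backend.tf can point at a unique state key. A hedged sketch using the azurerm backend (the resource group, storage account, and container names are made up; only the per-path key matters for keeping state files separate):

```hcl
# Hypothetical backend.tf for config/nonprd/spoke/scus/stage1/blue.
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "nonprd-spoke-scus-stage1-blue.tfstate"
  }
}
```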

    • You could easily modify the workflow for a specific run, such as adding a new variable, by changing ./.github/workflows/main.yml to workflow_dispatch and then creating a new file with similar contents that runs based on a specific path, as discussed in my previous post about how I used to do it. For example:
    
    on:
      push:
        branches:
          - "develop"
        paths:
          - "config/nonprd/hub/east/stage1/none/*"
      pull_request:
        types: [opened, edited, synchronize]
        branches:
          - "develop"
        paths:
          - "config/nonprd/hub/east/stage1/none/*/*"
    
    • You could easily expand this template to add new stages, new subscriptions, new environments, or anything really. The magic lies in the parsing script that sets outputs dynamically, so be sure to read my post covering it to get ideas for your organization.

    • I’m sure there are other perks, but overall this template is very powerful, as I have run it hundreds of times across hundreds of scenarios!
