Master Terraform - Notes and Real-World Practices
Naveen R
Terraform Guide
What is Terraform?
Terraform is an IT infrastructure automation tool for building, changing, and versioning infrastructure
safely and efficiently
Terraform can manage existing as well as custom in-house solutions
It treats your infrastructure as code (IaC), i.e. your computing environment has some of the same
attributes as your application:
a. Your infrastructure is versionable
b. Your infrastructure is reusable
c. Your infrastructure is testable
d. Minimizes errors and security violations
You only need to tell Terraform what the desired state should be, not how to achieve it
Terraform is cloud-agnostic
Features of Terraform
Infrastructure as Code
Infrastructure is described using a high-level configuration syntax
Provides single unified syntax
Executes Plans
Terraform has a “planning” step where it generates an execution plan
The execution plan shows what Terraform will do before making the actual changes
Resource Graph
Terraform builds a graph of all your resources, and parallelizes the creation and modification of
any non-dependent resources
Terraform builds infrastructure as efficiently as possible
Change Automation
Complex change sets can be applied to your infrastructure with minimal human interaction
With the previously mentioned execution plan and resource graph, you know exactly what
Terraform will change and in what order, avoiding many possible human errors
Terraform Workflow
Terraform code is written in a language called HCL, in files with the extension .tf
Terraform can also read JSON configurations from files named with the .tf.json extension
It is a declarative language, so your goal is to describe the infrastructure you want, and
Terraform will figure out how to create it
HCL syntax is composed of blocks that define a configuration in Terraform
Blocks are comprised of key = value pairs
Single-line comments start with # (or //); multi-line comments are wrapped in /* and */
Strings are in double-quotes; Boolean values: true, false
List values are made with square brackets ([]), for example ["foo", "bar", "baz"]
Maps can be made with braces ({}) and colons (:), for example {"foo": "bar", "bar": "baz"}
Strings can interpolate other values using syntax wrapped in ${}, for example ${var.name}
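As a minimal sketch, the syntax rules above combine like this (the block and value names are illustrative, not from any real configuration):

```hcl
# A block: a type, optional labels, then key = value pairs inside braces
variable "environment" {
  type    = string
  default = "dev"                              # string in double-quotes
}

locals {
  enabled = true                               # boolean
  zones   = ["us-east-1a", "us-east-1b"]       # list
  tags    = { "team" : "platform" }            # map

  /* Multi-line comment:
     strings can interpolate other values with ${} */
  name = "app-${var.environment}"
}
```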
Infrastructure as Code (IaC) tools like Terraform provide several key benefits:
Eliminates Human Error: By defining infrastructure in code, you reduce the chances of errors that
typically occur during manual configuration.
Version Control: Infrastructure configurations are stored in source control (e.g., Git), making it easy
to track changes, roll back, and collaborate with teams.
Automated Provisioning: Terraform allows for the automated provisioning of resources across
multiple cloud providers, reducing the time and effort required for manual setup.
Repeatable Deployments: You can easily spin up identical environments for different stages of
development (e.g., dev, staging, production), ensuring consistency.
Infrastructure Scaling: With IaC, you can scale resources automatically based on changing needs by
modifying the configuration and running terraform apply to update the infrastructure.
Multi-cloud Support: Terraform supports multiple providers (AWS, Azure, Google Cloud,
Kubernetes, etc.), enabling a multi-cloud or hybrid infrastructure without vendor lock-in.
Collaboration: Teams can collaborate on infrastructure changes through pull requests and code
reviews, enhancing team communication and workflows.
Versioned Infrastructure: By using version control, changes to infrastructure can be tracked over
time, and specific versions of infrastructure can be deployed.
5. Documentation
Declarative Configuration: The infrastructure code itself serves as documentation for how the
system is set up and configured. This is much clearer and more up-to-date than manually written
documentation.
Easier Onboarding: New team members can review the code to quickly understand how the
infrastructure is set up.
6. Cost Control
Cost Management: With Terraform’s ability to define and manage infrastructure in code, you can
easily view the resources you’ve provisioned and ensure that only necessary resources are being
used, reducing waste and avoiding over-provisioning.
Plan and Review: Before applying changes, Terraform can show a plan of what will be changed,
added, or destroyed, allowing teams to review and assess any cost impacts.
Enforcement of Standards: Using IaC allows teams to enforce security and compliance policies as
part of the infrastructure configuration (e.g., ensuring encryption is enabled, access controls are set
up).
Automated Audits: With version-controlled infrastructure, it's easy to conduct audits and track
who made what changes, helping to maintain security and compliance standards.
State Management: Terraform keeps track of the current state of the infrastructure, making it
easier to manage complex dependencies and make incremental changes without manually
updating resources.
Easy Updates: You can apply changes incrementally with terraform plan, ensuring only the
necessary updates are made to your infrastructure.
These benefits combine to make infrastructure management more efficient, reliable, and easier to
maintain, especially as environments grow more complex.
The core items of Terraform are the fundamental components that make up the infrastructure as code
(IaC) workflow. These components are essential for defining, managing, and deploying infrastructure in a
consistent and automated manner.
1. Providers
Definition: Providers are responsible for interacting with external APIs to manage the
infrastructure. Each provider is specific to a cloud platform or service (e.g., AWS, Azure, Google
Cloud, Kubernetes, etc.).
Function: They define the resources you can manage, such as EC2 instances in AWS or virtual
machines in Azure.
Example:
provider "aws" {
  region = "us-east-1"
}
2. Resources
Definition: Resources represent individual infrastructure components that you want to manage
with Terraform, such as virtual machines, networks, or storage buckets.
Function: Resources are the building blocks of your infrastructure and are the actual items that are
created, modified, or destroyed.
Example:
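A typical resource block, e.g. an AWS EC2 instance (the AMI ID and tag values are illustrative):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"  # illustrative AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleInstance"
  }
}
```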
3. Variables
Definition: Variables allow you to parameterize your Terraform configurations, making them
reusable and dynamic by defining values that can be passed into your configuration.
Function: They allow you to customize values like region, instance type, etc., without hardcoding
them into the Terraform configuration.
Example:
variable "instance_type" {
  type    = string
  default = "t2.micro"
}
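The variable can then be referenced as var.instance_type, and its default can be overridden at apply time (the AMI ID below is illustrative):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"  # illustrative AMI ID
  instance_type = var.instance_type        # resolves to "t2.micro" unless overridden
}
```

A different value can be supplied on the command line with terraform apply -var="instance_type=t3.small", or via a .tfvars file.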
4. Outputs
Definition: Outputs are used to expose important information about the infrastructure after a
Terraform run, such as IP addresses or resource IDs.
Function: They allow you to capture and display values for use in other processes, like showing the
public IP of an EC2 instance.
Example:
output "instance_ip" {
  value = aws_instance.example.public_ip
}
5. Modules
Definition: A module is a container for multiple Terraform resources that are used together.
Modules can be reused across different configurations, allowing for code organization and sharing.
Function: Modules help in organizing complex Terraform configurations and in reusing code. You
can use both local modules (in your own codebase) and external modules (from the Terraform
Registry).
Example:
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"
}
6. State
Definition: The state file (terraform.tfstate) records the mapping between your configuration and
the real infrastructure Terraform has created.
Function: Terraform uses the state to plan incremental changes and detect drift; where the state
file lives is controlled by the backend.
7. Data Sources
Definition: Data sources allow you to query existing infrastructure outside of Terraform’s
management. This can be data from the cloud provider that you want to reference in your
configuration.
Function: They are used when you need to get information about existing resources (e.g., the ID of
a subnet or a security group).
Example:
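A typical data source, looking up an existing AMI so it can be referenced elsewhere (the owner ID is Canonical's AWS account; the name filter is illustrative):

```hcl
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]  # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# Referencing the queried value in a resource:
resource "aws_instance" "example" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
}
```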
8. Backend
Definition: The backend in Terraform determines where the Terraform state is stored. It defines
how and where the state file is managed (locally or remotely).
Function: It helps with state management, enables collaboration, and provides features like state
locking to prevent concurrent changes.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "state/terraform.tfstate"
    region = "us-east-1"
  }
}
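To get the state locking mentioned above with the S3 backend, a DynamoDB table can be configured alongside it (the table name here is an assumption):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "state/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true               # encrypt the state object at rest
    dynamodb_table = "terraform-locks"  # assumed table name, used for state locking
  }
}
```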
9. Provisioners
Definition: Provisioners allow you to execute scripts or commands on the resources after they are
created or updated. They are typically used for bootstrapping instances or configuring
infrastructure after it's been provisioned.
Function: Provisioners are generally used to install software, configure services, or run initialization
tasks on newly created resources.
Example:
provisioner "remote-exec" {
  inline = [
    "echo Hello, world!"
  ]
}
Conclusion
The core items of Terraform — providers, resources, variables, outputs, modules, state, data sources,
backends, and provisioners — are the fundamental components that enable the creation, modification,
and management of infrastructure. A typical team workflow built on them looks like this:
1. Write Code: Developers and operators write Terraform configuration files to define the
infrastructure.
2. Version Control: These files are stored in a version control system (like Git), ensuring changes are
tracked and versioned.
3. Plan: The terraform plan command is used to preview changes before applying them, which helps
teams assess the impact of changes.
4. Review: Changes are reviewed by team members through pull requests or code reviews, ensuring
correctness, security, and alignment with best practices.
5. Apply: Once reviewed, the terraform apply command is run to provision or update the
infrastructure.
State Management: The state of the infrastructure is managed and updated in a shared backend (e.g.,
Terraform Cloud, S3), ensuring all team members work with the most up-to-date infrastructure.
Separate Configurations: You might want to manage different environments (such as development,
staging, and production) that require separate configurations, resources, or access controls. Using
multiple provider instances, you can define different provider configurations for each environment
to ensure isolation and proper resource management.
Environment-specific Variables: Each environment might use different credentials, regions, or
resource configurations. Multiple provider instances allow you to handle these differences without
affecting other environments.
Region-Specific Resources: For cloud providers like AWS, Azure, or Google Cloud, resources are
often created within specific regions. You may need to manage resources in multiple regions within
the same provider, and using multiple provider instances allows you to do this seamlessly.
Cross-Account/EndPoint Management: You can configure different provider instances to manage
resources in different cloud accounts or across different endpoints (e.g., managing resources on-
premises in addition to the cloud). This flexibility is key for organizations that operate across
multiple accounts or have hybrid cloud environments.
Multi-cloud Architecture: In modern infrastructures, organizations often leverage more than one
cloud provider (e.g., using AWS for compute, Azure for networking, and Google Cloud for machine
learning). With multiple provider instances, Terraform allows you to manage resources across these
different platforms within a single project.
Consistency Across Providers: You can use a unified Terraform workflow to ensure that
infrastructure is defined in code and deployed consistently across multiple clouds.
Clear Separation of Responsibilities: If you're managing resources in different contexts (e.g., one
provider for core infrastructure and another for a specific service), using multiple provider instances
helps maintain a clean separation of configurations. This can help avoid accidental resource
conflicts or mismanagement.
Granular Control: Each provider instance can have its own set of configurations, such as
authentication credentials, access keys, or region settings. This level of granularity makes it easier
to control and manage your infrastructure.
Example:
To manage resources in both AWS and Azure, you might define two provider instances:
provider "aws" {
  region  = "us-west-2"
  profile = "my-aws-profile"
}

provider "azurerm" {
  features {}
}
In this case, you have separate provider blocks for AWS and Azure, each managing its respective resources.
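For the multi-region case discussed above, multiple instances of the same provider are distinguished with alias (the regions and bucket name here are illustrative):

```hcl
provider "aws" {
  region = "us-east-1"   # default AWS provider instance
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"   # second instance of the same provider
}

# A resource selects the non-default instance explicitly:
resource "aws_s3_bucket" "west_logs" {
  provider = aws.west
  bucket   = "example-logs-us-west-2"  # hypothetical bucket name
}
```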
The lifecycle of a Terraform resource describes how Terraform manages the creation, modification, and
destruction of infrastructure resources over time. It consists of the following phases:
1. Configuration
You define the desired infrastructure in .tf files using HCL (HashiCorp Configuration Language).
2. Initialization (terraform init)
Terraform initializes the working directory, downloads provider plugins, and prepares the environment.
3. Planning (terraform plan)
Terraform compares the current state (in the .tfstate file) to your configuration and shows an execution
plan, marking each action as:
Create (+)
Update (~)
Destroy (-)
4. Apply (terraform apply)
Terraform executes the planned actions against the real infrastructure.
5. State Update
After applying, Terraform updates the state file (terraform.tfstate) to reflect the real infrastructure.
The lifecycle meta-argument block inside a resource controls how these phases behave for that resource:
resource "aws_instance" "example" {
  # ... resource arguments ...

  lifecycle {
    prevent_destroy       = true
    create_before_destroy = true
    ignore_changes        = [tags]
  }
}
6. Destroy (terraform destroy)
Terraform tears down all managed resources and updates the state file accordingly.
Summary Table
Phase          Description
Configuration  Write the desired infrastructure in .tf files
Init           Download providers and prepare the working directory
Plan           Compare state to configuration and show an execution plan
Apply          Create, update, or destroy resources to match the configuration
State Update   Record the resulting infrastructure in terraform.tfstate
Destroy        Tear down all managed resources
terraform state list lists all the resources currently tracked in the Terraform state file. These are the
resources Terraform is managing.
Example:
terraform state list
Output:
aws_instance.example
aws_s3_bucket.my_bucket
aws_vpc.main
1. terraform show – Displays the full state file content, including values, but not as a simple list.
2. terraform validate – Only checks whether the syntax and configuration are valid. It doesn’t list
resources.
3. terraform output – Shows the outputs defined in the configuration, not a list of resources.
Here are a few scenarios where terraform plan may pass, but terraform apply can still fail:
1. Permissions Issues
Scenario: You have valid configurations, and terraform plan shows no errors, but when you apply,
Terraform tries to create or modify resources, and it encounters permission issues in your cloud
provider (e.g., AWS IAM).
Example:
o terraform plan successfully shows what will be created.
o When you run terraform apply, Terraform tries to create an EC2 instance, but the IAM role
used by Terraform doesn't have sufficient permissions to create EC2 instances.
2. Provider Configuration Issues
Scenario: You may have valid provider configurations, and terraform plan may succeed. However,
there could be an issue with provider authentication or other settings that only becomes evident
when you attempt to apply the changes.
Example:
o terraform plan successfully generates a plan for resource creation.
o When you run terraform apply, Terraform can't authenticate with the provider because
your credentials have expired or are not set up properly.
3. Resource Dependencies Not Met
Scenario: You might have defined resources with dependencies, but those dependencies aren't
satisfied at the time of terraform apply.
Example:
o terraform plan runs successfully because Terraform just checks the configuration and finds
no issues.
o When you run terraform apply, Terraform tries to create a resource that depends on
another resource that hasn't been created yet due to an unmet dependency or missing
configuration.
4. Out-of-Sync Infrastructure
Scenario: Infrastructure was manually modified or deleted (drift), but terraform plan doesn't
detect it, or it doesn't show up in the state, causing terraform apply to fail when it tries to reconcile.
Example:
o terraform plan runs and shows no changes, but manually deleting a resource (e.g., an EC2
instance) doesn't update the state.
o terraform apply tries to modify or delete the resource, and AWS fails because the resource
doesn't exist anymore.
5. State File Mismatch
Scenario: There is a discrepancy between the local state file and the actual infrastructure due to
state file corruption or manipulation.
Example:
o terraform plan runs and assumes the state is up-to-date.
o During terraform apply, Terraform detects that the state file is out of sync with the actual
infrastructure and fails to apply the changes.
Error during terraform apply:
Error: State conflict detected. The state file is out of sync with real infrastructure.
6. Resource Update Limitations
Scenario: Certain resources might have limitations when it comes to updates. Terraform can plan
updates, but the provider might prevent them.
Example:
o You modify a resource that has restrictions on in-place updates (like changing the size of a
managed disk in Azure).
o terraform plan shows the change.
o When you run terraform apply, it fails because the resource can't be updated in-place, and
Terraform tries to recreate it.
7. Race Conditions
Scenario: If there are multiple teams or processes interacting with the same infrastructure (e.g.,
creating resources in parallel), Terraform's plan might pass, but terraform apply might fail due to a
resource being modified or deleted by another process in the meantime.
Example:
o terraform plan shows that everything is ready.
o In the middle of applying, a resource like an S3 bucket is deleted manually or by another
process, causing the apply step to fail.
8. Invalid Resource Modifications
Scenario: You modify a resource in a way that is technically valid in the configuration, but it violates
constraints in the cloud provider or system that can only be detected during execution.
Example:
o You change the instance type for an EC2 instance in your configuration.
o terraform plan shows the planned change.
o During terraform apply, AWS fails to stop the instance to apply the change due to a
constraint (like the instance running critical workloads).
Summary of Scenarios:
1. Permissions Issues – Terraform has no issues planning, but lacks permissions to apply changes.
2. Provider Configuration Issues – Misconfigured provider settings.
3. Resource Dependencies Not Met – Dependencies are not in place.
4. Out-of-Sync Infrastructure – Manual changes cause discrepancies.
5. State File Mismatch – State file and infrastructure are out of sync.
6. Resource Update Limitations – Certain updates can't be performed in-place.
7. Race Conditions – Resource modified/deleted by another process.
8. Invalid Resource Modifications – Changes violate cloud provider constraints.
What are Provisioners? How do you define provisioners? What are the types of provisioners?
Provisioners in Terraform are used to execute scripts or commands on resources as part of the resource creation or
modification process. They allow you to customize resources further after they are created, such as
installing software, configuring systems, or running initialization tasks. Provisioners can be applied to any
resource and are typically used to configure the environment once the resource itself is provisioned.
Provisioners are defined within the resource block, using a provisioner argument followed by a specific
provisioner type and configuration.
Example:
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"  # illustrative AMI ID
  instance_type = "t2.micro"

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx"
    ]
  }
}
In this example, the remote-exec provisioner is used to execute a series of commands on the newly
created AWS EC2 instance.
Types of Provisioners
1. local-exec
Executes commands on the machine running Terraform (the local machine) during the resource
creation or modification.
Example:
provisioner "local-exec" {
  command = "echo 'Resource Created' > created.txt"  # output file name is illustrative
}
2. remote-exec
Executes commands on a remote machine after the resource is created, typically via SSH or WinRM
(for Windows). This is commonly used to configure instances post-deployment.
Example:
provisioner "remote-exec" {
  inline = [
    "sudo apt-get update",
    "sudo apt-get install -y nginx"
  ]

  connection {
    host        = aws_instance.example.public_ip
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
  }
}
3. file
Copies files from the local machine to the remote machine. It is useful when you need to upload
configuration files, scripts, or other resources to a remote instance.
Example:
provisioner "file" {
  source      = "myapp.conf"       # local path (illustrative)
  destination = "/tmp/myapp.conf"  # remote path (illustrative)
}
Provisioners should be used cautiously because they can create dependencies on the order of resource
creation. They should be reserved for situations where resource creation alone doesn't fully configure the
system. Additionally, it's often better to use configuration management tools (e.g., Ansible, Puppet, Chef)
for long-term infrastructure management instead of relying heavily on provisioners.
In Terraform, tainting refers to the process of marking a resource as needing to be recreated, even if it
hasn't changed according to the Terraform plan. When a resource is marked as tainted, Terraform will
destroy and recreate that resource during the next apply.
What is Taint?
A tainted resource is one that has been flagged for destruction and recreation. Tainting a resource in
Terraform means that Terraform will destroy the existing resource and re-create it on the next apply, even
if there were no changes to its configuration. This is useful in cases where the resource has been
corrupted, has become inconsistent, or needs to be re-provisioned for other reasons, even though the
configuration has not changed.
1. Manual Tainting:
You can manually taint a resource by using the terraform taint command. This explicitly marks the
resource for destruction and recreation on the next terraform apply.
Example:
terraform taint aws_instance.example
In this case, the aws_instance.example resource will be marked as tainted, and it will be destroyed
and recreated the next time terraform apply is run.
Running terraform show will indicate which resources are tainted. To apply the changes and
recreate the resource, you simply run:
terraform apply
(In newer Terraform versions, terraform apply -replace=ADDRESS is the recommended alternative
to terraform taint.)
2. Untainting a Resource:
If you decide that a resource shouldn't be tainted anymore, you can untaint it using the terraform
untaint command:
terraform untaint aws_instance.example
A Workspace in Terraform is an isolated environment within a Terraform configuration that allows you to
manage multiple versions of infrastructure using the same codebase. Workspaces help you manage
different stages of infrastructure (e.g., development, staging, production) without needing to duplicate
your configuration files.
What is a Workspace?
A Workspace is essentially a container for Terraform state files. By default, Terraform uses a "default"
workspace, but you can create multiple workspaces to isolate state files for different environments. Each
workspace has its own state and set of variables, making it easier to manage different environments or
configurations.
Manage different environments (e.g., dev, staging, production) with the same Terraform code.
Keep state isolated per environment, preventing interference between environments.
Use the same Terraform configuration for multiple environments by using workspace-specific
variables and state.
Multi-environment deployments: When you want to use the same Terraform code to manage
different environments (e.g., separate resources for development, staging, and production).
Testing multiple versions of infrastructure: When you need to test changes to infrastructure in a
temporary isolated environment without affecting the main infrastructure.
Managing different configurations for a single project: When the configuration changes based on
the workspace (for example, using different resources or settings for different environments).
1. Creating a New Workspace You can create a new workspace using the terraform workspace new
command.
Example:
terraform workspace new dev
2. Switching Between Workspaces To switch to an existing workspace, use the terraform workspace
select command.
Example:
terraform workspace select dev
This command switches to the dev workspace. Terraform will now manage infrastructure for the
dev environment.
3. Listing Workspaces To view all available workspaces, use the terraform workspace list command.
Example:
terraform workspace list
This will show all the workspaces that exist in the current project.
4. Workspace-Specific Variables You can define different variables for each workspace. You can use
terraform.workspace to reference the current workspace inside your Terraform configurations.
Example:
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"  # illustrative AMI ID
  instance_type = var.instance_type

  tags = {
    Name = "ExampleInstance-${terraform.workspace}"
}
}
In this example, the terraform.workspace value dynamically adds the workspace name to the
Name tag, making it clear which environment the resource belongs to.
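Another common pattern is selecting per-environment values from a map keyed by the workspace name (the instance types and AMI ID below are illustrative):

```hcl
variable "instance_types" {
  type = map(string)
  default = {
    default    = "t2.micro"
    dev        = "t2.micro"
    staging    = "t2.small"
    production = "t3.large"
  }
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"                  # illustrative AMI ID
  instance_type = var.instance_types[terraform.workspace]  # per-workspace value
}
```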
5. State Isolation Each workspace has its own state file. When you switch workspaces, Terraform uses
a different state file, ensuring that your infrastructure for different environments does not interfere
with one another.
Running terraform workspace show will display the current workspace, such as dev, staging, or production.
6. Deleting a Workspace If you no longer need a workspace, you can delete it using the terraform
workspace delete command. Note that deleting a workspace will not delete the actual resources in
your cloud provider, just the state associated with that workspace.
Example:
terraform workspace delete dev
Let’s say you are working on a project that requires separate environments for development, staging, and
production. You can use workspaces to manage these environments:
1. Create a workspace for each environment:
terraform workspace new dev
terraform workspace new staging
terraform workspace new production
2. Use terraform.workspace in your configuration to set different variables or configurations for each
environment. For example, different instance types in each environment.
3. Switch between workspaces based on the environment you are working on:
terraform workspace select staging
Best Practices
Use workspaces to isolate different environments (e.g., development, staging, production) while
using the same Terraform codebase.
Avoid using workspaces for multi-region or multi-tenant configurations that have complex
dependencies across different states.
Use workspaces with caution in CI/CD pipelines, as managing state correctly in these environments
is critical to avoid accidental modifications.
If different teams are working on the same configuration. How do you make files to have
consistent formatting?
To ensure consistent formatting of Terraform configuration files when different teams are working on the
same codebase, you should use:
terraform fmt
This command automatically formats your .tf and .tfvars files according to the standard Terraform style
conventions.
terraform fmt -recursive
This formats all Terraform files in the current directory and subdirectories.
# .git/hooks/pre-commit
terraform fmt -recursive
Add a formatting check in your CI (e.g., GitHub Actions, GitLab CI, Jenkins) to ensure PRs have
properly formatted code:
terraform fmt -check -recursive
Most IDEs (e.g., VS Code, IntelliJ) have Terraform plugins/extensions that format code on save using
terraform fmt.
Summary:
Using terraform fmt ensures clean, consistent formatting, reduces merge conflicts, and improves
readability and collaboration across teams.
Explain difference between terraform fmt and terraform validate with examples
Both terraform fmt and terraform validate are essential Terraform commands, but they serve very
different purposes:
Purpose: Automatically formats your Terraform configuration files to follow the standard style guide.
Example:
Run:
terraform fmt
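A sketch of what terraform fmt changes (the resource is illustrative):

```hcl
# Before terraform fmt — inconsistent spacing and indentation:
resource "aws_instance" "example" {
ami="ami-0c55b159cbfafe1f0"
      instance_type =    "t2.micro"
}

# After terraform fmt — two-space indent, aligned "=" signs:
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}
```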
Purpose: Validates that the Terraform files are syntactically valid and internally consistent.
Example:
Invalid configuration: a resource block that misspells an argument or leaves a brace unclosed.
Run:
terraform validate
Output: Terraform reports the problem (for example, an "Unsupported argument" diagnostic)
without contacting any provider APIs.
Summary Table
Feature                terraform fmt                        terraform validate
Purpose                Format .tf files to standard style   Check if configuration is syntactically valid
Required before plan?  Optional, for clean code             Recommended to catch errors early
Typical use case       Auto-format before commit            Validate configuration before plan or apply
If Terraform crashes during execution, which of the following steps can you take to debug the
issue and gather more information?
If Terraform crashes during execution, there are several steps you can take to debug the issue and gather
more information to diagnose the cause of the crash. Here are the common steps:
Terraform provides detailed logging capabilities through the TF_LOG environment variable. This can give
you more insight into what Terraform was doing at the time of the crash.
export TF_LOG=DEBUG
export TF_LOG=TRACE
(TRACE is the most verbose level; DEBUG is usually enough.)
Once set, Terraform will output detailed logs to the console, which can help identify the exact point
of failure or provide more context about the issue.
If you want to store the log output in a file, use:
export TF_LOG_PATH=./terraform.log
This is a powerful tool for investigating crashes and troubleshooting unexpected behavior in Terraform.
After a crash, Terraform usually prints an error message or stack trace to the console. This is often the first
place to look for clues about what went wrong. Carefully review the error message and stack trace to
determine if there's any specific resource or operation that is causing the crash.
If you're unsure about what changes Terraform is trying to make, you can run terraform plan to preview
the changes before actually applying them. This can help you catch any obvious issues before a crash
occurs during execution.
terraform plan
This command shows what Terraform intends to do, which might give insight into what causes the crash
during the apply.
Re-running the failing command with TF_LOG=DEBUG set (Terraform has no general -debug flag) forces
Terraform to output debug-level information, which can be helpful in understanding the crash.
Ensure that you're using a stable version of Terraform, and check for any issues related to your
version by visiting the Terraform GitHub repository or the Terraform release notes.
Similarly, check that all the provider plugins you're using are up to date. Sometimes crashes are due
to bugs in older versions of the provider plugins.
terraform version
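Pinning the Terraform and provider versions in the configuration helps avoid crashes caused by untested upgrades (the version numbers here are assumptions to adapt to your setup):

```hcl
terraform {
  required_version = ">= 1.5.0"  # minimum Terraform CLI version (assumed)

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"         # stay within the 5.x series (assumed)
    }
  }
}
```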
Corrupted or inconsistent state files can sometimes lead to crashes during execution. If you suspect the
state file is the issue, try running:
terraform state list
This will show the resources in your state file and allow you to verify that it hasn't become corrupted. If
you suspect corruption, you can try to manually inspect or repair the state file (e.g., using terraform state
commands).
Terraform can crash if it runs out of memory or other system resources during execution, especially when
managing large infrastructures. Check your system’s resource usage (CPU, RAM, disk) to ensure that it has
enough resources to handle the workload. You can also try running Terraform on a machine with more
memory or CPU resources if possible.
8. Update Dependencies
If the crash happens during a specific provider's resource management, ensure that all providers are
updated. Run the following command to upgrade all provider versions to the latest compatible version:
terraform init -upgrade
If you’re unable to identify the crash from logs or output, try isolating the issue:
Comment out sections of the configuration or remove some resources temporarily to narrow down
the specific resource causing the crash.
Run smaller configurations to verify if a particular resource or module is causing the crash.
After reviewing the logs, check if the error matches any known issues by searching for it in the Terraform
GitHub issues or related provider repositories.
If you're unable to resolve the issue, and you have a Terraform Enterprise subscription or support plan, you
can contact HashiCorp support for further assistance.
Summary of Steps:
1. Enable detailed logging with TF_LOG (and TF_LOG_PATH).
2. Review the error message and stack trace.
3. Preview changes with terraform plan.
4. Re-run with debug-level logging enabled.
5. Check your Terraform and provider versions.
6. Inspect the state file with terraform state list.
7. Check system resources (CPU, RAM, disk).
8. Update provider dependencies with terraform init -upgrade.
9. Isolate the issue by running smaller configurations.
10. Search known issues, and contact HashiCorp support if needed.
What is Module? Where do you find and explore terraform modules? How do you make sure
that module have stability and compatibility? Explain with example
In Terraform, a module is a container for multiple resources that are used together. A module allows you
to group and organize resources logically and re-use them across different configurations. Modules help in
abstracting away complex infrastructure logic into reusable units of work. They can be local to your project
or sourced from public or private module repositories.
Reusability: Modules allow you to create reusable infrastructure components that can be easily
applied in different configurations.
Maintainability: You can organize your Terraform configuration into modular units, making it easier
to manage, maintain, and update.
Abstraction: Modules can abstract away complex infrastructure setups, exposing a simple interface
of input variables.
Scalability: By using modules, you can scale your infrastructure management by reusing predefined
modules in different parts of your infrastructure.
Here is an example of how you can use a module to deploy an AWS EC2 instance. This module can be
found on the Terraform Registry.
module "ec2_instance" {
source = "terraform-aws-modules/ec2-instance/aws"
instance_count = 1
ami = "ami-0c55b159cbfafe1f0"
instance_type = "[Link]"
name = "example-instance"
tags = {
Name = "Example Instance"
}
}
source: The source points to the module's location. In this case, it’s a publicly available module
from the Terraform AWS Modules collection on the Terraform Registry.
The module will automatically create an EC2 instance using the provided parameters like ami,
instance_type, and name.
To ensure that the module you are using is stable and compatible, you can take the following steps:
Many modules in the Terraform Registry support versioning. By specifying a version in the source
parameter, you ensure that you are using a stable and compatible version of the module.
Example:
module "ec2_instance" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "~> 3.0"
}
The version constraint ~> 3.0 ensures that Terraform uses a stable version from the 3.x series, ensuring
compatibility with your setup. If you do not specify a version, Terraform will fetch the latest version, which
may introduce breaking changes.
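Terraform supports several version-constraint operators; a few common forms (the version numbers here are illustrative):

```hcl
version = "3.2.1"  # exact version only
version = ">= 3.0" # version 3.0 or newer
version = "~> 3.2" # pessimistic constraint: >= 3.2.0 and < 4.0.0
```

The pessimistic operator ~> is usually the best default for modules, since it accepts bug-fix and feature releases while blocking the next major version.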
Always check the module documentation on the Terraform Registry or the module's repository.
Well-documented modules will provide details about:
o The inputs and outputs (variables and return values).
o Examples of usage.
o Compatibility with different versions of Terraform and the providers.
o Known issues and limitations.
o Required dependencies and the module's supported features.
This helps to ensure the module meets your needs and works as expected in your environment.
Ensure the module is actively maintained by checking the commit history on GitHub (if using a
module from there). Look for recent commits and releases to verify ongoing support.
If the module has not been updated in a long time, it could be deprecated or incompatible with
newer Terraform versions.
Review community feedback, issues, and pull requests. In the Terraform Registry, you can often
find user reviews and issues posted by others who have used the module. This can help identify any
known bugs or incompatibility issues.
GitHub repositories often have an "Issues" section where users report problems, and you can gauge
if these issues are being actively addressed.
Ensure that the module is compatible with the version of Terraform you are using. Some modules
may require specific Terraform versions due to changes in the Terraform language or provider APIs.
Check the module’s documentation or changelog for any version restrictions.
Look for any input variables and their default values. For example, the instance_type variable in the
EC2 module might have a default value, but you can override it to better fit your infrastructure
needs.
Example:
variable "instance_type" {
description = "EC2 instance type"
default = "[Link]"
}
By reviewing and adjusting variables, you can make the module work in a variety of contexts, improving its
stability for your environment.
If you are using a module to provision an AWS RDS instance, you can ensure its stability and compatibility
by:
1. Pinning the module version:
module "rds" {
  source  = "terraform-aws-modules/rds/aws"
  version = "~> 4.0"
}
2. Reviewing the module’s documentation for supported AWS regions and Terraform versions.
3. Checking GitHub issues to see if others have faced compatibility issues or bugs.
4. Testing the module in a staging environment before deploying it in production.
Conclusion
Terraform modules help you reuse and share infrastructure code. To ensure that the modules you use are
stable and compatible, always:
Specify versions.
Review module documentation carefully.
Check the module’s maintenance status and community feedback.
Test the module in a controlled environment.
Validate compatibility with your Terraform version.
A variable in Terraform is a way to input data into your configuration. It enables you to customize the
infrastructure deployment by passing in different values when applying the configuration.
1. Input Variables
These are variables defined in Terraform configurations that allow you to pass dynamic values into
your modules or resources.
Example:
variable "region" {
description = "The AWS region to create resources in"
type = string
default = "us-east-1"
}
In this example, the region variable is defined to allow dynamic configuration of the region for AWS
resources.
2. Output Variables
These variables capture values from your Terraform resources and output them after the plan is
applied. This is useful for exporting data like IP addresses or resource IDs.
Example:
output "instance_ip" {
value = aws_instance.example.public_ip
}
The instance_ip output variable stores the public IP address of the AWS EC2 instance.
3. Local Values
Local values are named expressions that can be used to simplify your Terraform code. They're
similar to variables but are used within the configuration to store intermediate results.
Example:
locals {
  instance_type = "t2.micro" # placeholder; original value lost in extraction
}
The instance_type local value simplifies referencing the EC2 instance type in your configuration.
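A local is referenced with the local. prefix; a minimal sketch of consuming the value above (the AMI is a placeholder):

```hcl
resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI
  instance_type = local.instance_type     # reference the local value
}
```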
When defining input variables, you can specify a type to control the kind of data that can be passed in:
variable "environment" {
type = string
default = "production"
}
variable "instance_count" {
type = number
default = 3
}
variable "enable_logging" {
type = bool
default = true
}
4. List: A variable that holds an ordered collection of values, where each value is of the same type.
variable "availability_zones" {
type = list(string)
default = ["us-east-1a", "us-east-1b"]
}
5. Map: A variable that holds a collection of key-value pairs, where the key is a string and the value
can be any data type.
variable "instance_tags" {
31
Terraform Guide
type = map(string)
default = {
Name = "my-instance"
Environment = "production"
}
}
variable "server" {
type = object({
name = string
type = string
})
default = {
name = "web-server"
type = "[Link]"
}
}
Here is an example of defining input variables, assigning them values, and using them within a resource
configuration:
variable "region" {
description = "AWS region"
type = string
default = "us-east-1"
}
variable "instance_type" {
description = "Type of EC2 instance"
type = string
default = "[Link]"
}
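The resource usage promised above appears to have been lost in extraction; a sketch of how these variables might be consumed (the AMI is a placeholder):

```hcl
provider "aws" {
  region = var.region # region comes from the input variable
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI
  instance_type = var.instance_type
}
```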
Terraform variables follow a precedence order when values are assigned. Here's the order in which
Terraform looks for values for a variable:
1. CLI Arguments (highest precedence)
When running terraform apply, you can pass variables on the command line with the -var flag (or a
file with -var-file). These override every other source.
Example: terraform apply -var="region=us-west-2"
2. .tfvars Files
If you have a file named terraform.tfvars, Terraform will automatically load the values from it; any
*.auto.tfvars files are loaded afterwards and override it. Values from these files override
environment variables and default values, but are overridden by CLI arguments.
3. Environment Variables
You can set variables using environment variables named TF_VAR_<variable_name>. These override
only the default values in the configuration and are themselves overridden by .tfvars files and CLI
arguments.
Example: TF_VAR_region=us-west-2
4. Default Values (lowest precedence)
The default value of the variable is used if no value is provided via CLI, .tfvars files, or environment
variables.
variable "region" {
type = string
default = "us-east-1"
}
# [Link]
region = "us-west-2"
If you run terraform apply without passing any variables, it will use us-west-2 from the .tfvars file instead of
the default us-east-1.
A Terraform remote backend is used to manage the state files of a Terraform project in a centralized
location. The state file contains important information about the infrastructure managed by Terraform,
and using a remote backend allows teams to share and collaborate on the same infrastructure without
conflicts. Remote backends provide several benefits:
Centralized Storage: The state files are stored remotely (e.g., in AWS S3, Azure Blob Storage, etc.),
ensuring all team members are working with the same version of the state file.
Collaboration: Multiple users can work on the same infrastructure concurrently without the risk of
overwriting each other's changes.
State Locking: Some remote backends (like S3 with DynamoDB or Azure Blob Storage) support state
locking, which prevents concurrent modifications to the state file.
Security: State files may contain sensitive information, and remote backends can provide
encryption and access control to protect this data.
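For example, an S3 backend with DynamoDB locking might be configured like this (the bucket and table names are hypothetical):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"     # hypothetical state bucket
    key            = "prod/terraform.tfstate" # path of the state object
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"        # hypothetical table for state locking
    encrypt        = true                     # encrypt state at rest
  }
}
```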
Common misconceptions — what a remote backend is not for:
1. To store Terraform configuration files in a central location: Terraform configuration files (.tf files)
are typically version-controlled in a source code repository like Git, not stored in a remote backend.
2. To execute Terraform commands remotely without local setup: Terraform backends are not
designed to execute commands remotely. They only manage state storage. Terraform commands
are still executed locally on your machine or CI/CD system.
3. To store the Terraform provider plugins: Provider plugins are stored locally or downloaded from
the Terraform Registry, not within the remote backend. The backend stores the state of the
infrastructure.
So, the purpose of a remote backend is primarily to centralize the storage and management of Terraform
state files.
The purpose of using a Terraform remote backend is to store and manage Terraform state files in a
centralized, secure, and accessible location. This allows for better collaboration, state management, and
ensures that teams can safely and efficiently work together on shared infrastructure.
3. State Locking:
Many remote backends support state locking, which prevents concurrent updates to the state file,
avoiding potential conflicts or corruption of the state.
4. Security:
Remote backends often offer additional security features, such as encrypted storage and fine-
grained access control, ensuring that sensitive data in the state file (like resource credentials) is
protected.
5. Scalability:
Using a remote backend is essential for large, complex Terraform deployments that need to scale
across multiple teams, regions, or environments, as it centralizes and simplifies state management.
6. Integration with Version Control and CI/CD:
With remote backends, it’s easier to integrate Terraform with CI/CD pipelines and version control
systems, ensuring that infrastructure changes are tracked and versioned.
Amazon S3 with DynamoDB (for locking): Used to store state files in S3 and use DynamoDB for
state locking.
Azure Blob Storage: A scalable solution for storing Terraform state on Azure.
HashiCorp Consul: A distributed, highly-available key-value store that can be used for state storage
and locking.
Google Cloud Storage (GCS): A backend that stores state in GCS buckets.
How can you import existing infrastructure into Terraform? In which scenarios do you need to use
import? What are the limitations? Please explain with an example.
You can use the terraform import command to bring existing infrastructure under Terraform management.
This command allows you to take resources that were created outside of Terraform (manually or by other
means) and import them into the Terraform state so you can manage them going forward.
The general syntax is:
terraform import <resource_type>.<resource_name> <resource_id>
Where:
<resource_type>.<resource_name> is the resource address in your Terraform configuration, and
<resource_id> is the provider-specific ID of the existing resource (for example, an EC2 instance ID).
Let’s assume you have an EC2 instance in AWS and you want to bring it under Terraform management.
Once the import is successful, Terraform will have the instance in its state file, and the resource will
be managed under Terraform.
4. Run terraform plan: After importing, run terraform plan to verify that Terraform recognizes the
state of the resource and checks for any differences between the configuration and actual
infrastructure:
terraform plan
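Continuing the EC2 example, a minimal end-to-end sketch (the AMI and instance ID are hypothetical):

```hcl
# 1. Write a resource block that matches the real instance
resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0" # should match the real instance
  instance_type = "t2.micro"
}

# 2. Import the existing instance into state (run on the CLI):
#    terraform import aws_instance.web i-0123456789abcdef0
```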
Example of Limitations:
Let’s say you’re importing an EC2 instance, but the instance has a custom security group attached that you
didn’t define. You’ll need to import that security group separately and ensure that it’s linked in your
configuration.
2. Manually Add Security Group to Configuration: After importing the security group, you’ll need to
manually associate it with your EC2 instance in the configuration file.
Conclusion
The terraform import command allows you to bring existing infrastructure into Terraform's management,
making it easier to manage and automate the configuration of pre-existing resources. However, you must
manually write the corresponding Terraform configuration files, and be aware of its limitations, such as the
inability to automatically import resource dependencies or generate configuration from existing
infrastructure.
In Terraform, lifecycle rules are used to customize the behavior of resources when performing actions like
creation, update, or destruction. These rules allow you to control how Terraform interacts with resources
during these lifecycle events.
1. create_before_destroy
Purpose: Ensures that a resource is created before the existing one is destroyed during updates.
Use Case: This is useful when you need to avoid downtime or when a resource must exist before its
replacement can be destroyed (e.g., load balancers, databases).
lifecycle {
  create_before_destroy = true
}
Explanation: This rule ensures that the new security group is created first, and only once it is successfully
created is the old one destroyed.
2. prevent_destroy
Purpose: Prevents the resource from being destroyed, even when a terraform destroy or terraform
apply is executed.
Use Case: This is useful for critical resources where you do not want them to be deleted
accidentally, such as production databases or stateful applications.
lifecycle {
  prevent_destroy = true
}
Explanation: The prevent_destroy = true setting ensures that the resource can't be destroyed, and an error
will occur if you attempt to destroy the resource.
3. ignore_changes
Purpose: Tells Terraform to ignore specific changes to resource attributes during an update,
effectively preventing Terraform from attempting to modify those attributes.
Use Case: This is useful when you have resources that are externally managed or updated, and you
don’t want Terraform to track changes to those attributes (e.g., manual changes to tags or instance
metadata).
resource "aws_instance" "example" {
  # ...
  tags = {
    Name = "example-instance"
  }

  lifecycle {
    ignore_changes = [
      tags["Name"], # Ignore changes to the Name tag
    ]
  }
}
Explanation: In this example, Terraform will not attempt to change the Name tag of the instance even if
the tag is modified outside of Terraform.
4. replace_triggered_by
Purpose: This rule causes a resource to be replaced when a change to another resource or data
source occurs, even if the change does not directly impact the resource.
Use Case: This is useful when a change in one resource triggers the need to replace another
resource, even if the two are not directly related in Terraform’s normal dependency graph.
lifecycle {
  replace_triggered_by = [aws_security_group.example.tags]
}
Explanation: In this example, if the tags of aws_security_group.example change, Terraform will replace the
resource that contains this lifecycle block, even if the change does not directly affect that resource's own
configuration.
5. Destroy-before-create (default behavior)
Purpose: By default, when a change forces replacement, Terraform destroys the existing resource
before creating its successor. Note that there is no destroy_before_create lifecycle argument; this is
simply Terraform's default replacement order, in effect whenever create_before_destroy is not set
to true.
Use Case: The default order is appropriate when a resource cannot coexist with its replacement (for
example, a resource with a unique name) or when a downtime window is acceptable.
Explanation: In this case, Terraform destroys the old instance first and then creates the new one.
6. Ignoring multiple changes
Purpose: You can use ignore_changes to ignore multiple attributes at once, preventing Terraform
from making changes to them during an update.
Use Case: This is useful when you want to prevent updates to multiple properties that may be
modified manually or managed externally.
resource "aws_instance" "example" {
  # ...
  tags = {
    Name = "example-instance"
  }

  lifecycle {
    ignore_changes = [
      ami,           # Ignore changes to AMI
      instance_type, # Ignore changes to instance type
    ]
  }
}
Explanation: Terraform will ignore changes to the ami and instance_type properties, meaning that manual
updates to these fields outside of Terraform won’t trigger changes to the instance.
Interpolation in Terraform refers to the process of embedding dynamic expressions or values inside your
Terraform configuration files. It allows you to reference variables, attributes, and data from other
resources and pass them as input to other resources or configuration blocks.
Interpolation is typically done using ${} syntax, where you can insert expressions that will be evaluated and
replaced by actual values during the execution of terraform plan or terraform apply.
You can use interpolation to reference variables within a resource or other parts of your Terraform
configuration.
variable "instance_type" {
type = string
default = "[Link]"
}
Explanation: Here, the ${var.instance_type} interpolation is used to reference the instance_type variable in
the aws_instance resource.
Interpolation allows you to use attributes from one resource in another resource. This is particularly useful
when you need to configure one resource based on the attributes of another.
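For instance, a sketch of one resource consuming another resource's attribute (the names and AMI are illustrative):

```hcl
resource "aws_security_group" "web" {
  name = "web-sg"
}

resource "aws_instance" "example" {
  ami                    = "ami-0c55b159cbfafe1f0"     # placeholder AMI
  instance_type          = "t2.micro"                  # placeholder
  vpc_security_group_ids = [aws_security_group.web.id] # attribute from another resource
}
```

Referencing the attribute also makes Terraform create the security group before the instance, since the dependency is implicit in the expression.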
You can also use interpolation in outputs to reference attributes or variables and output their values.
output "instance_id" {
value = "${aws_instance.[Link]}" # Interpolating the instance ID
}
Explanation: The value of the instance_id output is interpolating the id attribute of the
aws_instance.example resource.
You can use interpolation to perform string manipulation in Terraform, like concatenating strings or
combining values.
output "instance_name" {
value = "example-${aws_instance.[Link]}" # Concatenating a string with instance ID
}
Explanation: The output instance_name concatenates the string "example-" with the id of the EC2
instance, producing a dynamic name based on the instance.
Terraform allows for more complex expressions within interpolations. You can use conditionals and
functions to dynamically determine values, for example with the ternary operator (the original example
values were lost in extraction; these are illustrative):
instance_type = var.instance_type == "t2.micro" ? "t2.micro" : "t3.micro"
Explanation: This uses the conditional (ternary) operator: if var.instance_type equals the first value, that
instance type is used; otherwise the fallback type is used.
1. Referencing Dynamic Values: When you need to inject variables, resource attributes, or data into
resource definitions.
2. Creating Dynamic Resources: When the configuration depends on other resources or variables, and
you want to automate the creation of resources dynamically.
3. Combining Values: When you need to combine different strings, variables, or values together to
form a dynamic configuration.
4. Conditional Logic: When you need to apply logic that decides which values or resources to use
based on input conditions.
5. Outputting Dynamic Data: When creating outputs that represent the results of your infrastructure,
such as resource IDs or URLs.
Starting with Terraform 0.12 and beyond, interpolation is optional in many cases. Terraform automatically
handles most situations without needing explicit interpolation. For instance:
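A sketch of the 0.12+ style (attribute names are illustrative):

```hcl
# Terraform 0.12+: direct references need no ${} wrapper
instance_type = var.instance_type  # instead of "${var.instance_type}"
subnet_id     = aws_subnet.main.id # instead of "${aws_subnet.main.id}"

# ${} is still required inside larger strings:
name = "web-${var.environment}"
```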
Conclusion
Interpolation in Terraform allows you to create dynamic configurations by referencing variables, resource
attributes, or data sources. It is used for referencing dynamic values, creating resources dynamically,
combining values, conditional logic, and outputting dynamic data.
Since Terraform 0.12+, interpolation has become more intuitive, and in many cases, you don't need to use
the ${} syntax for simple references. But it’s still essential to understand how and where to use it to make
your configurations dynamic and flexible.
How can you leverage Terraform’s “count” and “for_each” features for resource iteration?
Terraform’s count and for_each are meta-arguments that allow you to create multiple instances of a
resource, module, or block in a clean, efficient, and scalable way. These features are useful when you need
to dynamically create resources based on input data like lists or maps.
Use Case:
Use count when you want to repeat the same resource multiple times, usually based on a list or number.
variable "instance_count" {
default = 3
}
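The resource block for this example appears to have been lost in extraction; a sketch of how count is typically used with it (the AMI and instance type are placeholders):

```hcl
resource "aws_instance" "server" {
  count         = var.instance_count      # creates 3 instances
  ami           = "ami-0c55b159cbfafe1f0" # placeholder AMI
  instance_type = "t2.micro"              # placeholder

  tags = {
    Name = "server-${count.index}" # index 0, 1, 2
  }
}
```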
Explanation: count repeats the resource the specified number of times (here, 3); each instance is identified
by a numeric index exposed as count.index (e.g., resource[0], resource[1], resource[2]).
Use Case:
Use for_each when you want to create one resource per item in a map or set, and you want to access
each key/value clearly.
variable "bucket_names" {
default = ["dev-bucket", "test-bucket", "prod-bucket"]
}
bucket = [Link]
acl = "private"
}
Explanation:
Each bucket will be created with the name from the list.
each.value gives the current bucket name (like "dev-bucket").
resource "aws_s3_bucket" "tagged" {
  for_each = toset(var.bucket_names)
  bucket   = each.value

  tags = {
    Environment = each.key # for a set, each.key equals each.value
  }
}
Explanation:
for_each takes a map or a set of strings; inside the resource you access each.key and each.value, and each
instance is tracked by its key rather than a numeric index (keys must be unique).
Notes
You can’t use both count and for_each in the same resource.
For dynamic resource creation where names/keys matter, prefer for_each.
When using for_each, your keys must be unique.
You want to create security groups for dev, test, and prod environments.
Using count
Input:
variable "environments" {
default = ["dev", "test", "prod"]
}
Terraform Code:
name = "sg-${[Link][[Link]]}"
description = "Security group for ${[Link][[Link]]}"
vpc_id = "vpc-123456"
46
Terraform Guide
tags = {
Environment = [Link][[Link]]
}
}
Key Points:
Uses a list.
Must access values using count.index.
Less readable when accessing attributes in complex structures.
Using for_each
Input:
variable "env_map" {
default = {
dev = "vpc-aaa111"
test = "vpc-bbb222"
prod = "vpc-ccc333"
}
}
Terraform Code:
name = "sg-${[Link]}"
description = "Security group for ${[Link]}"
vpc_id = [Link]
tags = {
Environment = [Link]
}
}
🚩 Key Points:
Uses a map.
Easier to read and manage key-value structures.
each.key is the environment, each.value is the VPC ID.
Summary Table

Feature  | Input Type           | Key Access            | Example Use Case
---------|----------------------|-----------------------|---------------------------------------------
count    | List / Number        | count.index           | Same resource repeated N times (e.g., EC2)
for_each | Map / Set of strings | each.key / each.value | One resource per named item (e.g., per environment)
In Terraform, handling resource failures and retries is crucial for maintaining a reliable and idempotent
infrastructure-as-code workflow. Terraform doesn't natively offer granular retry policies like a
programming language, but it provides several mechanisms to deal with transient failures and ensure
resources are created reliably.
Terraform automatically retries failed operations like create, update, or delete for many providers
(especially AWS, Azure, GCP) when the failure is transient (e.g., throttling, rate limits, API timeouts).
You don’t have to write retries for most transient cloud issues – Terraform already does that internally.
When updating resources like security groups or IAM roles, Terraform might try to destroy and recreate
them. To avoid downtime or failure, you can use:
lifecycle {
  create_before_destroy = true
}
Use when you want to ensure the new resource is created before the old one is destroyed.
Sometimes updates fail because Terraform tries to change something that should be left alone (like an
externally managed field):
lifecycle {
  ignore_changes = [tags]
}
Use when external systems modify the resource and Terraform shouldn’t try to revert it.
For scripts or commands that need retries, you can manually implement a retry loop using null_resource
and shell scripting.
Use when calling APIs or commands that are flaky and may need retries.
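A sketch of such a retry loop with null_resource (the command name and retry counts are illustrative placeholders):

```hcl
resource "null_resource" "flaky_call" {
  provisioner "local-exec" {
    # Retry the command up to 5 times with a 5-second pause between attempts;
    # "my-flaky-command" is a placeholder for the real script or API call.
    command = "for i in 1 2 3 4 5; do my-flaky-command && break || sleep 5; done"
  }
}
```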
Organizing infrastructure into modules and using outputs helps break down large operations and isolate
failures to smaller scopes, improving retry recovery.
Limitations
No custom retry count or backoff control for individual resources (like retry_attempts in AWS SDK).
Failures in one resource can halt the entire plan/apply unless isolated.
No native support for conditional retries (use null_resource as workaround).
Best Practices
Strategy When to Use
A colleague accidentally deletes the Terraform state file. How would you recover and ensure no
resources are recreated?
If you're using a remote backend with versioning enabled (such as S3), you can restore a previous version of
the state file. If no versioned backup exists, you must manually re-import all existing resources into a new
Terraform state file. This avoids recreating them.
Example:
terraform import aws_instance.web i-0123456789abcdef0 # hypothetical resource address and instance ID
This tells Terraform: "This real AWS resource maps to this Terraform resource block."
Make sure the main.tf (or equivalent) file matches the actual configuration of those resources.
Then run terraform plan and confirm it reports no changes — this verifies that the rebuilt state matches the
real infrastructure and nothing will be recreated:
terraform plan
What Not to Do
Don’t immediately run terraform apply after deleting state — it will try to recreate all resources.
Don’t manually edit the .tfstate unless you know exactly what you're doing.
Prevention Tips
Always enable remote backends like S3 + DynamoDB for locking and versioning.
Use Terraform Cloud to store and version state safely.
Consider automating state backups in a CI/CD pipeline.
What is the purpose of Terraform's "null_resource", and when would you use it? Give example
code.
The null_resource in Terraform is used to execute arbitrary actions that are not directly associated with
any specific cloud provider resource. It’s a resource placeholder that allows you to run provisioners (e.g.,
local-exec, remote-exec) when no other suitable resource exists or when you want to create custom
behavior during apply time.
resource "null_resource" "example" {
  provisioner "local-exec" {
    command = "echo 'Running a custom action'" # placeholder; original command lost in extraction
  }

  triggers = {
    always_run = timestamp() # Forces this to re-run on every apply
  }
}
Explanation:
local-exec tells Terraform to run the command on the machine where terraform apply is executed.
The triggers block is used to specify conditions under which the null_resource should be re-created.
Using timestamp() ensures it always runs.
Avoid overusing null_resource for tasks that can be better modeled with proper Terraform providers (like
using aws_instance instead of using a null_resource to run an AWS CLI command).
How can you perform targeted resource deployment in Terraform? Can you give one example?
In Terraform, targeted resource deployment is achieved using the -target option with the terraform apply
or terraform plan commands. This allows you to apply changes only to a specific resource or module,
instead of running a full deployment for all resources.
Syntax:
terraform apply -target=<resource_type>.<resource_name>
Example:
If you want to apply only the S3 bucket creation and skip the EC2 instance for now:
terraform apply -target=aws_s3_bucket.my_bucket # hypothetical resource address
Notes:
The -target flag is intended for exceptional situations, such as recovering from errors; routine use
can leave dependent resources un-applied, and Terraform prints a warning whenever it is used.
Project Structure:
.
├── main.tf
└── modules/
    └── s3_bucket/
        └── main.tf

modules/s3_bucket/main.tf:
variable "bucket_name" {
  type = string
}

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name # resource block reconstructed; original lost in extraction
}

main.tf:
module "s3_bucket_module" {
  source      = "./modules/s3_bucket"
  bucket_name = "my-app-prod-logs"
}

To apply only this module:
terraform apply -target=module.s3_bucket_module
Tell me 5 scenarios where terraform apply will pass but terraform destroy will fail.
Here are five scenarios where terraform apply might pass, but terraform destroy could fail:
Scenario: Resources that were created by Terraform depend on other resources not managed by
Terraform (e.g., external resources or manual changes to the environment).
What Happens: terraform apply can create the resources as it doesn’t check for external
dependencies. However, when you try to run terraform destroy, Terraform fails to delete those
resources because they are dependent on something outside of its control (like external service
dependencies or missing permissions).
Scenario: If a resource was manually altered outside Terraform (for example, a resource in AWS
was modified directly through the console), Terraform might not be aware of the change.
What Happens: terraform apply will create resources based on the current state, but when it
attempts to destroy the resources, it fails because Terraform tries to remove the resource that was
modified in a way that doesn’t match the Terraform state (like a different tag, name, or security
settings).
Scenario: Some cloud providers, like AWS or Azure, allow you to apply deletion protection or set
specific policies that prevent resource destruction (e.g., aws_instance with delete_protection =
true).
What Happens: During terraform apply, the resource is created without issue. But terraform
destroy will fail if deletion protection or a similar safeguard is in place, preventing the resource
from being deleted.
Scenario: Resources such as networks, volumes, or IAM roles might have been created by
Terraform but have orphaned components (like child resources) that aren't in Terraform's state.
What Happens: terraform apply successfully creates the resource, but when running terraform
destroy, it may fail because Terraform doesn't recognize or track the orphaned child resources and
thus cannot destroy them.
Scenario: Terraform's execution role may have permissions to create resources but may not have
sufficient permissions to delete them (for example, lacking the necessary IAM permissions to delete
a security group or an EC2 instance).
What Happens: The terraform apply will succeed as it creates the resources with the available
permissions, but terraform destroy fails because Terraform cannot delete the resource due to
missing permissions or access restrictions.
Real-Time Example:
Scenario: You use Terraform to create an Amazon RDS instance with a VPC that was manually
created before. The RDS instance relies on this manually created VPC.
What Happens:
o terraform apply runs successfully, creating the RDS instance.
o terraform destroy fails because Terraform doesn't manage the manually created VPC, and it
cannot delete the dependent RDS instance.
Real-Time Example:
Scenario: You deploy an AWS EC2 instance using Terraform. Later, you manually modify the
instance by enabling termination protection from the AWS Console.
What Happens:
o terraform apply successfully creates the EC2 instance.
o terraform destroy fails because termination protection prevents Terraform from deleting
the EC2 instance.
Real-Time Example:
Scenario: You create an AWS S3 bucket via Terraform. After the creation, you manually enable S3
versioning or lifecycle policies outside Terraform.
What Happens:
o terraform apply successfully creates the S3 bucket.
o terraform destroy fails because the lifecycle policies or versioning settings prevent the
bucket from being deleted without manual intervention.
Real-Time Example:
Scenario: You use Terraform to create an AWS Load Balancer (ELB) and Security Groups. Later, you
manually add EC2 instances in the AWS Console to be attached to the ELB but do not manage these
instances with Terraform.
What Happens:
o terraform apply successfully creates the ELB and security groups.
o terraform destroy fails because the EC2 instances (added manually) are not managed by
Terraform and are not removed when destroying the ELB.
Real-Time Example:
Scenario: You deploy AWS resources (such as IAM roles and security groups) using a Terraform
service account that has only create permissions for certain resources but lacks delete permissions.
What Happens:
o terraform apply successfully creates the IAM roles and security groups.
o terraform destroy fails because the service account doesn't have sufficient permissions to
delete those resources, resulting in an error message about insufficient permissions.
Terraform locals are used to define values that can be reused throughout your configuration within a
module. They are essentially variables that help simplify code by storing values that don’t need to be
passed in as input or output. Locals are evaluated once and then can be used multiple times in the
configuration.
Locals allow for better readability, reusability, and maintainability of your Terraform code, especially in
larger projects.
Syntax:
locals {
  local_variable_name = value
}
You can use locals to store values like strings, numbers, maps, or lists.
Scenario: You have multiple resources that need to use a specific region, which might change based on
environment (e.g., us-east-1, us-west-2).
Example:
locals {
  region = "us-east-1"
}
In this example:
The region local is defined once and used in both the EC2 instance and the S3 bucket resource.
If the region needs to change (e.g., for different environments), you can just modify the local, and it
will be applied across all resources that use it.
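Fleshed out, that usage might look like the following sketch (the resource names, AMI ID, and bucket name are illustrative placeholders, not from the original):

```hcl
locals {
  region = "us-east-1"
}

provider "aws" {
  region = local.region
}

resource "aws_instance" "web" {
  ami               = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type     = "t3.micro"
  availability_zone = "${local.region}a" # derived from the shared local
}

resource "aws_s3_bucket" "assets" {
  bucket = "my-app-assets-${local.region}" # bucket names must be globally unique
}
```

Changing local.region once now updates the availability zone and the bucket name together.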
Scenario: You need to calculate some complex values like subnet CIDR blocks based on a network size.
Example:
locals {
  base_network = "10.0.0.0/16"
  subnet_mask  = 8
  subnet_cidr  = cidrsubnet(local.base_network, local.subnet_mask, 1) # "10.0.1.0/24"
}
Here:
local.subnet_cidr is calculated using the cidrsubnet() function, which is a local expression based on
a given base network and subnet mask.
This allows for easy management of network ranges, as any changes to the base network or mask
are automatically reflected in all calculations using local.subnet_cidr.
Scenario: You want to conditionally set values based on environment or other conditions.
Example:
locals {
  environment = "production" # Can be "development", "staging", etc.

  # Illustrative conditional: pick a value based on the environment
  instance_type = local.environment == "production" ? "t3.large" : "t3.micro"
}
In this case:
The instance_type local is chosen with a conditional expression, so switching the environment value automatically switches the derived value everywhere it is used.
Scenario: You want to store a list of tags and use them in multiple places.
Example:
locals {
  common_tags = {
    "Environment" = "production"
    "Project"     = "Terraform-Project"
  }
}
Here:
The common_tags local is used for both the EC2 instance and the S3 bucket, ensuring consistency in
tags across resources.
If you need to add more tags, you can update the local, and it will apply to all resources using
local.common_tags.
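Applying the local across resources, optionally extending it per resource with the built-in merge() function, might look like this (resource names and values are illustrative):

```hcl
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  # merge() combines the shared tags with resource-specific ones
  tags = merge(local.common_tags, {
    "Name" = "web-server"
  })
}

resource "aws_s3_bucket" "assets" {
  bucket = "terraform-project-assets"
  tags   = local.common_tags
}
```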
Reusability: Locals allow you to reuse the same value or calculation in multiple places.
Simplification: Helps make your configuration more readable and maintainable.
Avoiding Repetition: Reduces the need to repeat the same expressions or values multiple times.
Dynamic Configurations: Supports complex logic or calculations that are needed for configurations.
When you need to use a value multiple times in your Terraform configuration.
When you want to simplify expressions and calculations in your code.
When you need a dynamic value based on other resources or variables that don’t require user
input.
In real-time projects, Terraform locals can make managing infrastructure configurations easier, especially
when dealing with multiple environments, complex resource configurations, or dynamic calculations.
Setting up Terraform CI/CD pipelines using Jenkins and GitHub Actions is a great way to automate your
infrastructure provisioning and management. Below are detailed steps and best practices for configuring
CI/CD with both Jenkins and GitHub Actions.
Prerequisites:
Steps:
Jenkinsfile Example:
pipeline {
    agent any

    environment {
        TF_VERSION            = "1.6.0"
        AWS_REGION            = "us-west-2"
        AWS_ACCESS_KEY_ID     = credentials('aws_access_key_id')
        AWS_SECRET_ACCESS_KEY = credentials('aws_secret_access_key')
    }

    stages {
        stage('Checkout Code') {
            steps {
                git '[Link]'
            }
        }
        stage('Terraform Init') {
            steps {
                sh 'terraform init'
            }
        }
        stage('Terraform Validate') {
            steps {
                sh 'terraform validate'
            }
        }
        stage('Terraform Plan') {
            steps {
                sh 'terraform plan -out=tfplan'
            }
        }
        stage('Terraform Apply') {
            steps {
                input message: "Approve to apply Terraform plan?", ok: "Apply"
                sh 'terraform apply tfplan'
            }
        }
        stage('Terraform Destroy') {
            steps {
                input message: "Approve to destroy resources?", ok: "Destroy"
                sh 'terraform destroy -auto-approve'
            }
        }
    }

    post {
        always {
            cleanWs()
        }
    }
}
Checkout Code: Pulls the Terraform code from your GitHub repository.
Terraform Init: Initializes the working directory and configures the backend.
Terraform Validate: Ensures the Terraform code is valid and ready to execute.
Terraform Plan: Runs terraform plan to generate an execution plan.
Terraform Apply: Waits for manual approval to apply the plan (good practice to avoid accidental
deployments).
Terraform Destroy: Optionally, it destroys the resources after tests or when tearing down
environments.
Store AWS credentials or other sensitive values securely in Jenkins using Jenkins credentials store.
Reference the credentials in the pipeline using:
AWS_ACCESS_KEY_ID = credentials('aws_access_key_id')
AWS_SECRET_ACCESS_KEY = credentials('aws_secret_access_key')
Prerequisites:
Steps:
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      AWS_REGION: us-west-2
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Terraform Init
        run: terraform init
      - name: Terraform Validate
        run: terraform validate
      - name: Terraform Plan
        run: terraform plan -out=tfplan
      - name: Terraform Apply
        if: github.event_name == 'push'
        run: terraform apply -auto-approve tfplan
Checkout code: Checks out the Terraform code from your GitHub repository.
Set up Terraform: Uses the hashicorp/setup-terraform action to install the specified Terraform
version.
Configure AWS credentials: Uses aws-actions/configure-aws-credentials to authenticate with AWS
using secrets stored in GitHub.
Terraform Init: Initializes Terraform and sets up the backend.
Terraform Validate: Validates the Terraform configuration files.
Terraform Plan: Runs terraform plan to generate an execution plan.
Terraform Apply: Applies the Terraform plan if the action is triggered by a push event.
Terraform Destroy: Optionally, you can add a step to destroy resources after the deployment is
complete.
Store sensitive values such as AWS credentials in GitHub Secrets for secure access in the workflow.
Go to your GitHub repository, navigate to Settings > Secrets and variables > Actions, and add
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
The workflow is triggered on push and pull_request events to the main branch.
You can modify the workflow to trigger on other branches or events as needed.
Jenkins: Store the Terraform state in a remote backend such as S3 with DynamoDB for state
locking.
GitHub Actions: Use Terraform Cloud or S3 with DynamoDB to manage the state remotely and
prevent conflicts with state files.
Avoid automating terraform apply directly for production resources. Always require manual
approval before applying changes to production.
Use GitHub Secrets (in GitHub Actions) or the Jenkins credentials store to store sensitive values
such as AWS credentials.
Separate the terraform plan and terraform apply stages. This allows teams to review the plan
before applying changes, minimizing errors.
Organize your Terraform code into modules to promote reuse and better organization.
Share common infrastructure components (e.g., VPC, IAM roles, EC2 instances) as reusable
modules in your pipeline.
Isolate Environments
Use Terraform workspaces or separate configurations for different environments (e.g., dev, prod,
staging) to isolate state and configuration.
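As a sketch, the built-in terraform.workspace value can drive per-environment settings inside a single configuration (the resource name and instance sizes here are arbitrary examples):

```hcl
locals {
  # Pick a larger size only in the prod workspace
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t3.micro"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = local.instance_type

  tags = {
    Environment = terraform.workspace
  }
}
```

Each workspace keeps its own state, so dev and prod deployments of the same code do not collide.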
Integrate terraform fmt and terraform validate in your pipeline to ensure your code follows best
practices and is correctly formatted.
Conclusion
By combining Jenkins and GitHub Actions with best practices for Terraform CI/CD, you can effectively
automate your infrastructure deployment and management processes. Use manual approvals for critical
steps like terraform apply and leverage remote backends for state management to prevent conflicts.
Always store credentials securely and ensure that your infrastructure is tested and validated throughout
the pipeline.
What are dynamic blocks in Terraform and what are the best practices?
In Terraform, dynamic blocks allow you to generate nested configuration blocks (such as ingress rules
inside a security group) dynamically, based on a variable, list, or map. This is useful when you need to
produce multiple copies of a nested block from input data instead of writing each one by hand.
Dynamic blocks allow you to generate repetitive sections of Terraform code. Instead of manually defining
each block for each resource, a dynamic block can loop over a list of values (or any other structure) to
create those blocks.
dynamic "block_type" {
  for_each = <list or map>
  content {
    <block content>
  }
}
block_type: The name of the block you want to generate dynamically (e.g., security_group_rule,
ingress, egress).
for_each: A list or map that defines the number of instances to generate.
content: The block's contents, where you can reference values from the for_each expression.
Consider a scenario where you're provisioning an AWS security group and you want to create multiple
ingress rules dynamically.
variable "ingress_rules" {
  type = list(object({
    cidr     = string
    port     = number
    protocol = string
  }))
  default = [
    { cidr = "0.0.0.0/0", port = 80, protocol = "tcp" },
    { cidr = "0.0.0.0/0", port = 443, protocol = "tcp" }
  ]
}
resource "aws_security_group" "example" {
  name = "example-sg" # illustrative name

  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = ingress.value.protocol
      cidr_blocks = [ingress.value.cidr]
    }
  }
}
The ingress_rules variable holds a list of objects containing cidr, port, and protocol.
The dynamic "ingress" block loops over the ingress_rules variable and creates a security group rule
for each entry.
Let's say you need to assign tags dynamically. The tags are stored in a variable as a map, and you want to
apply them dynamically. Note that on most AWS resources, such as aws_instance, tags is a plain map
argument and can simply be set with tags = var.tags; a dynamic block is needed where tags are nested
blocks, as with the tag block of an aws_autoscaling_group.
variable "tags" {
  type = map(string)
  default = {
    "Environment" = "production"
    "Owner"       = "DevOps"
    "App"         = "MyApp"
  }
}
resource "aws_autoscaling_group" "example" {
  # ... min_size, max_size, and other required arguments ...

  dynamic "tag" {
    for_each = var.tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}
In this example, a tag block is generated dynamically for each key-value pair in the tags map variable.
Dynamic Blocks: Help automate and simplify resource creation when dealing with repeating blocks
based on data structures like lists and maps.
Best Practices: Use dynamic blocks for complex and variable configurations, avoid overuse, validate
input data, and keep logic simple.
Common Use Cases: Security group rules, tags, load balancer listeners, and auto-scaling
configurations are some common examples.
By using dynamic blocks, you can reduce code duplication and improve the maintainability of your
Terraform configurations.
Terraform Drift refers to a situation where the actual state of your infrastructure diverges from the state
managed by Terraform. This typically occurs when resources are modified or changed outside of Terraform
(e.g., manually from the cloud provider's console or through other tools). As a result, Terraform’s state no
longer accurately represents the real-world configuration of resources.
Manual changes: A user manually updates the configuration of an infrastructure resource outside
of Terraform.
Automatic updates: Cloud providers or other systems automatically modify resources (e.g., scaling,
updates to managed services).
External systems: Other automation tools or CI/CD pipelines may modify the resources.
Drift can lead to issues when you run terraform plan or terraform apply because Terraform may try to re-
apply or revert changes to align resources with the defined configuration, causing potential conflicts or
errors.
1. Inconsistency: Terraform cannot manage resources that have been changed manually, causing
confusion and lack of alignment.
2. Loss of Control: Manual changes mean the desired state described in the Terraform configuration is
not reflected in reality.
3. Risk of Overwriting: If drift is not detected, Terraform could overwrite the manual changes made to
a resource, leading to unintended consequences.
Terraform does not automatically detect drift, but you can manually check for it using the following
methods:
Using terraform plan: Run terraform plan to compare your current infrastructure state with the
state defined in your Terraform configuration. This will show you any differences.
terraform plan
If any drift has occurred, Terraform will detect the differences and show what it intends to change.
Using terraform refresh: You can run terraform refresh to update the state file with the latest state
of the infrastructure.
terraform refresh
This will sync your Terraform state with the actual state of the infrastructure. However, this does
not fix drift; it only updates Terraform's state file. (Note: terraform refresh is deprecated since
Terraform v0.15.4; terraform plan -refresh-only and terraform apply -refresh-only are the
recommended replacements.)
Third-party tools: Tools such as driftctl, or the drift detection built into Terraform Cloud/Enterprise,
can scan your cloud environment and check for drift across resources.
2. Managing Drift
Once drift has been detected, you have several options for managing it:
a. Manual Intervention
Inspect and Correct Drift: Review the drift and either manually revert the changes in the cloud
provider’s console or use Terraform to bring the infrastructure back to the desired state.
o If changes were intentional (e.g., for scaling or urgent fixes), you can update your Terraform
configuration to match the new reality and prevent future drift.
o If changes were not intentional, use Terraform to re-align the resource with the desired
state.
After detecting drift, you can apply the changes from Terraform to correct the infrastructure back
to the defined configuration by running:
terraform apply
This will reconcile the differences between the actual state and the desired configuration.
If drift occurred due to missing changes in your Terraform configuration, you can modify the
Terraform code to match the actual state and then apply the changes.
When multiple teams or automation tools are interacting with Terraform, ensure state locking is
enabled to prevent concurrent modifications that could cause drift.
For example, if using AWS S3 as a backend, enable state locking with DynamoDB:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "path/to/my/key"
    region         = "us-west-2"
    dynamodb_table = "my-lock-table"
  }
}
This ensures that only one process can modify the state at a time.
Store Terraform Configurations in Version Control: Ensure all your Terraform code is stored in
version control (e.g., Git). This enables you to track changes and catch potential drift issues caused
by unapproved changes.
Code Review Process: Implement a code review process for all changes to your Terraform
configuration. This can help prevent unauthorized or manual changes to infrastructure.
Immutable Infrastructure: One of the key principles of Infrastructure as Code (IaC) is the idea of
immutable infrastructure, where resources should not be manually modified once they are
created. Instead of making changes manually, any required changes should be done through
Terraform.
Automate Rollbacks: If manual changes need to be applied, they should be tracked, and changes
should be rolled back through automation tools rather than manual intervention.
Policy as Code: Terraform Cloud/Enterprise offers the ability to define policies that prevent
unauthorized changes from being made outside of Terraform. This ensures that infrastructure is
only managed by the defined Terraform configurations.
Run terraform refresh periodically to ensure your Terraform state is in sync with the actual
infrastructure. This will help catch drift early and avoid surprises when running terraform plan or
terraform apply.
Set up drift detection tooling (for example, driftctl or Terraform Cloud's drift detection) to
periodically check for drift in your infrastructure.
Some teams use external monitoring tools (e.g., Datadog, CloudWatch) to alert them when a
resource's configuration changes unexpectedly.
Infrastructure as Code with CI/CD: Build a CI/CD pipeline for Terraform that ensures your
infrastructure is continuously tested and deployed, avoiding manual changes. When changes need
to be applied, they go through the CI/CD pipeline, ensuring that they are tracked and versioned.
Automate Terraform Runs: Use a tool like Jenkins, GitLab CI, or GitHub Actions to trigger Terraform
runs as part of your CI/CD pipeline, ensuring that your infrastructure is always aligned with your
desired state.
Restrict Access: Ensure that only authorized users or systems can make manual changes to
infrastructure. Use tools like IAM policies, service accounts, and roles to limit manual access to
critical resources.
Summary
Terraform Drift occurs when changes are made to your infrastructure outside of Terraform, causing a
mismatch between the actual state and the state managed by Terraform.
In Terraform, dependencies are critical for ensuring that resources are created, updated, or destroyed in
the correct order. Terraform has built-in mechanisms for managing dependencies between resources, both
implicitly and explicitly. Understanding the different types of dependencies in Terraform helps you control
the flow of resource creation and modification.
1. Implicit Dependencies
2. Explicit Dependencies
3. Data Source Dependencies
4. Provisioner Dependencies
5. Module Dependencies
1. Implicit Dependencies
Implicit dependencies are automatically created by Terraform when one resource refers to another. This
means Terraform understands that one resource needs to be created before another because of the
reference between them.
How it works: When one resource refers to the output of another resource (such as using the id of
a resource), Terraform automatically establishes an implicit dependency.
Example: In this example, the aws_security_group resource depends on the creation of an aws_vpc
because the security group references the VPC ID.
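A minimal sketch of such a configuration (the resource names and CIDR are illustrative):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_security_group" "example" {
  name   = "example-sg"
  vpc_id = aws_vpc.main.id # implicit dependency on the VPC
}
```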
Here, the security group implicitly depends on the VPC because it uses the id attribute of the
aws_vpc resource as its vpc_id. Terraform will automatically create the VPC first and then the security group.
2. Explicit Dependencies
Explicit dependencies are defined using the depends_on argument, which allows you to manually specify
the order in which resources should be created, updated, or destroyed. While Terraform generally handles
the dependency graph automatically, the depends_on argument is useful when implicit relationships are
not enough or when you need to enforce a specific order of execution.
How it works: You explicitly tell Terraform that one resource depends on another, ensuring that the
dependent resource is processed first.
Example: In this example, the aws_security_group explicitly depends on the aws_vpc and
aws_instance resources being created first.
resource "aws_security_group" "example" {
  name = "example-sg" # illustrative name

  depends_on = [
    aws_vpc.main,
    aws_instance.example
  ]
}
In Terraform, data sources allow you to fetch data from an external system (e.g., from AWS, Google Cloud,
etc.) without creating or managing resources in the state. Data sources often introduce dependencies
because the data retrieved can be used to configure other resources.
How it works: If a resource references a data source, an implicit dependency is established on the
data source.
Example: Here, the aws_security_group references a data source to find an existing security group,
creating an implicit dependency.
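A hedged sketch of such a lookup (the group name, resource names, and AMI ID are placeholders):

```hcl
data "aws_security_group" "existing" {
  name = "default" # look up an existing security group not managed by this configuration
}

resource "aws_instance" "example" {
  ami                    = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type          = "t3.micro"
  vpc_security_group_ids = [data.aws_security_group.existing.id] # implicit dependency on the data source
}
```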
4. Provisioner Dependencies
Provisioners are used to execute scripts or commands on a resource after it has been created or updated.
While provisioners are not recommended for general use (as they can create dependencies that Terraform
cannot track), they can create implicit dependencies, especially when they execute actions based on
resource creation.
How it works: The provisioners often create dependencies on other resources because the
commands might depend on data or state from other resources.
Example: Here, a provisioner depends on an instance being created to run a script.
resource "aws_instance" "example" {
  # ... ami, instance_type, and other arguments ...

  provisioner "remote-exec" {
    inline = [
      "echo 'Hello, World!'"
    ]

    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("~/.ssh/id_rsa")
      host        = self.public_ip
    }
  }
}
In this case, the provisioner depends on the aws_instance.example being created first before
running the script.
5. Module Dependencies
When you use modules in Terraform, dependencies can be established between different modules. This is
particularly useful for organizing large configurations, where one module’s output may be another
module’s input.
How it works: Modules have outputs that can be passed as inputs to other modules, creating an
explicit dependency.
Example: Here, a VPC module is used in one module, and the output from that VPC is passed into
another module (e.g., to create instances in the VPC).
module "vpc" {
  source     = "./vpc"
  cidr_block = "10.0.0.0/16"
}
module "instances" {
  source = "./instances"
  vpc_id = module.vpc.vpc_id
}
In this case, the instances module explicitly depends on the vpc module because it needs the vpc_id
output to create the instances inside the VPC.
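For this to work, the vpc module must declare the output being consumed. A sketch of its outputs file, assuming the module creates a resource named aws_vpc.main:

```hcl
# ./vpc/outputs.tf
output "vpc_id" {
  value = aws_vpc.main.id
}
```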
1. Use Implicit Dependencies: Let Terraform automatically handle dependencies whenever possible.
Terraform’s built-in understanding of resource dependencies usually results in a cleaner and more
maintainable configuration.
2. Use depends_on Sparingly: Explicit dependencies using depends_on should be used only when
Terraform cannot automatically infer the correct order. Overusing depends_on can make your code
more complex and harder to maintain.
3. Avoid Manual Changes: Try to avoid manually changing infrastructure that is managed by
Terraform. Manual changes can break the implicit dependencies Terraform creates.
4. Modules for Reusability: Leverage modules to encapsulate reusable logic and pass values between
modules explicitly. This helps maintain clear dependencies and improves the modularity of your
Terraform configuration.
5. Use Data Sources Wisely: When using data sources, make sure to handle dependencies correctly.
Since data sources are often used to retrieve external data, they may create an implicit dependency
chain.
6. Limit Provisioner Usage: Provisioners are often not ideal for creating dependencies. They can cause
issues in managing state and should only be used in cases where it's absolutely necessary (e.g.,
bootstrapping a system).
Summary
Terraform manages dependencies using both implicit and explicit methods. Implicit dependencies are
automatically handled when resources reference each other, while explicit dependencies are manually
defined using depends_on. Additionally, dependencies can exist in data sources, provisioners, and
modules. By understanding and managing these dependencies effectively, you can ensure that your
Terraform configurations are more reliable, maintainable, and scalable.