Master Terraform — Notes and Real-World Practices

Level up your Infrastructure as Code skills with concise notes and practical examples

Naveen R

Senior Software Engineer (DevOps Focus) | Tech Enthusiast


LinkedIn: [Link]
Medium: @stackcouture [Link]

Terraform Guide

What is Terraform?

 Terraform is an IT infrastructure automation tool for building, changing, and versioning infrastructure
safely and efficiently
 Terraform can manage existing service providers as well as custom in-house solutions
 It treats your infrastructure as code (IaC), i.e. your computing environment has some of the same
attributes as your application:
a. Your infrastructure is versionable
b. Your infrastructure is reusable
c. Your infrastructure is testable
d. Minimizes errors and security violations
 You only need to tell Terraform what the desired state should be, not how to achieve it
 Terraform is cloud-agnostic

Features of Terraform

Infrastructure as Code
 Infrastructure is described using a high-level configuration syntax
 Provides single unified syntax

Executes Plans
 Terraform has a “planning” step where it generates an execution plan
 The execution plan shows what Terraform will do before making the actual changes

Resource Graph
 Terraform builds a graph of all your resources, and parallelizes the creation and modification of
any non-dependent resources
 Terraform builds infrastructure as efficiently as possible
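The graph arises from references between resources: when one resource uses another's attributes, Terraform records a dependency edge and orders creation accordingly. A minimal sketch (the resource names and AMI ID here are illustrative, not from the document):

```hcl
resource "aws_security_group" "web_sg" {
  name = "web-sg"
}

resource "aws_instance" "web" {
  ami           = "ami-0c55b159cbfafe1f0"  # illustrative AMI ID
  instance_type = "t2.micro"

  # Referencing the security group's ID creates an implicit dependency
  # edge, so the group is created before the instance; unrelated
  # resources are created in parallel.
  vpc_security_group_ids = [aws_security_group.web_sg.id]
}
```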

Change Automation
 Complex change sets can be applied to your infrastructure with minimal human interaction
 With the previously mentioned execution plan and resource graph, you know exactly what
Terraform will change and in what order, avoiding many possible human errors

Terraform Workflow

 Write: Infrastructure as code
 Plan: Preview changes before applying
 Apply: Provision reproducible infrastructure


HCL ( HashiCorp Configuration Language )

 Terraform code is written in a language called HCL in files with the extension .tf
 Terraform can also read JSON configurations named with the .tf.json extension
 It is a declarative language, so your goal is to describe the infrastructure you want, and
Terraform will figure out how to create it
 HCL syntax is composed of blocks that define a configuration in Terraform
 Blocks are composed of key = value pairs
 Single-line comments start with # (or //); multi-line sections are commented with /* and */
 Strings are in double quotes; Boolean values: true, false
 List values are made with square brackets ([]), for example ["foo", "bar", "baz"]
 Maps can be made with braces ({}), for example {"foo" = "bar", "bar" = "baz"}
 Strings can interpolate other values using syntax wrapped in ${}, for example ${[Link]}
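The syntax rules above can be sketched in one small configuration (the variable names and values are illustrative, not from the document):

```hcl
# Single-line comment
// Also a single-line comment
/* Multi-line
   comment */

variable "tags" {
  type    = list(string)                      # a list value
  default = ["foo", "bar", "baz"]
}

variable "owners" {
  type    = map(string)                       # a map value
  default = { "foo" = "bar", "bar" = "baz" }
}

variable "name" {
  type    = string
  default = "web"
}

output "greeting" {
  value = "Hello, ${var.name}!"               # string interpolation
}
```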

Infrastructure as Code (IaC) tools like Terraform provide several key benefits:

1. Consistency and Reproducibility

 Eliminates Human Error: By defining infrastructure in code, you reduce the chances of errors that
typically occur during manual configuration.
 Version Control: Infrastructure configurations are stored in source control (e.g., Git), making it easy
to track changes, roll back, and collaborate with teams.

2. Automation and Efficiency

 Automated Provisioning: Terraform allows for the automated provisioning of resources across
multiple cloud providers, reducing the time and effort required for manual setup.
 Repeatable Deployments: You can easily spin up identical environments for different stages of
development (e.g., dev, staging, production), ensuring consistency.

3. Scalability and Flexibility

 Infrastructure Scaling: With IaC, you can scale resources automatically based on changing needs by
modifying the configuration and running terraform apply to update the infrastructure.
 Multi-cloud Support: Terraform supports multiple providers (AWS, Azure, Google Cloud,
Kubernetes, etc.), enabling a multi-cloud or hybrid infrastructure without vendor lock-in.

4. Collaboration and Versioning

 Collaboration: Teams can collaborate on infrastructure changes through pull requests and code
reviews, enhancing team communication and workflows.
 Versioned Infrastructure: By using version control, changes to infrastructure can be tracked over
time, and specific versions of infrastructure can be deployed.


5. Documentation

 Declarative Configuration: The infrastructure code itself serves as documentation for how the
system is set up and configured. This is much clearer and more up-to-date than manually written
documentation.
 Easier Onboarding: New team members can review the code to quickly understand how the
infrastructure is set up.

6. Cost Control

 Cost Management: With Terraform’s ability to define and manage infrastructure in code, you can
easily view the resources you’ve provisioned and ensure that only necessary resources are being
used, reducing waste and avoiding over-provisioning.
 Plan and Review: Before applying changes, Terraform can show a plan of what will be changed,
added, or destroyed, allowing teams to review and assess any cost impacts.

7. Security and Compliance

 Enforcement of Standards: Using IaC allows teams to enforce security and compliance policies as
part of the infrastructure configuration (e.g., ensuring encryption is enabled, access controls are set
up).
 Automated Audits: With version-controlled infrastructure, it's easy to conduct audits and track
who made what changes, helping to maintain security and compliance standards.

8. Infrastructure Lifecycle Management

 State Management: Terraform keeps track of the current state of the infrastructure, making it
easier to manage complex dependencies and make incremental changes without manually
updating resources.
 Easy Updates: You can apply changes incrementally with terraform plan, ensuring only the
necessary updates are made to your infrastructure.

These benefits combine to make infrastructure management more efficient, reliable, and easier to
maintain, especially as environments grow more complex.

What are the core items of Terraform?

The core items of Terraform are the fundamental components that make up the infrastructure as code
(IaC) workflow. These components are essential for defining, managing, and deploying infrastructure in a
consistent and automated manner.

1. Providers

 Definition: Providers are responsible for interacting with external APIs to manage the
infrastructure. Each provider is specific to a cloud platform or service (e.g., AWS, Azure, Google
Cloud, Kubernetes, etc.).


 Function: They define the resources you can manage, such as EC2 instances in AWS or virtual
machines in Azure.
 Example:

provider "aws" {
region = "us-east-1"
}

2. Resources

 Definition: Resources represent individual infrastructure components that you want to manage
with Terraform, such as virtual machines, networks, or storage buckets.
 Function: Resources are the building blocks of your infrastructure and are the actual items that are
created, modified, or destroyed.
 Example:

resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "[Link]"
}

3. Variables

 Definition: Variables allow you to parameterize your Terraform configurations, making them
reusable and dynamic by defining values that can be passed into your configuration.
 Function: They allow you to customize values like region, instance type, etc., without hardcoding
them into the Terraform configuration.
 Example:

variable "instance_type" {
type = string
default = "[Link]"
}
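Beyond a default, variables can carry a description and (since Terraform 0.13) validation rules that reject bad inputs before any plan is made. A hedged sketch; the constraint below is illustrative:

```hcl
variable "instance_type" {
  type        = string
  default     = "t2.micro"
  description = "EC2 instance type for the example server"

  validation {
    # Illustrative rule: only allow t2.* or t3.* instance families
    condition     = can(regex("^t[23]\\.", var.instance_type))
    error_message = "Only t2.* or t3.* instance types are allowed."
  }
}
```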

4. Outputs

 Definition: Outputs are used to expose important information about the infrastructure after a
Terraform run, such as IP addresses or resource IDs.
 Function: They allow you to capture and display values for use in other processes, like showing the
public IP of an EC2 instance.
 Example:

output "instance_ip" {
value = aws_instance.example.public_ip
}
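Outputs can also carry a description and be marked sensitive, in which case Terraform redacts the value in CLI output while still storing it in state. A sketch; var.db_password is an assumed variable, not from the document:

```hcl
output "instance_ip" {
  description = "Public IP of the example instance"
  value       = aws_instance.example.public_ip
}

output "db_password" {
  # Marked sensitive so the value is redacted in `terraform apply` output
  value     = var.db_password   # illustrative variable
  sensitive = true
}
```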

5. Modules


 Definition: A module is a container for multiple Terraform resources that are used together.
Modules can be reused across different configurations, allowing for code organization and sharing.
 Function: Modules help in organizing complex Terraform configurations and in reusing code. You
can use both local modules (in your own codebase) and external modules (from the Terraform
Registry).
 Example:

module "vpc" {
source = "terraform-aws-modules/vpc/aws"
name = "my-vpc"
cidr = "[Link]/16"
}
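A module's outputs can then be referenced elsewhere in the configuration via the module. prefix. Assuming the registry module above exposes a public_subnets output (as the terraform-aws-modules/vpc module does), a sketch:

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0c55b159cbfafe1f0"   # illustrative AMI ID
  instance_type = "t2.micro"

  # Place the instance in the first public subnet created by the module
  subnet_id     = module.vpc.public_subnets[0]
}
```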

6. State

 Definition: Terraform state is a representation of the current state of your infrastructure.
Terraform uses it to map the real-world infrastructure to your configuration and to detect changes.
 Function: The state file keeps track of the resources Terraform manages and is critical for
operations like terraform plan and terraform apply. It helps in determining what changes need to
be made to the infrastructure.
 Example: The state is typically stored in a file called [Link], but it can also be stored
remotely (e.g., AWS S3) in a remote backend for better collaboration.

7. Data Sources

 Definition: Data sources allow you to query existing infrastructure outside of Terraform’s
management. This can be data from the cloud provider that you want to reference in your
configuration.
 Function: They are used when you need to get information about existing resources (e.g., the ID of
a subnet or a security group).
 Example:

data "aws_ami" "latest" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
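The queried value is then referenced through the data. prefix; for example, feeding the looked-up AMI into an instance (the instance type below is illustrative):

```hcl
resource "aws_instance" "from_lookup" {
  # Use the most recent Amazon Linux 2 AMI found by the data source
  ami           = data.aws_ami.latest.id
  instance_type = "t2.micro"
}
```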

8. Backend

 Definition: The backend in Terraform determines where the Terraform state is stored. It defines
how and where the state file is managed (locally or remotely).
 Function: It helps with state management, enables collaboration, and provides features like state
locking to prevent concurrent changes.


 Example (AWS S3 backend):

terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "state/[Link]"
    region = "us-east-1"
  }
}

9. Provisioners

 Definition: Provisioners allow you to execute scripts or commands on the resources after they are
created or updated. They are typically used for bootstrapping instances or configuring
infrastructure after it's been provisioned.
 Function: Provisioners are generally used to install software, configure services, or run initialization
tasks on newly created resources.
 Example:

resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "[Link]"

  provisioner "remote-exec" {
    inline = [
      "echo Hello, world!"
    ]
  }
}

Conclusion

The core items of Terraform are the fundamental components that enable the creation, modification, and
management of infrastructure. These include:

 Providers (to interact with APIs)
 Resources (representing infrastructure components)
 Variables (for parameterization)
 Outputs (for exposing useful information)
 Modules (for organizing and reusing code)
 State (to keep track of the infrastructure's real state)
 Data Sources (to fetch information about existing resources)
 Backend (to store state remotely)
 Provisioners (for resource initialization).


What is the primary purpose of a Terraform workflow in a team setting?

In a team environment, a Terraform workflow establishes a standardized approach for teams to
collaboratively write, review, and apply infrastructure changes. It typically involves the following steps:

1. Write Code: Developers and operators write Terraform configuration files to define the
infrastructure.
2. Version Control: These files are stored in a version control system (like Git), ensuring changes are
tracked and versioned.
3. Plan: The terraform plan command is used to preview changes before applying them, which helps
teams assess the impact of changes.
4. Review: Changes are reviewed by team members through pull requests or code reviews, ensuring
correctness, security, and alignment with best practices.
5. Apply: Once reviewed, the terraform apply command is run to provision or update the
infrastructure.

State Management: The state of the infrastructure is managed and updated in a shared backend (e.g.,
Terraform Cloud, S3), ensuring all team members work with the most up-to-date infrastructure.
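For the shared-backend step, a common pattern is an S3 backend with DynamoDB state locking, so two teammates cannot apply concurrently against the same state. A hedged sketch; the bucket, key, and table names are illustrative:

```hcl
terraform {
  backend "s3" {
    bucket         = "team-terraform-state"   # illustrative bucket name
    key            = "prod/terraform.tfstate" # illustrative state key
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"        # enables state locking
    encrypt        = true                     # encrypt state at rest
  }
}
```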

We need multiple provider instances in Terraform for several important reasons:

1. Manage Multiple Environments (e.g., Dev, Staging, Prod)

 Separate Configurations: You might want to manage different environments (such as development,
staging, and production) that require separate configurations, resources, or access controls. Using
multiple provider instances, you can define different provider configurations for each environment
to ensure isolation and proper resource management.
 Environment-specific Variables: Each environment might use different credentials, regions, or
resource configurations. Multiple provider instances allow you to handle these differences without
affecting other environments.

2. Target Resources in Different Regions, Accounts, or Endpoints

 Region-Specific Resources: For cloud providers like AWS, Azure, or Google Cloud, resources are
often created within specific regions. You may need to manage resources in multiple regions within
the same provider, and using multiple provider instances allows you to do this seamlessly.
 Cross-Account/Endpoint Management: You can configure different provider instances to manage
resources in different cloud accounts or across different endpoints (e.g., managing resources
on-premises in addition to the cloud). This flexibility is key for organizations that operate across
multiple accounts or have hybrid cloud environments.
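Within a single provider, multiple instances are declared with alias and selected per resource through the provider meta-argument. A sketch with illustrative regions and AMI IDs:

```hcl
provider "aws" {
  region = "us-east-1"          # default provider instance
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"          # second instance of the same provider
}

resource "aws_instance" "east" {
  ami           = "ami-0c55b159cbfafe1f0"  # illustrative AMI ID
  instance_type = "t2.micro"
}

resource "aws_instance" "west" {
  provider      = aws.west                 # use the aliased instance
  ami           = "ami-0d1e2f3a4b5c6d7e8"  # illustrative AMI ID
  instance_type = "t2.micro"
}
```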

3. Managing Multiple Clouds (e.g., AWS, Azure, Google Cloud)

 Multi-cloud Architecture: In modern infrastructures, organizations often leverage more than one
cloud provider (e.g., using AWS for compute, Azure for networking, and Google Cloud for machine


learning). With multiple provider instances, Terraform allows you to manage resources across these
different platforms within a single project.
 Consistency Across Providers: You can use a unified Terraform workflow to ensure that
infrastructure is defined in code and deployed consistently across multiple clouds.

4. Isolation of Resource Configurations

 Clear Separation of Responsibilities: If you're managing resources in different contexts (e.g., one
provider for core infrastructure and another for a specific service), using multiple provider instances
helps maintain a clean separation of configurations. This can help avoid accidental resource
conflicts or mismanagement.
 Granular Control: Each provider instance can have its own set of configurations, such as
authentication credentials, access keys, or region settings. This level of granularity makes it easier
to control and manage your infrastructure.

Example:

To manage resources in both AWS and Azure, you might define two provider instances:

provider "aws" {
region = "us-west-2"
profile = "my-aws-profile"
}

provider "azurerm" {
features {}
}

resource "aws_instance" "my_instance" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "[Link]"
}

resource "azurerm_virtual_machine" "my_vm" {
  name                = "my-vm"
  location            = "East US"
  resource_group_name = "my-resource-group"
  ...
}

In this case, you have separate provider blocks for AWS and Azure, each managing its respective resources.


Describe the lifecycle of a Terraform resource.

The lifecycle of a Terraform resource describes how Terraform manages the creation, modification, and
destruction of infrastructure resources over time. It consists of the following phases:

1. Configuration

You define the desired infrastructure in .tf files using HCL (HashiCorp Configuration Language).

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "[Link]"
}

2. Initialization (terraform init)

Terraform initializes the working directory, downloads provider plugins, and prepares the environment.

3. Planning (terraform plan)

Terraform compares the current state (in the .tfstate file) to your configuration and shows an execution
plan.

It determines what to:

 Create (+)
 Update (~)
 Destroy (-)

4. Apply (terraform apply)

Terraform applies the execution plan:

 Creates new resources
 Modifies existing ones
 Destroys resources that no longer exist in your configuration

Changes are made via API calls to your cloud/SaaS providers.

5. State Update

After applying, Terraform updates the state file ([Link]) to reflect the real infrastructure.

The state file is Terraform’s single source of truth.


6. Lifecycle Meta-Arguments (Optional Control)

You can influence the behavior of resources using lifecycle blocks:

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "[Link]"

  lifecycle {
    prevent_destroy       = true
    create_before_destroy = true
    ignore_changes        = [tags]
  }
}

 prevent_destroy: Stops accidental deletion
 create_before_destroy: Ensures safe replacement
 ignore_changes: Prevents re-apply on specific attribute changes

7. Destroy (terraform destroy)

Terraform tears down all managed resources and updates the state file accordingly.

Summary Table

Phase: Description
Configuration: Write desired infrastructure code
Init: Initialize directory & download providers
Plan: Show changes needed to match config
Apply: Make changes to real infrastructure
State Update: Record the new state of resources
Lifecycle Options: Add fine-grained control (prevent destroy, etc.)
Destroy: Remove all managed infrastructure

terraform state list lists all the resources currently tracked in the Terraform state file. These are the
resources Terraform is managing.


Example:

terraform state list

Output:

aws_instance.example
aws_s3_bucket.my_bucket
aws_vpc.main

How it differs from related commands:

 terraform show – Displays the full state file content, including values, but not as a simple list.
 terraform validate – Only checks whether the syntax and configuration are valid. It doesn't list
resources.
 terraform output – Shows the outputs defined in the configuration, not a list of resources.

Here are a few scenarios where terraform plan may pass, but terraform apply can still fail:

1. Permissions Issues

 Scenario: You have valid configurations, and terraform plan shows no errors, but when you apply,
Terraform tries to create or modify resources, and it encounters permission issues in your cloud
provider (e.g., AWS IAM).
 Example:
o terraform plan successfully shows what will be created.
o When you run terraform apply, Terraform tries to create an EC2 instance, but the IAM role
used by Terraform doesn't have sufficient permissions to create EC2 instances.
 Error during terraform apply:

Error: AccessDenied: User is not authorized to perform: ec2:RunInstances

2. Provider Configuration Issues

 Scenario: You may have valid provider configurations, and terraform plan may succeed. However,
there could be an issue with provider authentication or other settings that only become evident
when you attempt to apply the changes.
 Example:
o terraform plan successfully generates a plan for resource creation.
o When you run terraform apply, Terraform can't authenticate with the provider because
your credentials have expired or are not set up properly.
 Error during terraform apply:

Error: Authentication Failed: Invalid credentials


3. Resource Dependencies Not Met

 Scenario: You might have defined resources with dependencies, but those dependencies aren't
satisfied at the time of terraform apply.
 Example:
o terraform plan runs successfully because Terraform just checks the configuration and finds
no issues.
o When you run terraform apply, Terraform tries to create a resource that depends on
another resource that hasn't been created yet due to an unmet dependency or missing
configuration.
 Error during terraform apply:

Error: Resource 'aws_security_group' not found

4. Out-of-Sync Infrastructure (Drift)

 Scenario: Infrastructure was manually modified or deleted outside Terraform (drift). If the drift is not
reflected in the state, terraform plan may show no changes, but terraform apply fails when it tries to
reconcile with resources that no longer exist.
 Example:
o terraform plan runs and shows no changes, but manually deleting a resource (e.g., an EC2
instance) doesn't update the state.
o terraform apply tries to modify or delete the resource, and AWS fails because the resource
doesn't exist anymore.
 Error during terraform apply:

Error: Instance "i-xxxxxxxxxxxx" not found

5. Resource State Mismatch

 Scenario: There is a discrepancy between the local state file and the actual infrastructure due to
state file corruption or manipulation.
 Example:
o terraform plan runs and assumes the state is up-to-date.
o During terraform apply, Terraform detects that the state file is out of sync with the actual
infrastructure and fails to apply the changes.
 Error during terraform apply:

Error: State conflict detected. The state file is out of sync with real infrastructure.

6. Terraform Resource Update Limitations

 Scenario: Certain resources might have limitations when it comes to updates. Terraform can plan
updates, but the provider might prevent it.
 Example:
o You modify a resource that has restrictions on in-place updates (like changing the size of a
managed disk in Azure).
o terraform plan shows the change.


o When you run terraform apply, it fails because the resource can't be updated in-place, and
Terraform tries to recreate it.
 Error during terraform apply:

Error: Cannot modify 'disk_size' of managed disk. It must be recreated.

7. Race Conditions with Resources

 Scenario: If there are multiple teams or processes interacting with the same infrastructure (e.g.,
creating resources in parallel), Terraform's plan might pass, but terraform apply might fail due to a
resource being modified or deleted by another process in the meantime.
 Example:
o terraform plan shows that everything is ready.
o In the middle of applying, a resource like an S3 bucket is deleted manually or by another
process, causing the apply step to fail.
 Error during terraform apply:

Error: BucketNotFound: The specified bucket does not exist.

8. Invalid Resource Modifications

 Scenario: You modify a resource in a way that is technically valid in the configuration, but it violates
constraints in the cloud provider or system that can only be detected during execution.
 Example:
o You change the instance type for an EC2 instance in your configuration.
o terraform plan shows the planned change.
o During terraform apply, AWS fails to stop the instance to apply the change due to a
constraint (like the instance running critical workloads).
 Error during terraform apply:

Error: Instance cannot be stopped. It is in use by a critical service.

Summary of Scenarios:

1. Permissions Issues – Terraform has no issues planning, but lacks permissions to apply changes.
2. Provider Configuration Issues – Misconfigured provider settings.
3. Resource Dependencies Not Met – Dependencies are not in place.
4. Out-of-Sync Infrastructure – Manual changes cause discrepancies.
5. State File Mismatch – State file and infrastructure are out of sync.
6. Resource Update Limitations – Certain updates can't be performed in-place.
7. Race Conditions – Resource modified/deleted by another process.
8. Invalid Resource Modifications – Changes violate cloud provider constraints.


What are Provisioners? How do you define provisioners? What are the types of provisioners?

Provisioners in Terraform are used to execute scripts or commands on resources as part of the resource
creation or modification process. They allow you to customize resources further after they are created,
such as installing software, configuring systems, or running initialization tasks. Provisioners can be applied
to any resource and are typically used to configure the environment once the resource itself is provisioned.

How to Define Provisioners

Provisioners are defined within the resource block, using a provisioner argument followed by a specific
provisioner type and configuration.

Example:

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "[Link]"

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx"
    ]
  }
}

In this example, the remote-exec provisioner is used to execute a series of commands on the newly
created AWS EC2 instance.

Types of Provisioners

1. local-exec
Executes commands on the machine running Terraform (the local machine) during the resource
creation or modification.

Example:

provisioner "local-exec" {
command = "echo 'Resource Created' > [Link]"
}

2. remote-exec
Executes commands on a remote machine after the resource is created, typically via SSH or WinRM
(for Windows). This is commonly used to configure instances post-deployment.

Example:


provisioner "remote-exec" {
  inline = [
    "sudo apt-get update",
    "sudo apt-get install -y nginx"
  ]

  connection {
    host        = aws_instance.example.public_ip
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
  }
}

3. file
Copies files from the local machine to the remote machine. It is useful when you need to upload
configuration files, scripts, or other resources to a remote instance.

Example:

provisioner "file" {
source = "[Link]"
destination = "/tmp/[Link]"
}

When to Use Provisioners

Provisioners should be used cautiously because they can create dependencies on the order of resource
creation. They should be reserved for situations where resource creation alone doesn't fully configure the
system. Additionally, it's often better to use configuration management tools (e.g., Ansible, Puppet, Chef)
for long-term infrastructure management instead of relying heavily on provisioners.

What is taint? When are resources marked tainted?

In Terraform, tainting refers to the process of marking a resource as needing to be recreated, even if it
hasn't changed according to the Terraform plan. When a resource is marked as tainted, Terraform will
destroy and recreate that resource during the next apply.

What is Taint?

A tainted resource is one that has been flagged for destruction and recreation. Tainting a resource in
Terraform means that Terraform will destroy the existing resource and re-create it on the next apply, even
if there were no changes to its configuration. This is useful in cases where the resource has been
corrupted, has become inconsistent, or needs to be re-provisioned for other reasons, even though the
configuration has not changed.


When are Resources Marked Tainted?

Resources can be marked tainted in a few ways:

1. Manual Tainting:
You can manually taint a resource by using the terraform taint command. This explicitly marks the
resource for destruction and recreation on the next terraform apply.

Example:

terraform taint aws_instance.example

In this case, the aws_instance.example resource will be marked as tainted, and it will be destroyed
and recreated the next time terraform apply is run.

2. Implicit Tainting (State Drift):
Sometimes, resources are implicitly tainted by Terraform if it detects that the actual state of the
resource has drifted from the desired configuration. This can happen due to manual changes made
to the resource outside of Terraform, or because of infrastructure issues that make the resource
unhealthy. Terraform will notice that the resource no longer matches the desired state and will
mark it as tainted so it can be replaced.
3. Failed Resource Creation or Updates:
If a resource creation or update fails during a terraform apply operation, Terraform may mark the
resource as tainted. This often happens when a resource is created but there was an issue with it
(e.g., a provisioning script failed or an API returned an error), and Terraform knows that the
resource cannot be used as is.
4. External Factors (Resource Corruption):
If external systems or manual actions (such as an operator manually deleting or modifying the
resource) cause the resource to be in a bad state, Terraform can detect this during the next plan or
apply and mark it as tainted.

How to Manage Tainted Resources

1. Recreating a Tainted Resource:
Once a resource is tainted, Terraform will recreate it during the next apply. You can check the taint
status by running:

terraform show

It will indicate which resources are tainted. To apply the changes and recreate the resource, you
simply run:

terraform apply

2. Untainting a Resource:
If you decide that a resource shouldn't be tainted anymore, you can untaint it using the terraform
untaint command:


terraform untaint aws_instance.example

3. Use in CI/CD Pipelines:
Tainting can be useful in CI/CD pipelines when you want to ensure resources are re-created each
time to maintain consistency (e.g., to get fresh environments or to resolve issues that might arise
from stale resources).

Why is Tainting Useful?

 Correcting Unhealthy Resources: If a resource is in a bad state (e.g., misconfigured, corrupted, or
non-functional), tainting forces Terraform to destroy and recreate it to ensure that the
infrastructure is in the desired state.
 Forced Reprovisioning: Tainting can be used when you need to force a resource to be re-
provisioned, such as when changes need to be applied that Terraform does not automatically
detect (e.g., manual changes).

What is a Workspace? When & how to use it?

A Workspace in Terraform is an isolated environment within a Terraform configuration that allows you to
manage multiple versions of infrastructure using the same codebase. Workspaces help you manage
different stages of infrastructure (e.g., development, staging, production) without needing to duplicate
your configuration files.

What is a Workspace?

A Workspace is essentially a container for Terraform state files. By default, Terraform uses a "default"
workspace, but you can create multiple workspaces to isolate state files for different environments. Each
workspace has its own state and set of variables, making it easier to manage different environments or
configurations.

Terraform workspaces allow you to:

 Manage different environments (e.g., dev, staging, production) with the same Terraform code.
 Keep state isolated per environment, preventing interference between environments.
 Use the same Terraform configuration for multiple environments by using workspace-specific
variables and state.

When to Use Workspaces

You might want to use workspaces in the following scenarios:

 Multi-environment deployments: When you want to use the same Terraform code to manage
different environments (e.g., separate resources for development, staging, and production).
 Testing multiple versions of infrastructure: When you need to test changes to infrastructure in a
temporary isolated environment without affecting the main infrastructure.


 Managing different configurations for a single project: When the configuration changes based on
the workspace (for example, using different resources or settings for different environments).

How to Use Workspaces

1. Creating a New Workspace You can create a new workspace using the terraform workspace new
command.

Example:

terraform workspace new dev

This will create a new workspace named dev.

2. Switching Between Workspaces To switch to an existing workspace, use the terraform workspace
select command.

Example:

terraform workspace select dev

This command switches to the dev workspace. Terraform will now manage infrastructure for the
dev environment.

3. Listing Workspaces To view all available workspaces, use the terraform workspace list command.

Example:

terraform workspace list

This will show all the workspaces that exist in the current project.

4. Workspace-Specific Variables You can define different variables for each workspace. You can use
terraform.workspace to reference the current workspace inside your Terraform configurations.

Example:

variable "instance_type" {
  type    = string
  default = "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = var.instance_type

  tags = {
    Name = "ExampleInstance-${terraform.workspace}"

}
}

In this example, terraform.workspace dynamically adds the workspace name to the Name tag, making
it clear which environment the resource belongs to.
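
A common pattern is to key settings off the workspace name with a map, so each environment gets its own value from the same codebase. The instance sizes below are assumptions for illustration:

```hcl
locals {
  # Hypothetical per-environment sizes; adjust to your needs.
  instance_types = {
    default    = "t2.micro"
    dev        = "t2.micro"
    staging    = "t2.small"
    production = "t3.large"
  }
}

resource "aws_instance" "example" {
  ami = "ami-0c55b159cbfafe1f0"

  # Fall back to t2.micro if the current workspace is not in the map.
  instance_type = lookup(local.instance_types, terraform.workspace, "t2.micro")
}
```

This avoids maintaining a separate variables file per environment for simple, per-workspace differences.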

5. State Isolation Each workspace has its own state file. When you switch workspaces, Terraform uses
a different state file, ensuring that your infrastructure for different environments does not interfere
with one another.

You can check the current workspace by running:

terraform workspace show

This will display the current workspace, such as dev, staging, or production.

6. Deleting a Workspace If you no longer need a workspace, you can delete it using the terraform
workspace delete command. Note that deleting a workspace will not delete the actual resources in
your cloud provider, just the state associated with that workspace.

Example:

terraform workspace delete dev

Example Use Case

Let’s say you are working on a project that requires separate environments for development, staging, and
production. You can use workspaces to manage these environments:

1. Create a new workspace for each environment:

terraform workspace new dev
terraform workspace new staging
terraform workspace new production

2. Use terraform.workspace in your configuration to set different variables or configurations for each
environment. For example, different instance types in each environment.
3. Switch between workspaces based on the environment you are working on:

terraform workspace select dev
terraform apply

4. Once you’re ready to deploy to production, switch to the production workspace:

terraform workspace select production
terraform apply


Best Practices

 Use workspaces to isolate different environments (e.g., development, staging, production) while
using the same Terraform codebase.
 Avoid using workspaces for multi-region or multi-tenant configurations that have complex
dependencies across different states.
 Use workspaces with caution in CI/CD pipelines, as managing state correctly in these environments
is critical to avoid accidental modifications.
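
In a pipeline, a defensive pattern (sketched here; ENV is an assumed pipeline variable holding the workspace name) is to select the workspace and create it only if it does not exist yet:

```
# Select the target workspace, creating it on first use
terraform workspace select "$ENV" || terraform workspace new "$ENV"
terraform plan -out=tfplan
```

This keeps the pipeline idempotent: the same script works on the first run and every subsequent run.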

If different teams are working on the same configuration. How do you make files to have
consistent formatting?

To ensure consistent formatting of Terraform configuration files when different teams are working on the
same codebase, you should use:

terraform fmt (Terraform Format)

This command automatically formats your .tf and .tfvars files according to the standard Terraform style
conventions.

Here's how to implement it in a team setting:

1. Use terraform fmt Locally

 Developers should run terraform fmt before committing code.


 Example:

terraform fmt -recursive

This formats all Terraform files in the current directory and subdirectories.

2. Enforce with Pre-Commit Hook

 Use a Git pre-commit hook to auto-format files before they're committed (after formatting, re-stage
the rewritten files so the commit includes them):

#!/bin/sh
# .git/hooks/pre-commit
terraform fmt -recursive
# Re-stage any Terraform files the formatter just rewrote
git add -u -- '*.tf' '*.tfvars'

3. Integrate into CI Pipeline

 Add a formatting check in your CI (e.g., GitHub Actions, GitLab CI, Jenkins) to ensure PRs have
properly formatted code:

terraform fmt -check -recursive

This fails the build if any file is incorrectly formatted.


4. Use Editor Plugins

 Most IDEs (e.g., VS Code, IntelliJ) have Terraform plugins/extensions that format code on save using
terraform fmt.

Summary:

Using terraform fmt ensures clean, consistent formatting, reduces merge conflicts, and improves
readability and collaboration across teams.

Explain difference between terraform fmt and terraform validate with examples

Great question! Both terraform fmt and terraform validate are essential Terraform commands, but they
serve very different purposes:

terraform fmt – Formatting

Purpose: Automatically formats your Terraform configuration files to follow the standard style guide.

 Use case: Keeps your code clean and consistently styled.


 Does not check for correctness, only formatting.

Example:

Before running terraform fmt:

resource "aws_instance" "example"{
ami="ami-123456"
instance_type= "t2.micro"
}

Run:

terraform fmt

After running terraform fmt:

resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = "t2.micro"
}

 It fixes spacing, alignment, and block formatting automatically.


terraform validate – Syntax & Configuration Validation

Purpose: Validates that the Terraform files are syntactically valid and internally consistent.

 Checks for errors in syntax, required arguments, or logical issues.


 Does not interact with any real infrastructure or providers.

Example:

Invalid configuration:

resource "aws_instance" "example" {
  instance_type = "t2.micro"
  # Missing required 'ami' argument
}

Run:

terraform validate

Output:

Error: Missing required argument

  on main.tf line 1, in resource "aws_instance" "example":
   1: resource "aws_instance" "example" {

The argument "ami" is required, but no definition was found.

🔄 Summary Table

| Feature               | terraform fmt                      | terraform validate                            |
|-----------------------|------------------------------------|-----------------------------------------------|
| Purpose               | Format .tf files to standard style | Check if configuration is syntactically valid |
| Fixes formatting?     | ✅ Yes                             | ❌ No                                         |
| Checks config logic?  | ❌ No                              | ✅ Yes                                        |
| Required before plan? | Optional, for clean code           | Recommended to catch errors early             |
| Modifies files?       | ✅ Yes (reformats)                 | ❌ No                                         |
| Typical use case      | Auto-format before commit          | Validate configuration before plan or apply   |


If Terraform crashes during execution, which of the following steps can you take to debug the
issue and gather more information?

If Terraform crashes during execution, there are several steps you can take to debug the issue and gather
more information to diagnose the cause of the crash. Here are the common steps:

1. Check Terraform Logs (TF_LOG)

Terraform provides detailed logging capabilities through the TF_LOG environment variable. This can give
you more insight into what Terraform was doing at the time of the crash.

 Set TF_LOG to DEBUG or TRACE to get detailed debug-level or trace-level logs:

export TF_LOG=DEBUG

Or, for even more detailed output:

export TF_LOG=TRACE

 Once set, Terraform will output detailed logs to the console, which can help identify the exact point
of failure or provide more context about the issue.
 If you want to store the log output in a file, use:

export TF_LOG_PATH=./terraform.log

This is a powerful tool for investigating crashes and troubleshooting unexpected behavior in Terraform.
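
You can also scope logging to a single command rather than exporting it for the whole shell session (the log file name here is just an example):

```
TF_LOG=TRACE TF_LOG_PATH=./trace.log terraform plan
```

This keeps subsequent commands quiet while still capturing a full trace of the one run you are investigating.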

2. Review the Crash Output

After a crash, Terraform usually prints an error message or stack trace to the console. This is often the first
place to look for clues about what went wrong. Carefully review the error message and stack trace to
determine if there's any specific resource or operation that is causing the crash.

3. Use terraform plan for Validation

If you're unsure about what changes Terraform is trying to make, you can run terraform plan to preview
the changes before actually applying them. This can help you catch any obvious issues before a crash
occurs during execution.

terraform plan

This command shows what Terraform intends to do, which might give insight into what causes the crash
during the apply.

4. Inspect the crash.log File

When Terraform crashes due to an internal error (a Go panic), it automatically writes a crash.log file in
the current working directory. This file contains the debug logs from the session along with the panic
message and backtrace, and it is the most useful artifact to examine or attach when reporting the crash.

5. Check Terraform Version and Providers

 Ensure that you're using a stable version of Terraform, and check for any issues related to your
version by visiting the Terraform GitHub repository or the Terraform release notes.
 Similarly, check that all the provider plugins you're using are up to date. Sometimes crashes are due
to bugs in older versions of the provider plugins.

To check your current Terraform version:

terraform version

6. Examine the State File

Corrupted or inconsistent state files can sometimes lead to crashes during execution. If you suspect the
state file is the issue, try running:

terraform state list

This will show the resources in your state file and allow you to verify that it hasn't become corrupted. If
you suspect corruption, you can try to manually inspect or repair the state file (e.g., using terraform state
commands).

7. Increase System Resources

Terraform can crash if it runs out of memory or other system resources during execution, especially when
managing large infrastructures. Check your system’s resource usage (CPU, RAM, disk) to ensure that it has
enough resources to handle the workload. You can also try running Terraform on a machine with more
memory or CPU resources if possible.

8. Update Dependencies

If the crash happens during a specific provider's resource management, ensure that all providers are
updated. Run the following command to upgrade all provider versions to the latest compatible version:

terraform init -upgrade

9. Isolate the Problem

If you’re unable to identify the crash from logs or output, try isolating the issue:


 Comment out sections of the configuration or remove some resources temporarily to narrow down
the specific resource causing the crash.
 Run smaller configurations to verify if a particular resource or module is causing the crash.

10. Check for Known Issues

After reviewing the logs, check if the error matches any known issues by searching for it in the Terraform
GitHub issues or related provider repositories.

11. Contact HashiCorp Support

If you're unable to resolve the issue, and you have a Terraform Enterprise subscription or support plan, you
can contact HashiCorp support for further assistance.

Summary of Steps:

1. Set TF_LOG=DEBUG or TF_LOG=TRACE for detailed logs.
2. Review the crash output for error messages or stack traces.
3. Run terraform plan to check the intended changes before applying.
4. Inspect the crash.log file written after a panic.
5. Ensure you're using the latest Terraform and provider versions.
6. Check the state file for inconsistencies.
7. Monitor system resources to avoid running out of memory or CPU.
8. Isolate the problem by commenting out parts of the configuration.
9. Search for known issues related to the error.
10. Contact HashiCorp support if needed.

What is Module? Where do you find and explore terraform modules? How do you make sure
that module have stability and compatibility? Explain with example

In Terraform, a module is a container for multiple resources that are used together. A module allows you
to group and organize resources logically and re-use them across different configurations. Modules help in
abstracting away complex infrastructure logic into reusable units of work. They can be local to your project
or sourced from public or private module repositories.

Key Benefits of Using Modules:

 Reusability: Modules allow you to create reusable infrastructure components that can be easily
applied in different configurations.
 Maintainability: You can organize your Terraform configuration into modular units, making it easier
to manage, maintain, and update.
 Abstraction: Modules can abstract away complex infrastructure setups, making it easier to work
with simple interface inputs.


 Scalability: By using modules, you can scale your infrastructure management by reusing predefined
modules in different parts of your infrastructure.

Where to Find and Explore Terraform Modules?

Terraform modules can be sourced from various locations:

1. Terraform Registry: The Terraform Registry (registry.terraform.io) is the primary place to discover,
explore, and share modules. It contains a large collection of modules for different cloud providers
and use cases.
2. GitHub: Many Terraform modules are hosted on GitHub repositories. You can find them by
searching for specific keywords or by browsing through popular Terraform module repositories.
3. Private Repositories: You can create your own private module repositories for your organization’s
infrastructure needs.
4. Community Contributions: Many organizations and developers share modules through blogs,
forums, or dedicated repositories like GitHub. It's often useful to review community modules to see
if they fit your needs.

Example of Using a Module from Terraform Registry

Here is an example of how you can use a module to deploy an AWS EC2 instance. This module can be
found on the Terraform Registry.

module "ec2_instance" {
  source = "terraform-aws-modules/ec2-instance/aws"

  instance_count = 1
  ami            = "ami-0c55b159cbfafe1f0"
  instance_type  = "t2.micro"
  name           = "example-instance"

  tags = {
    Name = "Example Instance"
  }
}

 source: The source points to the module's location. In this case, it’s a publicly available module
from the Terraform AWS Modules collection on the Terraform Registry.
 The module will automatically create an EC2 instance using the provided parameters like ami,
instance_type, and name.

How to Make Sure a Module Has Stability and Compatibility?

To ensure that the module you are using is stable and compatible, you can take the following steps:


1. Check Module Version

 Many modules in the Terraform Registry support versioning. By specifying a version in the source
parameter, you ensure that you are using a stable and compatible version of the module.

Example:

module "ec2_instance" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "~> 3.0"
}

The version constraint ~> 3.0 ensures that Terraform uses a stable version from the 3.x series, ensuring
compatibility with your setup. If you do not specify a version, Terraform will fetch the latest version, which
may introduce breaking changes.
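
Version constraints support several operators besides ~>. As a sketch (same registry module as above), you can combine them to pin a range precisely:

```hcl
module "ec2_instance" {
  source = "terraform-aws-modules/ec2-instance/aws"

  # Accept any release at or above 3.2.0 but below 4.0.0
  version = ">= 3.2.0, < 4.0.0"
}
```

An exact pin (version = "3.2.0") gives maximum reproducibility, while a bounded range like this allows patch and minor updates without crossing a major-version boundary.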

2. Review Module Documentation

 Always check the module documentation on the Terraform Registry or the module's repository.
Well-documented modules will provide details about:
o The inputs and outputs (variables and return values).
o Examples of usage.
o Compatibility with different versions of Terraform and the providers.
o Known issues and limitations.
o Required dependencies and the module's supported features.

This helps to ensure the module meets your needs and works as expected in your environment.

3. Check for Active Maintenance

 Ensure the module is actively maintained by checking the commit history on GitHub (if using a
module from there). Look for recent commits and releases to verify ongoing support.
 If the module has not been updated in a long time, it could be deprecated or incompatible with
newer Terraform versions.

Example of checking maintenance activity:

 GitHub repository's commit history.
 Module release tags or changelog.

4. Read Community Feedback

 Review community feedback, issues, and pull requests. In the Terraform Registry, you can often
find user reviews and issues posted by others who have used the module. This can help identify any
known bugs or incompatibility issues.
 GitHub repositories often have an "Issues" section where users report problems, and you can gauge
if these issues are being actively addressed.


5. Testing and Validation

 Before using a module in production, always test it in a staging or development environment.
 Validate the configuration by running terraform plan to check the changes that the module will apply.
 You can also review the resource outputs and their behavior in a test environment to ensure the
module is working as expected.

6. Module Compatibility with Terraform Version

 Ensure that the module is compatible with the version of Terraform you are using. Some modules
may require specific Terraform versions due to changes in the Terraform language or provider APIs.
 Check the module’s documentation or changelog for any version restrictions.

7. Check Input Variables and Defaults

 Look for any input variables and their default values. For example, the instance_type variable in the
EC2 module might have a default value, but you can override it to better fit your infrastructure
needs.

Example:

variable "instance_type" {
  description = "EC2 instance type"
  default     = "t2.micro"
}

By reviewing and adjusting variables, you can make the module work in a variety of contexts, improving its
stability for your environment.

Example: Verifying Stability and Compatibility

If you are using a module to provision an AWS RDS instance, you can ensure its stability and compatibility
by:

1. Specifying the version:

module "rds" {
source = "terraform-aws-modules/rds/aws"
version = "~> 4.0"
}

2. Reviewing the module’s documentation for supported AWS regions and Terraform versions.
3. Checking GitHub issues to see if others have faced compatibility issues or bugs.
4. Testing the module in a staging environment before deploying it in production.


Conclusion

Terraform modules help you reuse and share infrastructure code. To ensure that the modules you use are
stable and compatible, always:

 Specify versions.
 Review module documentation carefully.
 Check the module’s maintenance status and community feedback.
 Test the module in a controlled environment.
 Validate compatibility with your Terraform version.

What’s Variable? Types & examples? Explain precedence

A variable in Terraform is a way to input data into your configuration. It enables you to customize the
infrastructure deployment by passing in different values when applying the configuration.

Types of Variables in Terraform

1. Input Variables
These are variables defined in Terraform configurations that allow you to pass dynamic values into
your modules or resources.

Example:

variable "region" {
description = "The AWS region to create resources in"
type = string
default = "us-east-1"
}

In this example, the region variable is defined to allow dynamic configuration of the region for AWS
resources.

2. Output Variables
These variables capture values from your Terraform resources and output them after the plan is
applied. This is useful for exporting data like IP addresses or resource IDs.

Example:

output "instance_ip" {
value = aws_instance.example.public_ip
}

The instance_ip output variable stores the public IP address of the AWS EC2 instance.
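
After terraform apply, output values can be read back from the state on the command line:

```
terraform output instance_ip
```

This prints the stored value, which is handy for scripting (e.g., passing the IP to a provisioning or smoke-test step).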


3. Local Values
Local values are named expressions that can be used to simplify your Terraform code. They're
similar to variables but are used within the configuration to store intermediate results.

Example:

locals {
  instance_type = "t2.micro"
}

The instance_type local value simplifies referencing the EC2 instance type in your configuration.
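
Unlike input variables (referenced as var.name), local values are referenced with the local. prefix:

```hcl
resource "aws_instance" "example" {
  ami           = "ami-123456"
  instance_type = local.instance_type
}
```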

Variable Types in Terraform

When defining input variables, you can specify a type to control the kind of data that can be passed in:

1. String A variable that holds a text value.

variable "environment" {
type = string
default = "production"
}

2. Number A variable that holds numeric values.

variable "instance_count" {
type = number
default = 3
}

3. Bool A variable that holds a boolean (true or false) value.

variable "enable_logging" {
type = bool
default = true
}

4. List A variable that holds an ordered collection of values, where each value is of the same type.

variable "availability_zones" {
type = list(string)
default = ["us-east-1a", "us-east-1b"]
}

5. Map A variable that holds a collection of key-value pairs, where the key is a string, and the value
can be any data type.

variable "instance_tags" {


type = map(string)
default = {
Name = "my-instance"
Environment = "production"
}
}

6. Object A variable that holds a complex structure with specific attributes.

variable "server" {
type = object({
name = string
type = string
})
default = {
  name = "web-server"
  type = "t2.micro"
}
}
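
Input variables can also carry validation rules (Terraform 0.13+), which fail terraform plan early with a clear message instead of letting a bad value reach the provider. The allowed values below are assumptions for illustration:

```hcl
variable "environment" {
  type    = string
  default = "production"

  validation {
    # Reject anything outside the known environment names.
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "The environment must be one of: dev, staging, production."
  }
}
```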

Example of Defining and Using Variables in Terraform

Here is an example of defining input variables, assigning them values, and using them within a resource
configuration:

variable "region" {
description = "AWS region"
type = string
default = "us-east-1"
}

variable "instance_type" {
  description = "Type of EC2 instance"
  type        = string
  default     = "t2.micro"
}

resource "aws_instance" "example" {
  ami               = "ami-123456"
  instance_type     = var.instance_type
  availability_zone = "${var.region}a" # e.g., us-east-1a
}

Precedence of Variables in Terraform

Terraform variables follow a precedence order when values are assigned. Here's the order in which
Terraform looks for values for a variable:


1. CLI Arguments (highest precedence)
Values passed on the command line with the -var or -var-file options override every other source.

Example:

terraform apply -var="region=us-west-2"

2. *.auto.tfvars Files
Any files matching *.auto.tfvars (or *.auto.tfvars.json) are loaded automatically, in alphabetical
order, and override values from terraform.tfvars.

3. terraform.tfvars File
If a file named terraform.tfvars (or terraform.tfvars.json) exists, Terraform loads it automatically. Its
values override environment variables and default values, but are overridden by *.auto.tfvars files
and CLI arguments.

4. Environment Variables
Variables set as TF_VAR_<variable_name> are used when no file or CLI argument provides a value.

Example: TF_VAR_region=us-west-2

5. Default Values (lowest precedence)
The default in the variable block is used only if no other source provides a value.

Example of Variable Precedence

variable "region" {
  type    = string
  default = "us-east-1"
}

# terraform.tfvars
region = "us-west-2"

If you run terraform apply without passing any variables, it will use us-west-2 from the .tfvars file instead of
the default us-east-1.

But, if you pass the variable through the command line:

terraform apply -var="region=us-west-3"

It will use us-west-3 instead of us-west-2 or the default us-east-1.


What is the purpose of using a Terraform remote backend?

A Terraform remote backend is used to manage the state files of a Terraform project in a centralized
location. The state file contains important information about the infrastructure managed by Terraform,
and using a remote backend allows teams to share and collaborate on the same infrastructure without
conflicts. Remote backends provide several benefits:

 Centralized Storage: The state files are stored remotely (e.g., in AWS S3, Azure Blob Storage, etc.),
ensuring all team members are working with the same version of the state file.
 Collaboration: Multiple users can work on the same infrastructure concurrently without the risk of
overwriting each other's changes.
 State Locking: Some remote backends (like S3 with DynamoDB or Azure Blob Storage) support state
locking, which prevents concurrent modifications to the state file.
 Security: State files may contain sensitive information, and remote backends can provide
encryption and access control to protect this data.

Why the other options are incorrect:

1. To store Terraform configuration files in a central location: Terraform configuration files (.tf files)
are typically version-controlled in a source code repository like Git, not stored in a remote backend.
2. To execute Terraform commands remotely without local setup: Terraform backends are not
designed to execute commands remotely. They only manage state storage. Terraform commands
are still executed locally on your machine or CI/CD system.
3. To store the Terraform provider plugins: Provider plugins are stored locally or downloaded from
the Terraform Registry, not within the remote backend. The backend stores the state of the
infrastructure.

So, the purpose of a remote backend is primarily to centralize the storage and management of Terraform
state files.

The purpose of using a Terraform remote backend is to store and manage Terraform state files in a
centralized, secure, and accessible location. This allows for better collaboration, state management, and
ensures that teams can safely and efficiently work together on shared infrastructure.

Key Purposes of a Remote Backend:

1. Centralized State Storage:


Storing Terraform state remotely helps ensure that the state of your infrastructure is consistently
shared and updated across teams or environments. The state file holds crucial information about
the infrastructure, and using a remote backend avoids local state files that can be out of sync or
lost.
2. Collaboration and Team Support:
With remote state, multiple team members can work on the same infrastructure concurrently
without interfering with each other’s work. The backend ensures that everyone is working with the
most recent version of the state file, enabling collaboration.
3. State Locking:
Many remote backends (like AWS S3 with DynamoDB, or Azure Blob Storage) support state file


locking, which prevents concurrent updates to the state file, avoiding potential conflicts or
corruption of the state.
4. Security:
Remote backends often offer additional security features, such as encrypted storage and fine-
grained access control, ensuring that sensitive data in the state file (like resource credentials) is
protected.
5. Scalability:
Using a remote backend is essential for large, complex Terraform deployments that need to scale
across multiple teams, regions, or environments, as it centralizes and simplifies state management.
6. Integration with Version Control and CI/CD:
With remote backends, it’s easier to integrate Terraform with CI/CD pipelines and version control
systems, ensuring that infrastructure changes are tracked and versioned.

Common Remote Backend Solutions:

 Amazon S3 with DynamoDB (for locking): Used to store state files in S3 and use DynamoDB for
state locking.
 Azure Blob Storage: A scalable solution for storing Terraform state on Azure.
 HashiCorp Consul: A distributed, highly-available key-value store that can be used for state storage
and locking.
 Google Cloud Storage (GCS): A backend that stores state in GCS buckets.
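
As a sketch of what this looks like in practice, an S3 backend with DynamoDB locking is configured in the terraform block. The bucket, key, and table names below are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"  # hypothetical bucket name
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"     # hypothetical lock table
    encrypt        = true
  }
}
```

After adding or changing a backend block, run terraform init so Terraform can migrate the state to the new location.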

How can you import existing infrastructure into Terraform? In which scenario you need to use import?
What are the limitations? Please explain with an example.

You can use the terraform import command to bring existing infrastructure under Terraform management.
This command allows you to take resources that were created outside of Terraform (manually or by other
means) and import them into the Terraform state so you can manage them going forward.

The general syntax for the terraform import command is:

terraform import [options] <RESOURCE_TYPE>.<RESOURCE_NAME> <RESOURCE_ID>

Where:

 RESOURCE_TYPE is the type of resource (e.g., aws_instance).


 RESOURCE_NAME is the name you want to give the resource in your Terraform configuration.
 RESOURCE_ID is the ID of the resource in the cloud provider (e.g., AWS EC2 instance ID, or Azure
resource ID).

Scenario for Using Terraform Import

You would use terraform import in the following scenarios:


1. Managing Pre-existing Resources:


When you have infrastructure that was created manually, through scripts, or by another team, and
you want to bring it under Terraform management without recreating it.
2. Migrating from Another Tool:
When migrating an existing infrastructure managed by another tool (like CloudFormation, or
manually using the console) to Terraform, you can use terraform import to bring those resources
under Terraform control.
3. Working with Hybrid Environments:
In a hybrid environment, you might want to combine resources that Terraform manages with those
that were created manually or by other tools.
4. Refactoring Infrastructure:
If you've made changes manually in your cloud provider's console and want to keep Terraform in
sync without recreating resources.

Example of Terraform Import

Let’s assume you have an EC2 instance in AWS and you want to bring it under Terraform management.

1. Find the EC2 Instance ID:
In AWS, you can get the EC2 instance ID from the console, CLI, or API. Let's say the instance ID is
i-0abcd1234efgh5678.
2. Import the EC2 Instance: Use the terraform import command to import the resource into
Terraform:

terraform import aws_instance.example i-0abcd1234efgh5678

3. Create the Resource Block:


After importing, you need to manually create the corresponding resource block in your Terraform
configuration file (e.g., [Link]):

resource "aws_instance" "example" {
  ami           = "ami-0abcdef1234567890" # The AMI ID should match the actual instance
  instance_type = "t2.micro"              # Match the instance type
  # Add any other attributes based on the existing configuration
}

Once the import is successful, Terraform will have the instance in its state file, and the resource will
be managed under Terraform.

4. Run terraform plan: After importing, run terraform plan to verify that Terraform recognizes the
state of the resource and checks for any differences between the configuration and actual
infrastructure:

terraform plan


Limitations of Terraform Import

While Terraform import is powerful, there are a few limitations:

1. Does Not Create Configuration Files Automatically:


o The terraform import command only adds the resource to the state file. It does not
generate Terraform configuration files. You must manually write the configuration block for
the imported resource.
2. State-Only Operation:
o Importing a resource only affects the state file; it does not change the actual infrastructure
or configuration. This means that Terraform does not automatically update your .tf files to
reflect the current resource's settings.
3. Complex Resources May Need Manual Configuration:
o For some complex resources, you may need to figure out the correct configuration settings
manually. For example, an EC2 instance could have tags, security groups, or additional
properties that are not automatically added during the import.
4. No Import of Dependencies:
o If the imported resource has dependencies (e.g., security groups, IAM roles), you must
manually import those dependencies and configure them in your .tf files. Terraform will not
automatically detect these associated resources.
5. Cannot Import All Resource Types:
o Not all resource types are supported by terraform import. The resource must be supported
by the Terraform provider, and some less common or custom resources may not be
importable.
6. No Historical Changes in State:
o When you import an existing resource, Terraform does not have the history of changes
made to that resource. You will not be able to track changes that occurred before the
import, and you may miss insights into previous configurations unless you manually capture
them.
7. Requires Correct Resource ID:
o You need to know the correct identifier for the resource you’re importing. Without this, you
won’t be able to successfully import the resource. The ID format varies between providers.

Example of Limitations:

Let’s say you’re importing an EC2 instance, but the instance has a custom security group attached that you
didn’t define. You’ll need to import that security group separately and ensure that it’s linked in your
configuration.

1. Import the Security Group:

terraform import aws_security_group.example sg-0abcd1234efgh5678

2. Manually Add Security Group to Configuration: After importing the security group, you’ll need to
manually associate it with your EC2 instance in the configuration file.
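After both imports, the hand-written configuration might look like the following sketch. The names, AMI, and the vpc_security_group_ids attachment are illustrative assumptions, not values read from the real resources; each argument must be made to match the actual infrastructure:

```hcl
# Hypothetical configuration written by hand after importing both resources.
resource "aws_security_group" "example" {
  name        = "imported-sg" # must match the real security group's name
  description = "Imported security group"
}

resource "aws_instance" "example" {
  ami                    = "ami-0abcdef1234567890" # must match the real instance's AMI
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.example.id] # links the imported SG to the instance
}
```

Run terraform plan afterwards; if the blocks match the real resources, the plan should show no changes.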


Conclusion

The terraform import command allows you to bring existing infrastructure into Terraform's management,
making it easier to manage and automate the configuration of pre-existing resources. However, you must
manually write the corresponding Terraform configuration files, and be aware of its limitations, such as the
inability to automatically import resource dependencies or generate configuration from existing
infrastructure.

Give a few examples of lifecycle rules in Terraform.

In Terraform, lifecycle rules are used to customize the behavior of resources when performing actions like
creation, update, or destruction. These rules allow you to control how Terraform interacts with resources
during these lifecycle events.

Here are a few common examples of lifecycle rules in Terraform:

1. create_before_destroy

 Purpose: Ensures that a resource is created before the existing one is destroyed during updates.
 Use Case: This is useful when you need to avoid downtime or when a resource must exist before its
replacement can be destroyed (e.g., load balancers, databases).

resource "aws_security_group" "example" {
  name        = "example-sg"
  description = "Example security group"

  lifecycle {
    create_before_destroy = true
  }

  // Other security group properties
}

Explanation: This rule ensures that the security group will be created first, and only once the new one is
successfully created, the old one will be destroyed.

2. prevent_destroy

 Purpose: Prevents the resource from being destroyed, even when a terraform destroy or terraform
apply is executed.
 Use Case: This is useful for critical resources where you do not want them to be deleted
accidentally, such as production databases or stateful applications.

resource "aws_instance" "example" {
  ami           = "ami-0abc12345"
  instance_type = "t2.micro"

  lifecycle {
    prevent_destroy = true
  }

  // Other instance properties
}

Explanation: The prevent_destroy = true setting ensures that the resource can't be destroyed, and an error
will occur if you attempt to destroy the resource.

3. ignore_changes

 Purpose: Tells Terraform to ignore specific changes to resource attributes during an update,
effectively preventing Terraform from attempting to modify those attributes.
 Use Case: This is useful when you have resources that are externally managed or updated, and you
don’t want Terraform to track changes to those attributes (e.g., manual changes to tags or instance
metadata).

resource "aws_instance" "example" {
  ami           = "ami-0abc12345"
  instance_type = "t2.micro"

  tags = {
    Name = "example-instance"
  }

  lifecycle {
    ignore_changes = [
      tags["Name"], # Ignore changes to the Name tag
    ]
  }
}

Explanation: In this example, Terraform will not attempt to change the Name tag of the instance even if
the tag is modified outside of Terraform.

4. replace_triggered_by

 Purpose: This rule causes a resource to be replaced when a change to another resource or data
source occurs, even if the change does not directly impact the resource.
 Use Case: This is useful when a change in one resource triggers the need to replace another
resource, even if the two are not directly related in Terraform’s normal dependency graph.

resource "aws_security_group" "example" {
  name        = "example-sg"
  description = "Example security group"
}

resource "aws_instance" "example" {
  ami           = "ami-0abc12345"
  instance_type = "t2.micro"

  lifecycle {
    replace_triggered_by = [aws_security_group.example]
  }
}

Explanation: In this example, any change to aws_security_group.example causes Terraform to replace aws_instance.example, even though the change does not directly affect the instance's own configuration. Note that replace_triggered_by requires Terraform 1.2 or later, and it must reference a different resource; a resource cannot trigger its own replacement.

5. Destroy-before-create (the default behavior)

 Purpose: By default, when a change forces replacement, Terraform destroys the existing resource before creating the new one. There is no destroy_before_create lifecycle argument; this is simply what happens when create_before_destroy is left unset (or set to false).
 Use Case: This default is appropriate when the old and new resources cannot coexist, for example when they share a unique name or identifier, or when a downtime window is acceptable.

resource "aws_instance" "example" {
  ami           = "ami-0abc12345"
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = false # the default: destroy first, then create the replacement
  }

  // Other instance properties
}

Explanation: In this case, Terraform will destroy the existing instance first and then create its replacement, which is the standard behavior for any resource change that forces replacement.

6. ignore_changes for Multiple Attributes

 Purpose: You can use ignore_changes to ignore multiple attributes at once, preventing Terraform
from making changes to them during an update.
 Use Case: This is useful when you want to prevent updates to multiple properties that may be
modified manually or managed externally.

resource "aws_instance" "example" {
  ami           = "ami-0abc12345"
  instance_type = "t2.micro"

  tags = {
    Name = "example-instance"
  }

  lifecycle {
    ignore_changes = [
      ami,           # Ignore changes to AMI
      instance_type, # Ignore changes to instance type
    ]
  }
}

Explanation: Terraform will ignore changes to the ami and instance_type properties, meaning that manual
updates to these fields outside of Terraform won’t trigger changes to the instance.

Summary of Lifecycle Rules in Terraform

 create_before_destroy: Ensures a resource is created before the previous one is destroyed (avoiding downtime).
 prevent_destroy: Prevents a resource from being destroyed, useful for critical infrastructure.
 ignore_changes: Ignores specific changes to resource attributes during updates, useful for externally managed attributes.
 replace_triggered_by: Forces a resource to be replaced when changes to other resources or data sources occur.
 Destroy-before-create (default): With create_before_destroy unset, Terraform destroys the existing resource before creating its replacement, which is required when the old and new resources cannot coexist.
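These rules are not mutually exclusive; a single lifecycle block can combine several of them. A hedged sketch (the resource type and omitted arguments are illustrative):

```hcl
resource "aws_db_instance" "example" {
  # ... engine, instance_class, and other arguments omitted for brevity

  lifecycle {
    prevent_destroy       = true   # protect the database from accidental deletion
    create_before_destroy = true   # avoid downtime if a replacement is ever forced
    ignore_changes        = [tags] # tags are managed outside Terraform
  }
}
```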

What is interpolation in Terraform? Where to use?

Interpolation in Terraform refers to the process of embedding dynamic expressions or values inside your
Terraform configuration files. It allows you to reference variables, attributes, and data from other
resources and pass them as input to other resources or configuration blocks.

Interpolation is typically done using ${} syntax, where you can insert expressions that will be evaluated and
replaced by actual values during the execution of terraform plan or terraform apply.

Where to Use Interpolation in Terraform?

Interpolation is used in various places within Terraform configurations, such as:

1. Variables: Referencing input variables inside the configuration.


2. Resource Attributes: Using values from one resource to configure another.
3. Data Sources: Accessing dynamic values from external sources.
4. Output Values: Defining outputs that dynamically reference resource attributes.

Examples of Interpolation in Terraform

1. Using Variables in Configuration

You can use interpolation to reference variables within a resource or other parts of your Terraform
configuration.


variable "instance_type" {
  type    = string
  default = "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = "${var.instance_type}" # Interpolating the value of the instance_type variable
}

Explanation: Here, the ${var.instance_type} interpolation is used to reference the instance_type variable in
the aws_instance resource.

2. Referencing Resource Attributes

Interpolation allows you to use attributes from one resource in another resource. This is particularly useful
when you need to configure one resource based on the attributes of another.

resource "aws_security_group" "example" {
  name        = "example-sg"
  description = "Example Security Group"
}

resource "aws_instance" "example" {
  ami             = "ami-12345678"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.example.name}"] # Interpolating the security group name
}

Explanation: The security_groups attribute of the aws_instance resource uses interpolation to reference the name of the aws_security_group.example resource.

3. Using Outputs and Expressions

You can also use interpolation in outputs to reference attributes or variables and output their values.

output "instance_id" {
  value = "${aws_instance.example.id}" # Interpolating the instance ID
}

Explanation: The value of the instance_id output is interpolating the id attribute of the
aws_instance.example resource.

4. String Manipulation with Interpolation

You can use interpolation to perform string manipulation in Terraform, like concatenating strings or
combining values.


output "instance_name" {
  value = "example-${aws_instance.example.id}" # Concatenating a string with the instance ID
}

Explanation: The output instance_name concatenates the string "example-" with the id of the EC2
instance, producing a dynamic name based on the instance.

5. Using Expressions for Conditional Logic

Terraform allows for more complex expressions within interpolations. You can use conditions and
functions to dynamically determine values.

resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = "${var.instance_type == "t2.micro" ? "t2.micro" : "t2.small"}" # Conditional interpolation
}

Explanation: This example uses a ternary operator for conditional interpolation. If var.instance_type is "t2.micro", the instance_type will be "t2.micro"; otherwise, it will be "t2.small".

When to Use Interpolation in Terraform?

You should use interpolation when:

1. Referencing Dynamic Values: When you need to inject variables, resource attributes, or data into
resource definitions.
2. Creating Dynamic Resources: When the configuration depends on other resources or variables, and
you want to automate the creation of resources dynamically.
3. Combining Values: When you need to combine different strings, variables, or values together to
form a dynamic configuration.
4. Conditional Logic: When you need to apply logic that decides which values or resources to use
based on input conditions.
5. Outputting Dynamic Data: When creating outputs that represent the results of your infrastructure,
such as resource IDs or URLs.
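Interpolation expressions can also call Terraform's built-in functions. A small sketch combining a variable with join() and lower() (var.environment is an assumed input variable, not one defined earlier in this guide):

```hcl
output "bucket_name" {
  # Produces something like "myapp-dev-logs" when var.environment = "dev"
  value = lower(join("-", ["MyApp", var.environment, "logs"]))
}
```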

Terraform 0.12+ and Interpolation

Starting with Terraform 0.12 and beyond, interpolation is optional in many cases. Terraform automatically
handles most situations without needing explicit interpolation. For instance:

# Pre-Terraform 0.12, interpolation was required:
output "instance_name" {
  value = "${aws_instance.example.id}"
}

# In Terraform 0.12+, interpolation can be omitted:
output "instance_name" {
  value = aws_instance.example.id # Direct reference without interpolation
}

Conclusion

Interpolation in Terraform allows you to create dynamic configurations by referencing variables, resource
attributes, or data sources. It is used for:

 Injecting variables into resource definitions.


 Creating relationships between resources.
 Generating dynamic output values.
 Performing string manipulations and conditional logic.

Since Terraform 0.12+, interpolation has become more intuitive, and in many cases, you don't need to use
the ${} syntax for simple references. But it’s still essential to understand how and where to use it to make
your configurations dynamic and flexible.

How can you leverage Terraform’s “count” and “for_each” features for resource iteration?

Terraform’s count and for_each are meta-arguments that allow you to create multiple instances of a
resource, module, or block in a clean, efficient, and scalable way. These features are useful when you need
to dynamically create resources based on input data like lists or maps.

1. count – Basic Iteration with Index

Use Case:

Use count when you want to repeat the same resource multiple times, usually based on a list or number.

🚩 Example: Creating multiple EC2 instances

variable "instance_count" {
  default = 3
}

resource "aws_instance" "web" {
  count         = var.instance_count
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  tags = {
    Name = "web-${count.index}"
  }
}
}


Explanation:

 Creates 3 EC2 instances.
 ${count.index} gives the current loop index (0, 1, 2).

2. for_each – Iteration Over Maps or Sets

Use Case:

Use for_each when you want to create one resource per item in a map or set, and you want to access
each key/value clearly.

Example: Creating multiple S3 buckets from a list

variable "bucket_names" {
  default = ["dev-bucket", "test-bucket", "prod-bucket"]
}

resource "aws_s3_bucket" "buckets" {
  for_each = toset(var.bucket_names)

  bucket = each.key
  acl    = "private"
}

Explanation:

 Each bucket will be created with the name from the list.
 each.key gives the current bucket name (like "dev-bucket").

Example with for_each using a Map


variable "bucket_config" {
  default = {
    dev  = "dev-bucket-123"
    prod = "prod-bucket-456"
  }
}

resource "aws_s3_bucket" "buckets" {
  for_each = var.bucket_config

  bucket = each.value
  tags = {
    Environment = each.key
  }
}


Explanation:

 Each key/value pair becomes a resource.
 each.key = "dev", "prod"
 each.value = "dev-bucket-123", etc.

count vs for_each – When to Use?


Feature    Use When You Have          Key Access Style        Supports Maps?   Index Info
count      A number or list           count.index             ❌ No            Yes
for_each   A map or set of strings    each.key / each.value   ✅ Yes           No (but you get keys)

Real-World Use Cases

 count: Create N number of subnets, EC2 instances, or IAM users.


 for_each: Create resources based on unique names (e.g., tagging policies, S3 buckets with
meaningful names).

Notes

 You can’t use both count and for_each in the same resource.
 For dynamic resource creation where names/keys matter, prefer for_each.
 When using for_each, your keys must be unique.
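Since Terraform 0.13, for_each also works on module blocks, which is handy for stamping out one module instance per environment. A sketch (the module path and bucket_name variable are assumptions for illustration):

```hcl
module "s3_bucket" {
  source   = "./modules/s3_bucket" # hypothetical local module
  for_each = toset(["dev", "test", "prod"])

  bucket_name = "my-app-${each.key}-logs" # one bucket module per environment
}
```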

You want to create security groups for dev, test, and prod environments.

Using count

Input:

variable "environments" {
default = ["dev", "test", "prod"]
}

Terraform Code:

resource "aws_security_group" "count_example" {
  count = length(var.environments)

  name        = "sg-${var.environments[count.index]}"
  description = "Security group for ${var.environments[count.index]}"
  vpc_id      = "vpc-123456"

  tags = {
    Environment = var.environments[count.index]
  }
}

Key Points:

 Uses a list.
 Must access values using count.index.
 Less readable when accessing attributes in complex structures.

Using for_each

Input:

variable "env_map" {
default = {
dev = "vpc-aaa111"
test = "vpc-bbb222"
prod = "vpc-ccc333"
}
}

Terraform Code:

resource "aws_security_group" "foreach_example" {
  for_each = var.env_map

  name        = "sg-${each.key}"
  description = "Security group for ${each.key}"
  vpc_id      = each.value

  tags = {
    Environment = each.key
  }
}

🚩 Key Points:

 Uses a map.
 Easier to read and manage key-value structures.
 each.key is the environment, each.value is the VPC ID.

Summary Table

Feature    Input Type      Key Access              Example Use Case
count      List / Number   count.index             Same resource repeated N times (e.g., EC2)
for_each   Map / Set       each.key / each.value   Named resources (e.g., S3 buckets, SGs)

How can you handle resource failures, retries in Terraform?

In Terraform, handling resource failures and retries is crucial for maintaining a reliable and idempotent
infrastructure-as-code workflow. Terraform doesn't natively offer granular retry policies like a
programming language, but it provides several mechanisms to deal with transient failures and ensure
resources are created reliably.

1. Terraform Built-in Retry Mechanism

Terraform automatically retries failed operations like create, update, or delete for many providers
(especially AWS, Azure, GCP) when the failure is transient (e.g., throttling, rate limits, API timeouts).

You don’t have to write retries for most transient cloud issues – Terraform already does that internally.
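Some providers also expose a knob for this behavior. For example, the AWS provider accepts a max_retries argument that controls how many times its SDK retries throttled or transient API errors; a minimal sketch:

```hcl
provider "aws" {
  region      = "us-east-1"
  max_retries = 10 # SDK-level retries for throttled or transient API errors
}
```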

2. Using create_before_destroy for safer updates

When updating resources like security groups or IAM roles, Terraform might try to destroy and recreate
them. To avoid downtime or failure, you can use:

lifecycle {
create_before_destroy = true
}

Use when you want to ensure the new resource is created before the old one is destroyed.

3. Using ignore_changes to avoid unnecessary diffs

Sometimes updates fail because Terraform tries to change something that should be left alone (like an
externally managed field):

lifecycle {
ignore_changes = [tags]
}

Use when external systems modify the resource and Terraform shouldn’t try to revert it.


4. Retry Logic with null_resource and local-exec

For scripts or commands that need retries, you can manually implement a retry loop using null_resource
and shell scripting.

resource "null_resource" "retry_example" {
  provisioner "local-exec" {
    command = <<EOT
# The endpoint URL is illustrative; retry up to 5 times with a 10-second pause
for i in {1..5}; do
  curl -f https://example.com/health && break || sleep 10
done
EOT
  }
}

Use when calling APIs or commands that are flaky and may need retries.

5. Using depends_on for ordering to prevent race conditions

Sometimes resources fail because dependencies weren’t fully created yet.

resource "aws_instance" "example" {
  # ...
  depends_on = [aws_security_group.allow_ssh]
}

Use when you need strict ordering to avoid race conditions.

6. Use Modules and Outputs for Better Error Handling

Organizing infrastructure into modules and using outputs helps break down large operations and isolate
failures to smaller scopes, improving retry recovery.

Limitations

 No custom retry count or backoff control for individual resources (like retry_attempts in AWS SDK).
 Failures in one resource can halt the entire plan/apply unless isolated.
 No native support for conditional retries (use null_resource as workaround).

Best Practices

Strategy                   When to Use
Built-in retries           Most AWS/Azure/GCP API operations
create_before_destroy      Resource replacements without downtime
ignore_changes             Externally managed fields
null_resource retry loop   Scripting with retryable actions
depends_on                 Enforce dependency order

A colleague accidentally deletes the Terraform state file. How would you recover and ensure no
resources are recreated?

Step-by-Step Recovery Plan

1. Check for Backups

If you're using:

 Remote backends like S3 with DynamoDB:
o S3 usually has versioning enabled, so you can restore a previous version.
o Go to the S3 bucket → select terraform.tfstate → click Versions → restore the latest known good version.
 Terraform Cloud or Enterprise:
o They maintain state history. You can roll back from the Terraform UI.

❌If No Backups Exist

2. Use terraform import to Rebuild the State

You must manually re-import all existing resources into a new Terraform state file. This avoids recreating
them.

Example:

Suppose you had an AWS EC2 instance and security group.

terraform import aws_instance.web i-0abc12345def67890
terraform import aws_security_group.web_sg sg-0123456789abcdef0

This tells Terraform: “This real AWS resource maps to this Terraform resource block.”

Make sure the main.tf (or equivalent) file matches the actual configuration of those resources.


3. Run terraform plan to Confirm No Changes

After importing all resources:

terraform plan

✅If done correctly, Terraform should show no changes.


❌If it shows a destroy/create action, something doesn’t match — fix the configuration or import.

4. Recreate State File (terraform.tfstate)

Once you've imported all resources correctly, a fresh terraform.tfstate is generated.

You can now:

 Push it to your remote backend again (if applicable).


 Lock it using backend support like S3+DynamoDB.

What Not to Do

 Don’t immediately run terraform apply after deleting state — it will try to recreate all resources.
 Don’t manually edit the .tfstate unless you know exactly what you're doing.

Prevention Tips

 Always enable remote backends like S3 + DynamoDB for locking and versioning.
 Use Terraform Cloud to store and version state safely.
 Consider automating state backups in a CI/CD pipeline.
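The prevention tips above can be captured in the backend configuration itself. A hedged sketch of an S3 backend with DynamoDB locking (the bucket, key, and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"  # enable versioning on this bucket for recovery
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"     # provides state locking
    encrypt        = true
  }
}
```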

What is the purpose of Terraform’s “null_resource”, and when would you use it? Give an example.

The null_resource in Terraform is used to execute arbitrary actions that are not directly associated with
any specific cloud provider resource. It’s a resource placeholder that allows you to run provisioners (e.g.,
local-exec, remote-exec) when no other suitable resource exists or when you want to create custom
behavior during apply time.

🚩 Key Use Cases for null_resource:

1. Running local or remote scripts.


2. Performing configuration steps outside Terraform's built-in resources.
3. Triggering actions based on changes in input variables using triggers.
4. Debugging or prototyping Terraform logic.


Example: Using null_resource to run a shell script

resource "null_resource" "run_script" {
  provisioner "local-exec" {
    command = "echo Hello from null_resource > output.txt" # the output file name is illustrative
  }

  triggers = {
    always_run = "${timestamp()}" # Forces this to re-run on every apply
  }
}

Explanation:

 local-exec tells Terraform to run the command on the machine where terraform apply is executed.
 The triggers block is used to specify conditions under which the null_resource should be re-created.
Using timestamp() ensures it always runs.

When not to use it:

Avoid overusing null_resource for tasks that can be better modeled with proper Terraform providers (like
using aws_instance instead of using a null_resource to run an AWS CLI command).
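A common middle ground is to key the triggers map off real input values instead of timestamp(), so the provisioner re-runs only when those values change. A sketch (var.app_config is an assumed variable):

```hcl
resource "null_resource" "on_config_change" {
  triggers = {
    # Re-run only when the serialized configuration actually changes
    config_hash = sha1(jsonencode(var.app_config))
  }

  provisioner "local-exec" {
    command = "echo configuration changed"
  }
}
```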

How can you perform targeted resource deployment in Terraform? Can you give one example?

In Terraform, targeted resource deployment is achieved using the -target option with the terraform apply
or terraform plan commands. This allows you to apply changes only to a specific resource or module,
instead of running a full deployment for all resources.

Syntax:

terraform apply -target=resource_type.resource_name

Or for a resource inside a module:

terraform apply -target=module.module_name.resource_type.resource_name

Example:

Suppose you have the following resources:

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-unique-bucket-name"
  acl    = "private"
}

resource "aws_instance" "my_ec2" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"
}

If you want to apply only the S3 bucket creation and skip the EC2 instance for now:

terraform apply -target=aws_s3_bucket.my_bucket

Notes:

 Useful during debugging, testing, or partial deployments.


 Does not guarantee dependency completeness — Terraform will still attempt to evaluate
dependencies.
 Use sparingly in production, as it can lead to configuration drift if not used carefully.

Project Structure:

.
├── main.tf
└── modules/
    └── s3_bucket/
        └── main.tf

modules/s3_bucket/main.tf

resource "aws_s3_bucket" "this" {
  bucket = var.bucket_name
  acl    = "private"
}

variable "bucket_name" {
  type = string
}

main.tf

module "s3_bucket_module" {
  source      = "./modules/s3_bucket"
  bucket_name = "my-app-prod-logs"
}

resource "aws_instance" "my_ec2" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"
}

Targeted Deployment Example:


If you only want to deploy the S3 bucket inside the module:

terraform apply -target=module.s3_bucket_module

Or, to target the specific resource within the module:

terraform apply -target=module.s3_bucket_module.aws_s3_bucket.this

When to Use This:

 You’re testing a new module in isolation.


 You only want to update one part of your infrastructure without affecting the rest.
 You’re troubleshooting or validating the behavior of a specific resource/module.

Tell me 5 scenarios where terraform apply will pass but when you run terraform destroy it will fail?

Here are five scenarios where terraform apply might pass, but terraform destroy could fail:

1. Manual Changes to Resources:


o If manual changes are made to resources outside of Terraform (e.g., directly in the AWS
console, Azure portal, etc.), Terraform may no longer have an accurate view of the state.
When you run terraform destroy, Terraform could fail to destroy the resources because the
actual state doesn't match what Terraform expects.
2. Resource Dependencies or Deletion Order:
o Terraform applies resources in a specific order based on their dependencies. If a resource is
dependent on another, Terraform will destroy them in reverse order. However, if there is an
external dependency or a resource that is locked or in use (e.g., an EC2 instance that's
running or an RDS instance with active connections), the destroy operation might fail.
3. Provisioned Resources with External Constraints:
o Resources that have external constraints (like security groups, IAM roles, or databases) may
fail to be destroyed if they are still in use or have dependencies not defined in Terraform.
For example, an RDS instance may have an active connection or a security group might still
be attached to another resource, causing the destroy operation to fail.
4. Immutable Resources:
o Certain resources, like managed services (e.g., AWS RDS or AWS Redshift clusters), might
have configuration constraints that make them immutable once created. Even though
terraform apply might succeed when initially creating them, running terraform destroy
might fail because these resources cannot be easily destroyed or replaced due to their
immutability or because certain parameters can't be reverted to the required state.
5. State Corruption or Mismatched State Files:
o If the Terraform state file becomes corrupted or is out of sync with the actual resources
(e.g., if it's been manually edited, or if the state file was not properly updated during an
apply), Terraform may successfully apply changes but fail to destroy resources because the
state does not reflect the current infrastructure. This can happen when you have multiple
people working on the same state file or if state management (e.g., via remote backends) is
not set up properly.


1. Resource Dependency or Missing Dependencies

 Scenario: Resources that were created by Terraform depend on other resources not managed by
Terraform (e.g., external resources or manual changes to the environment).
 What Happens: terraform apply can create the resources as it doesn’t check for external
dependencies. However, when you try to run terraform destroy, Terraform fails to delete those
resources because they are dependent on something outside of its control (like external service
dependencies or missing permissions).

2. Manual Modifications Outside Terraform

 Scenario: If a resource was manually altered outside Terraform (for example, a resource in AWS
was modified directly through the console), Terraform might not be aware of the change.
 What Happens: terraform apply will create resources based on the current state, but when it
attempts to destroy the resources, it fails because Terraform tries to remove the resource that was
modified in a way that doesn’t match the Terraform state (like a different tag, name, or security
settings).

3. Resource with Protection/Deletion Policies

 Scenario: Some cloud providers, like AWS or Azure, allow you to apply deletion protection or set specific policies that prevent resource destruction (e.g., aws_instance with disable_api_termination = true).
 What Happens: During terraform apply, the resource is created without issue. But terraform
destroy will fail if deletion protection or a similar safeguard is in place, preventing the resource
from being deleted.

4. Orphaned Resources Not Managed by Terraform

 Scenario: Resources such as networks, volumes, or IAM roles might have been created by
Terraform but have orphaned components (like child resources) that aren't in Terraform's state.
 What Happens: terraform apply successfully creates the resource, but when running terraform
destroy, it may fail because Terraform doesn't recognize or track the orphaned child resources and
thus cannot destroy them.

5. Insufficient Permissions for Deletion

 Scenario: Terraform's execution role may have permissions to create resources but may not have
sufficient permissions to delete them (for example, lacking the necessary IAM permissions to delete
a security group or an EC2 instance).
 What Happens: The terraform apply will succeed as it creates the resources with the available
permissions, but terraform destroy fails because Terraform cannot delete the resource due to
missing permissions or access restrictions.
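Several of these failure modes can be pre-empted in the configuration itself using real provider arguments. A hedged sketch (resource names and values are illustrative):

```hcl
resource "aws_s3_bucket" "logs" {
  bucket        = "my-app-logs"
  force_destroy = true # allow destroy even if the bucket still contains objects
}

resource "aws_instance" "web" {
  ami                     = "ami-0abcdef1234567890"
  instance_type           = "t2.micro"
  disable_api_termination = false # termination protection must be off for destroy to succeed
}
```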


1. Resource Dependency or Missing Dependencies

Real-Time Example:

 Scenario: You use Terraform to create an Amazon RDS instance with a VPC that was manually
created before. The RDS instance relies on this manually created VPC.
 What Happens:
o terraform apply runs successfully, creating the RDS instance.
o terraform destroy fails because Terraform doesn't manage the manually created VPC, and it
cannot delete the dependent RDS instance.

2. Manual Modifications Outside Terraform

Real-Time Example:

 Scenario: You deploy an AWS EC2 instance using Terraform. Later, you manually modify the
instance by enabling termination protection from the AWS Console.
 What Happens:
o terraform apply successfully creates the EC2 instance.
o terraform destroy fails because termination protection prevents Terraform from deleting
the EC2 instance.

3. Resource with Protection/Deletion Policies

Real-Time Example:

 Scenario: You create an AWS S3 bucket via Terraform. After the creation, you manually enable S3
versioning or lifecycle policies outside Terraform.
 What Happens:
o terraform apply successfully creates the S3 bucket.
o terraform destroy fails because the lifecycle policies or versioning settings prevent the
bucket from being deleted without manual intervention.

4. Orphaned Resources Not Managed by Terraform

Real-Time Example:

 Scenario: You use Terraform to create an AWS Load Balancer (ELB) and Security Groups. Later, you
manually add EC2 instances in the AWS Console to be attached to the ELB but do not manage these
instances with Terraform.
 What Happens:
o terraform apply successfully creates the ELB and security groups.
o terraform destroy fails because the EC2 instances (added manually) are not managed by
Terraform and are not removed when destroying the ELB.

5. Insufficient Permissions for Deletion


Real-Time Example:

 Scenario: You deploy AWS resources (such as IAM roles and security groups) using a Terraform
service account that has only create permissions for certain resources but lacks delete permissions.
 What Happens:
o terraform apply successfully creates the IAM roles and security groups.
o terraform destroy fails because the service account doesn't have sufficient permissions to
delete those resources, resulting in an error message about insufficient permissions.

What are Terraform locals?

Terraform locals are used to define values that can be reused throughout your configuration within a
module. They are essentially variables that help simplify code by storing values that don’t need to be
passed in as input or output. Locals are evaluated once and then can be used multiple times in the
configuration.

Locals allow for better readability, reusability, and maintainability of your Terraform code, especially in
larger projects.

Syntax:

locals {
local_variable_name = value
}

You can use locals to store values like strings, numbers, maps, or lists.
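For instance, a single locals block can mix all of these types (the names here are illustrative, not from any particular project):

```hcl
locals {
  project_name = "payments-api"                 # string
  max_retries  = 3                              # number
  allowed_azs  = ["us-east-1a", "us-east-1b"]   # list
  default_tags = {                              # map
    Team  = "platform"
    Stage = "dev"
  }
}
```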

Real-Time Examples of Terraform Locals:

1. Using Locals to Avoid Repetition

Scenario: You have multiple resources that need to use a specific region, which might change based on
environment (e.g., us-east-1, us-west-2).

Example:

locals {
  region = "us-east-1"
}

provider "aws" {
  region = local.region
}

resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"

  tags = {
    Region = local.region
  }
}

resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"

  tags = {
    Region = local.region
  }
}

In this example:

 The region local is defined once and used in both the EC2 instance and the S3 bucket resource.
 If the region needs to change (e.g., for different environments), you can just modify the local, and it
will be applied across all resources that use it.

2. Using Locals for Complex Calculations

Scenario: You need to calculate some complex values like subnet CIDR blocks based on a network size.

Example:

locals {
  base_network = "10.0.0.0/16"
  subnet_mask  = 8

  subnet_cidr = cidrsubnet(local.base_network, local.subnet_mask, 1)
}

resource "aws_subnet" "example" {
  vpc_id     = "vpc-12345678"
  cidr_block = local.subnet_cidr
}

Here:

 local.subnet_cidr is calculated using the cidrsubnet() function, which is a local expression based on
a given base network and subnet mask.
 This allows for easy management of network ranges, as any changes to the base network or mask
are automatically reflected in all calculations using local.subnet_cidr.

3. Using Locals with Conditional Logic

Scenario: You want to conditionally set values based on environment or other conditions.

Example:

locals {
  environment = "production" # Can be "development", "staging", etc.

  instance_type = local.environment == "production" ? "t2.large" : "t2.micro"
}

resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = local.instance_type
}

In this case:

 local.instance_type resolves to "t2.large" when the environment is "production" and to "t2.micro"
otherwise.
 This allows for different configurations for different environments, reducing redundancy in your
code.

4. Using Locals for Lists and Maps

Scenario: You want to store a list of tags and use them in multiple places.

Example:

locals {
  common_tags = {
    "Environment" = "production"
    "Project"     = "Terraform-Project"
  }
}

resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  tags          = local.common_tags
}

resource "aws_s3_bucket" "example" {
  bucket = "example-bucket"
  tags   = local.common_tags
}

Here:

 The common_tags local is used for both the EC2 instance and the S3 bucket, ensuring consistency in
tags across resources.
 If you need to add more tags, you can update the local, and it will apply to all resources using
local.common_tags.

Key Benefits of Using Locals:

 Reusability: Locals allow you to reuse the same value or calculation in multiple places.
 Simplification: Helps make your configuration more readable and maintainable.
 Avoiding Repetition: Reduces the need to repeat the same expressions or values multiple times.


 Dynamic Configurations: Supports complex logic or calculations that are needed for configurations.

When to Use Locals:

 When you need to use a value multiple times in your Terraform configuration.
 When you want to simplify expressions and calculations in your code.
 When you need a dynamic value based on other resources or variables that don’t require user
input.

In real-time projects, Terraform locals can make managing infrastructure configurations easier, especially
when dealing with multiple environments, complex resource configurations, or dynamic calculations.

How to Setup Terraform CI/CD & best practices?

Setting up Terraform CI/CD pipelines using Jenkins and GitHub Actions is a great way to automate your
infrastructure provisioning and management. Below are detailed steps and best practices for configuring
CI/CD with both Jenkins and GitHub Actions.

1. Terraform CI/CD Setup with Jenkins

Prerequisites:

 Jenkins installed and running.
 GitHub repository containing your Terraform code.
 AWS credentials or credentials for your cloud provider set up on Jenkins.

Steps:

Step 1: Install Required Jenkins Plugins

 GitHub Plugin: To connect Jenkins with your GitHub repository.
 Terraform Plugin: Provides support for Terraform commands (e.g., plan, apply, etc.).
 Pipeline Plugin: To define Jenkins pipelines.

Step 2: Create a Jenkins Pipeline

1. Go to Jenkins and create a new Pipeline Job.
2. Under Pipeline configuration, choose Pipeline script from SCM.
3. Select Git as the SCM type, and configure it with your GitHub repository URL.
4. In the Branch field, set the branch you want to track (e.g., main).
5. Define the Pipeline script in the Jenkinsfile (located in your repo or directly within Jenkins).

Jenkinsfile Example:

pipeline {
    agent any

    environment {
        TF_VERSION            = "1.6.0"
        AWS_REGION            = "us-west-2"
        AWS_ACCESS_KEY_ID     = credentials('aws_access_key_id')
        AWS_SECRET_ACCESS_KEY = credentials('aws_secret_access_key')
    }

    stages {
        stage('Checkout Code') {
            steps {
                // Replace the URL below with your own repository
                git url: 'https://github.com/your-org/your-terraform-repo.git', branch: 'main'
            }
        }

        stage('Terraform Init') {
            steps {
                sh 'terraform init'
            }
        }

        stage('Terraform Validate') {
            steps {
                sh 'terraform validate'
            }
        }

        stage('Terraform Plan') {
            steps {
                sh 'terraform plan -out=tfplan'
            }
        }

        stage('Terraform Apply') {
            steps {
                input message: "Approve to apply Terraform plan?", ok: "Apply"
                sh 'terraform apply tfplan'
            }
        }

        stage('Terraform Destroy') {
            steps {
                input message: "Approve to destroy resources?", ok: "Destroy"
                sh 'terraform destroy -auto-approve'
            }
        }
    }

    post {
        always {
            cleanWs()
        }
    }
}

Jenkins Pipeline Breakdown:

 Checkout Code: Pulls the Terraform code from your GitHub repository.
 Terraform Init: Initializes the working directory and configures the backend.
 Terraform Validate: Ensures the Terraform code is valid and ready to execute.
 Terraform Plan: Runs terraform plan to generate an execution plan.
 Terraform Apply: Waits for manual approval to apply the plan (good practice to avoid accidental
deployments).
 Terraform Destroy: Optionally, it destroys the resources after tests or when tearing down
environments.

Step 3: Configure Jenkins Credentials

 Store AWS credentials or other sensitive values securely in Jenkins using Jenkins credentials store.
 Reference the credentials in the pipeline using:

AWS_ACCESS_KEY_ID = credentials('aws_access_key_id')
AWS_SECRET_ACCESS_KEY = credentials('aws_secret_access_key')

2. Terraform CI/CD Setup with GitHub Actions

Prerequisites:

 GitHub repository containing your Terraform code.
 AWS credentials or credentials for your cloud provider set up in GitHub Actions.

Steps:

Step 1: Create a GitHub Actions Workflow


1. In your GitHub repository, go to the Actions tab.
2. Create a new workflow by clicking New Workflow and selecting set up a workflow yourself.
3. Create a .github/workflows/terraform.yml file (the filename is arbitrary as long as it ends in .yml).

GitHub Actions Workflow Example:

name: Terraform CI/CD

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest

    env:
      AWS_REGION: us-west-2

    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          terraform_version: '1.6.0'

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Terraform Init
        run: terraform init

      - name: Terraform Validate
        run: terraform validate

      - name: Terraform Plan
        run: terraform plan -out=tfplan

      - name: Terraform Apply
        run: terraform apply tfplan
        if: github.event_name == 'push'

      - name: Terraform Destroy
        run: terraform destroy -auto-approve
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'

GitHub Actions Workflow Breakdown:

 Checkout code: Checks out the Terraform code from your GitHub repository.
 Set up Terraform: Uses the hashicorp/setup-terraform action to install the specified Terraform
version.
 Configure AWS credentials: Uses aws-actions/configure-aws-credentials to authenticate with AWS
using secrets stored in GitHub.
 Terraform Init: Initializes Terraform and sets up the backend.
 Terraform Validate: Validates the Terraform configuration files.
 Terraform Plan: Runs terraform plan to generate an execution plan.
 Terraform Apply: Applies the Terraform plan if the action is triggered by a push event.
 Terraform Destroy: Optionally, you can add a step to destroy resources after the deployment is
complete.

Step 2: Store Secrets in GitHub

 Store sensitive values such as AWS credentials in GitHub Secrets for secure access in the workflow.
 Go to your GitHub repository, navigate to Settings > Secrets, and add AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY.
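If you prefer the command line, the GitHub CLI can set the same secrets (the repository name and credential values below are placeholders):

```shell
# Store AWS credentials as repository secrets via the GitHub CLI
gh secret set AWS_ACCESS_KEY_ID --repo your-org/your-terraform-repo --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --repo your-org/your-terraform-repo --body "wJalr..."
```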

Step 3: Trigger Workflow

 The workflow is triggered on push and pull_request events to the main branch.
 You can modify the workflow to trigger on other branches or events as needed.

Best Practices for Terraform CI/CD (Jenkins & GitHub Actions)

1. Use Remote Backends for State Management

 Jenkins: Store the Terraform state in a remote backend such as S3 with DynamoDB for state
locking.
 GitHub Actions: Use Terraform Cloud or S3 with DynamoDB to manage the state remotely and
prevent conflicts with state files.

2. Manual Approval for terraform apply

 Avoid automating terraform apply directly for production resources. Always require manual
approval before applying changes to production.


 Jenkins: Use an input step for manual approval.
 GitHub Actions: Use workflow_dispatch triggers or environment protection rules with required
reviewers for manual approvals.
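As a sketch of the GitHub Actions side, an apply workflow can be gated behind a manual trigger plus a protected environment (the "production" environment name is illustrative and must be configured in the repository settings):

```yaml
on:
  workflow_dispatch:   # runs only when a person triggers it from the Actions tab

jobs:
  apply:
    runs-on: ubuntu-latest
    # "production" is configured under Settings > Environments with required
    # reviewers, so the job pauses here until someone approves it
    environment: production
    steps:
      - uses: actions/checkout@v2
      - run: terraform init && terraform apply -auto-approve
```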

3. Store Secrets Securely

 Use GitHub Secrets (in GitHub Actions) or the Jenkins credentials store to store sensitive values
such as AWS credentials.

4. Separate Plan and Apply Steps

 Separate the terraform plan and terraform apply stages. This allows teams to review the plan
before applying changes, minimizing errors.

5. Testing and Validation

 Implement Terratest or kitchen-terraform for testing Terraform code and resources.
 Run terraform validate in CI/CD to catch syntax or configuration issues before deployment.

6. Use Modules for Reusability

 Organize your Terraform code into modules to promote reuse and better organization.
 Share common infrastructure components (e.g., VPC, IAM roles, EC2 instances) as reusable
modules in your pipeline.

7. Isolate Environments

 Use Terraform workspaces or separate configurations for different environments (e.g., dev, prod,
staging) to isolate state and configuration.
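With workspaces, each environment gets its own state file under the same configuration:

```shell
terraform workspace new dev      # create and switch to a "dev" workspace
terraform workspace new prod     # create a "prod" workspace
terraform workspace select dev   # switch back to dev
terraform workspace list         # lists default, dev, prod
```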

8. Cleanup after Tests

 Set up a terraform destroy stage to clean up resources, especially in testing environments, to
prevent resource sprawl and unnecessary costs.

9. Use Linting and Formatting

 Integrate terraform fmt and terraform validate in your pipeline to ensure your code follows best
practices and is correctly formatted.
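A typical pre-merge check runs both commands and fails the pipeline if either one exits non-zero:

```shell
terraform fmt -check -recursive   # non-zero exit if any file is not canonically formatted
terraform validate                # checks syntax and internal consistency of the configuration
```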

Conclusion

By combining Jenkins and GitHub Actions with best practices for Terraform CI/CD, you can effectively
automate your infrastructure deployment and management processes. Use manual approvals for critical
steps like terraform apply and leverage remote backends for state management to prevent conflicts.
Always store credentials securely and ensure that your infrastructure is tested and validated throughout
the pipeline.


What are dynamic blocks in Terraform and what are the best practices?

In Terraform, dynamic blocks allow you to create resource configurations or modules dynamically based
on a variable or a list of values. This is useful when you need to generate multiple instances of a resource
block based on some input data.

What Are Dynamic Blocks?

Dynamic blocks allow you to generate repetitive sections of Terraform code. Instead of manually defining
each block for each resource, a dynamic block can loop over a list of values (or any other structure) to
create those blocks.

Syntax of a Dynamic Block

A dynamic block has the following syntax:

dynamic "block_type" {
for_each = <list or map>
content {
<block content>
}
}

 block_type: The name of the block you want to generate dynamically (e.g., security_group_rule,
ingress, egress).
 for_each: A list or map that defines the number of instances to generate.
 content: The block's contents, where you can reference values from the for_each expression.

Example of a Dynamic Block

Consider a scenario where you're provisioning an AWS security group and you want to create multiple
ingress rules dynamically.

variable "ingress_rules" {
  type = list(object({
    cidr     = string
    port     = number
    protocol = string
  }))
  default = [
    { cidr = "0.0.0.0/0", port = 80, protocol = "tcp" },
    { cidr = "0.0.0.0/0", port = 443, protocol = "tcp" }
  ]
}


resource "aws_security_group" "example" {
  name        = "example-sg"
  description = "Example security group"

  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = ingress.value.protocol
      cidr_blocks = [ingress.value.cidr]
    }
  }
}

In the example above:

 The ingress_rules variable holds a list of objects containing cidr, port, and protocol.
 The dynamic "ingress" block loops over the ingress_rules variable and creates a security group rule
for each entry.

Real-world Use Cases for Dynamic Blocks

1. Security Group Rules:
o As shown in the example above, dynamic blocks are often used to create multiple ingress or
egress rules based on a list of objects or values.
2. Resource Tags:
o For resources that model tags as repeated nested blocks (such as autoscaling groups), a
dynamic block can generate one tag block per entry in a tags map. Where tags is a plain map
argument (such as on aws_instance), you simply assign the map directly.
3. Load Balancer Listeners:
o If you want to dynamically define multiple listeners for a load balancer (e.g., HTTP, HTTPS),
dynamic blocks can loop through a list of protocols and ports.
4. Auto Scaling Group Launch Configuration:
o If you want to define multiple block types for the launch configuration, such as environment
variables or block device mappings, dynamic blocks can generate those based on input
values.

Best Practices for Using Dynamic Blocks in Terraform

1. Use Dynamic Blocks for Complex or Variable Configurations:
o Dynamic blocks should be used when the number or configuration of the block types is
dynamic or variable. For example, when the number of security group rules is determined at
runtime by a variable or list.
2. Avoid Overuse of Dynamic Blocks:
o While dynamic blocks are powerful, they can make your Terraform code harder to read and
maintain. Avoid using them for simple, static configurations where you could just define the
block directly.


3. Leverage Variables for Flexibility:
o To make the dynamic block flexible, define the values to loop over as variables. This allows
your code to be reusable and modular across different environments or configurations.
4. Validate Input Data:
o When using dynamic blocks with variables (e.g., lists or maps), make sure to validate input
data to avoid runtime errors. You can use type constraints and validation rules in your
variables to ensure that data passed into the dynamic block is in the expected format.
5. Avoid Complex Logic Inside Dynamic Blocks:
o Keep the logic inside dynamic blocks simple. Complex logic can reduce readability and
maintainability. If you need complex operations, try to handle them outside the dynamic
block using locals or other Terraform functions.
6. Document the Dynamic Block Usage:
o Since dynamic blocks can generate multiple resources based on a list, it’s essential to
provide clear comments and documentation in your code to ensure other team members
understand how the dynamic block works.
7. Use for_each Instead of count in Dynamic Blocks:
o In many cases, using for_each is a better practice than count for dynamic blocks, especially
when the data involves maps or complex objects, as for_each makes it easier to refer to
specific keys or elements.
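For best practice 4, a validation block can reject malformed input before the dynamic block ever runs. The rule below is illustrative, applied to the ingress_rules variable from the earlier example:

```hcl
variable "ingress_rules" {
  type = list(object({
    cidr     = string
    port     = number
    protocol = string
  }))

  validation {
    # Reject ports outside the valid TCP/UDP range
    condition     = alltrue([for r in var.ingress_rules : r.port > 0 && r.port <= 65535])
    error_message = "Each ingress rule port must be between 1 and 65535."
  }
}
```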

Example of Dynamic Block for Multiple Tags

Suppose you need to assign tags dynamically. Note that on resources such as aws_instance, tags is a plain
map argument rather than a nested block, so no dynamic block is needed there — you can assign tags =
var.tags directly. Dynamic blocks apply where tags are modeled as repeated nested blocks, as in
aws_autoscaling_group, where each tag is its own tag block.

variable "tags" {
  type = map(string)
  default = {
    "Environment" = "production"
    "Owner"       = "DevOps"
    "App"         = "MyApp"
  }
}

resource "aws_autoscaling_group" "example" {
  # launch template and other required arguments omitted for brevity
  max_size           = 3
  min_size           = 1
  availability_zones = ["us-east-1a"]

  dynamic "tag" {
    for_each = var.tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}

In this example, one tag block is generated for each key-value pair in the tags map. Adding an entry to the
map automatically produces an additional tag block, keeping tagging consistent without repeating code.

Summary of Key Points

 Dynamic Blocks: Help automate and simplify resource creation when dealing with repeating blocks
based on data structures like lists and maps.
 Best Practices: Use dynamic blocks for complex and variable configurations, avoid overuse, validate
input data, and keep logic simple.
 Common Use Cases: Security group rules, tags, load balancer listeners, and auto-scaling
configurations are some common examples.

By using dynamic blocks, you can reduce code duplication and improve the maintainability of your
Terraform configurations.

What is Terraform Drift and best practices to manage it?

Terraform Drift refers to a situation where the actual state of your infrastructure diverges from the state
managed by Terraform. This typically occurs when resources are modified or changed outside of Terraform
(e.g., manually from the cloud provider's console or through other tools). As a result, Terraform’s state no
longer accurately represents the real-world configuration of resources.

Terraform Drift can occur for various reasons, such as:

 Manual changes: A user manually updates the configuration of an infrastructure resource outside
of Terraform.
 Automatic updates: Cloud providers or other systems automatically modify resources (e.g., scaling,
updates to managed services).
 External systems: Other automation tools or CI/CD pipelines may modify the resources.

Drift can lead to issues when you run terraform plan or terraform apply because Terraform may try to
re-apply or revert changes to align resources with the defined configuration, causing potential conflicts or
errors.

Why Does Terraform Drift Matter?

1. Inconsistency: Terraform cannot manage resources that have been changed manually, causing
confusion and lack of alignment.
2. Loss of Control: Manual changes mean the desired state described in the Terraform configuration is
not reflected in reality.
3. Risk of Overwriting: If drift is not detected, Terraform could overwrite the manual changes made to
a resource, leading to unintended consequences.

How to Detect and Manage Drift


1. Detecting Drift in Terraform

Terraform does not automatically detect drift, but you can manually check for it using the following
methods:

 Using terraform plan: Run terraform plan to compare your current infrastructure state with the
state defined in your Terraform configuration. This will show you any differences.

terraform plan

If any drift has occurred, Terraform will detect the differences and show what it intends to change.
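In automation, the -detailed-exitcode flag makes drift machine-detectable: exit code 0 means no changes, 1 means an error, and 2 means the plan found differences.

```shell
terraform plan -detailed-exitcode -out=tfplan
case $? in
  0) echo "No drift detected" ;;
  2) echo "Drift detected - review tfplan" ;;
  *) echo "terraform plan failed" ;;
esac
```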

 Using terraform refresh: You can run terraform refresh to update the state file with the latest state
of the infrastructure.

terraform refresh

This syncs Terraform's state file with the actual state of the infrastructure. However, it does not fix
drift; it only updates the state file. In recent Terraform versions, terraform refresh is deprecated in
favor of terraform plan -refresh-only and terraform apply -refresh-only.

 Third-party tools: There are tools such as driftctl that scan your cloud environment and check for
drift across resources.

2. Managing Drift

Once drift has been detected, you have several options for managing it:

a. Manual Intervention

 Inspect and Correct Drift: Review the drift and either manually revert the changes in the cloud
provider’s console or use Terraform to bring the infrastructure back to the desired state.
o If changes were intentional (e.g., for scaling or urgent fixes), you can update your Terraform
configuration to match the new reality and prevent future drift.
o If changes were not intentional, use Terraform to re-align the resource with the desired
state.

b. Reapply Terraform State

 After detecting drift, you can apply the changes from Terraform to correct the infrastructure back
to the defined configuration by running:

terraform apply

This will reconcile the differences between the actual state and the desired configuration.

c. Update Your Terraform Configuration


 If drift occurred due to missing changes in your Terraform configuration, you can modify the
Terraform code to match the actual state and then apply the changes.

d. Enable State Locking

 When multiple teams or automation tools are interacting with Terraform, ensure state locking is
enabled to prevent concurrent modifications that could cause drift.

For example, if using AWS S3 as a backend, enable state locking with DynamoDB:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "path/to/my/key"
    region         = "us-west-2"
    dynamodb_table = "my-lock-table"
  }
}

This ensures that only one process can modify the state at a time.

Best Practices for Managing Terraform Drift

1. Version Control and Code Review

 Store Terraform Configurations in Version Control: Ensure all your Terraform code is stored in
version control (e.g., Git). This enables you to track changes and catch potential drift issues caused
by unapproved changes.
 Code Review Process: Implement a code review process for all changes to your Terraform
configuration. This can help prevent unauthorized or manual changes to infrastructure.

2. Enforce Immutable Infrastructure

 Immutable Infrastructure: One of the key principles of Infrastructure as Code (IaC) is the idea of
immutable infrastructure, where resources should not be manually modified once they are
created. Instead of making changes manually, any required changes should be done through
Terraform.
 Automate Rollbacks: If manual changes need to be applied, they should be tracked, and changes
should be rolled back through automation tools rather than manual intervention.

3. Use Terraform Cloud/Enterprise

 Centralized State Management: Terraform Cloud or Terraform Enterprise provides a centralized
service for managing Terraform state and collaboration across teams. It includes features like state
locking, drift detection, and notifications for drift.


 Policy as Code: Terraform Cloud/Enterprise offers the ability to define policies that prevent
unauthorized changes from being made outside of Terraform. This ensures that infrastructure is
only managed by the defined Terraform configurations.

4. Regularly Refresh Terraform State

 Run terraform refresh periodically to ensure your Terraform state is in sync with the actual
infrastructure. This will help catch drift early and avoid surprises when running terraform plan or
terraform apply.

5. Implement Automated Drift Detection

 Set up drift detection tooling, such as driftctl or Terraform Cloud's built-in drift detection, to
periodically check for drift in your infrastructure.
 Some teams use external monitoring tools (e.g., Datadog, CloudWatch) to alert them when a
resource's configuration changes unexpectedly.

6. Use Immutable Infrastructure Patterns

 Infrastructure as Code with CI/CD: Build a CI/CD pipeline for Terraform that ensures your
infrastructure is continuously tested and deployed, avoiding manual changes. When changes need
to be applied, they go through the CI/CD pipeline, ensuring that they are tracked and versioned.
 Automate Terraform Runs: Use a tool like Jenkins, GitLab CI, or GitHub Actions to trigger Terraform
runs as part of your CI/CD pipeline, ensuring that your infrastructure is always aligned with your
desired state.

7. Manage Access to Cloud Providers

 Restrict Access: Ensure that only authorized users or systems can make manual changes to
infrastructure. Use tools like IAM policies, service accounts, and roles to limit manual access to
critical resources.

Summary

Terraform Drift occurs when changes are made to your infrastructure outside of Terraform, causing a
mismatch between the actual state and the state managed by Terraform.

To manage drift effectively:

 Use terraform plan and terraform refresh to detect drift.
 Regularly refresh your Terraform state.
 Apply best practices like version control, immutable infrastructure, and automated drift detection.
 Use tools like Terraform Cloud/Enterprise to centralize state management and prevent manual
changes.


Explain different types of Dependencies in Terraform?

In Terraform, dependencies are critical for ensuring that resources are created, updated, or destroyed in
the correct order. Terraform has built-in mechanisms for managing dependencies between resources, both
implicitly and explicitly. Understanding the different types of dependencies in Terraform helps you control
the flow of resource creation and modification.

Types of Dependencies in Terraform

1. Implicit Dependencies
2. Explicit Dependencies
3. Data Source Dependencies
4. Provisioner Dependencies
5. Module Dependencies

1. Implicit Dependencies

Implicit dependencies are automatically created by Terraform when one resource refers to another. This
means Terraform understands that one resource needs to be created before another because of the
reference between them.

 How it works: When one resource refers to the output of another resource (such as using the id of
a resource), Terraform automatically establishes an implicit dependency.
 Example: In this example, the aws_security_group resource depends on the creation of an aws_vpc
because the security group references the VPC ID.

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_security_group" "example" {
  name        = "example-sg"
  vpc_id      = aws_vpc.example.id
  description = "Example security group"
}

Here, the security group implicitly depends on the VPC because it uses the vpc_id from
aws_vpc.example.id. Terraform will automatically create the VPC first and then the security group.

2. Explicit Dependencies

Explicit dependencies are defined using the depends_on argument, which allows you to manually specify
the order in which resources should be created, updated, or destroyed. While Terraform generally handles
the dependency graph automatically, the depends_on argument is useful when implicit relationships are
not enough or when you need to enforce a specific order of execution.


 How it works: You explicitly tell Terraform that one resource depends on another, ensuring that the
dependent resource is processed first.
 Example: In this example, the aws_security_group explicitly depends on the aws_vpc and
aws_instance resources being created first.

resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"
}

resource "aws_security_group" "example" {
  name        = "example-sg"
  description = "Example security group"
  vpc_id      = aws_vpc.example.id

  depends_on = [
    aws_instance.example
  ]
}

Here, aws_security_group.example references aws_vpc.example (an implicit dependency) and also lists
aws_instance.example in depends_on, which ensures the security group is created only after both
resources exist.

3. Data Source Dependencies

In Terraform, data sources allow you to fetch data from an external system (e.g., from AWS, Google Cloud,
etc.) without creating or managing resources in the state. Data sources often introduce dependencies
because the data retrieved can be used to configure other resources.

 How it works: If a resource references a data source, an implicit dependency is established on the
data source.
 Example: Here, the aws_security_group references a data source to find an existing security group,
creating an implicit dependency.

data "aws_security_group" "existing" {
  name = "existing-sg"
}

resource "aws_instance" "example" {
  ami                    = "ami-0c55b159cbfafe1f0"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [data.aws_security_group.existing.id]
}

In this case, aws_instance.example depends on the data.aws_security_group.existing data source,
meaning the data source is queried before the instance is created.

4. Provisioner Dependencies

Provisioners are used to execute scripts or commands on a resource after it has been created or updated.
While provisioners are not recommended for general use (as they can create dependencies that Terraform
cannot track), they can create implicit dependencies, especially when they execute actions based on
resource creation.

 How it works: The provisioners often create dependencies on other resources because the
commands might depend on data or state from other resources.
 Example: Here, a provisioner depends on an instance being created to run a script.

resource "aws_instance" "example" {
  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = "t2.micro"

  provisioner "remote-exec" {
    inline = [
      "echo 'Hello, World!'"
    ]

    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("~/.ssh/id_rsa")
      host        = self.public_ip
    }
  }
}

In this case, the provisioner depends on the aws_instance.example being created first before
running the script.

5. Module Dependencies

When you use modules in Terraform, dependencies can be established between different modules. This is
particularly useful for organizing large configurations, where one module’s output may be another
module’s input.

 How it works: Modules have outputs that can be passed as inputs to other modules; referencing an
output creates a dependency between the modules.
 Example: Here, a VPC module is used in one module, and the output from that VPC is passed into
another module (e.g., to create instances in the VPC).


module "vpc" {
  source     = "./vpc"
  cidr_block = "10.0.0.0/16"
}

module "instances" {
  source = "./instances"
  vpc_id = module.vpc.vpc_id
}

In this case, the instances module depends on the vpc module because it needs the vpc_id
output to create the instances inside the VPC.

Best Practices for Managing Dependencies in Terraform

1. Use Implicit Dependencies: Let Terraform automatically handle dependencies whenever possible.
Terraform’s built-in understanding of resource dependencies usually results in a cleaner and more
maintainable configuration.
2. Use depends_on Sparingly: Explicit dependencies using depends_on should be used only when
Terraform cannot automatically infer the correct order. Overusing depends_on can make your code
more complex and harder to maintain.
3. Avoid Manual Changes: Try to avoid manually changing infrastructure that is managed by
Terraform. Manual changes can break the implicit dependencies Terraform creates.
4. Modules for Reusability: Leverage modules to encapsulate reusable logic and pass values between
modules explicitly. This helps maintain clear dependencies and improves the modularity of your
Terraform configuration.
5. Use Data Sources Wisely: When using data sources, make sure to handle dependencies correctly.
Since data sources are often used to retrieve external data, they may create an implicit dependency
chain.
6. Limit Provisioner Usage: Provisioners are often not ideal for creating dependencies. They can cause
issues in managing state and should only be used in cases where it's absolutely necessary (e.g.,
bootstrapping a system).

Summary

Terraform manages dependencies using both implicit and explicit methods. Implicit dependencies are
automatically handled when resources reference each other, while explicit dependencies are manually
defined using depends_on. Additionally, dependencies can exist in data sources, provisioners, and
modules. By understanding and managing these dependencies effectively, you can ensure that your
Terraform configurations are more reliable, maintainable, and scalable.

Connect With Me: [Link]
