aws2tf.py imports your existing AWS infrastructure into Terraform and produces the corresponding Terraform HCL files.
aws2tf.py will also attempt to:
- De-reference hardcoded values into their Terraform addresses.
- Find dependent resources and import them.
- Where possible, remove region and account references and replace them with Terraform data values.
Finally, aws2tf runs a terraform plan command. Because all the appropriate Terraform configuration files have been created automatically, the plan should report no subsequent additions or deletions.
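Once a run completes, you can re-check the plan yourself. A minimal sketch (aws2tf writes its output under the generated/tf* directory, so the exact directory name will vary by account and region):

```bash
# Re-run the plan against the generated configuration and state.
# The output directory name varies; aws2tf writes under generated/tf*.
cd generated/tf*
terraform plan   # should report no resources to add, change or destroy
```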
- MacOS or Linux
- Python3 (v3.12.0+)
- boto3 1.40.44 or later (pip3 install -r requirements.txt).
- AWS CLI (v2) version 2.31.4 or higher needs to be installed, and you need a login with at least "Read" privileges.
- Terraform version v1.12.0 or higher needs to be installed (avoid early point releases, e.g. 1.9.0/1.9.1).
- jq version 1.6 or higher
- pyenv - to help manage Python versions and environments (https://github.com/pyenv/pyenv)
- tfenv - to help manage multiple Terraform versions (https://github.com/tfutils/tfenv)
- trivy version 0.67.0 or later (https://aquasecurity.github.io/trivy/v0.54/)
(This tool is currently developed/tested using Python 3.13.7 on macOS 15.7)
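To confirm the prerequisites are in place, you can check the installed versions. A quick sketch using each tool's standard version flag:

```bash
python3 --version     # expect 3.12.0 or later
terraform version     # expect v1.12.0 or later
aws --version         # expect aws-cli/2.31.4 or later
jq --version          # expect 1.6 or later
trivy --version       # expect 0.67.0 or later
```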
Running the tool in your local shell (bash) requires these steps:
- Unzip or clone this git repo into an empty directory.

  For AWS provider version 6.0.0 and above use:

  ```bash
  git clone https://github.com/aws-samples/aws2tf.git
  ```

  For the legacy version 5.x of the AWS provider use:

  ```bash
  git clone -b v5 https://github.com/aws-samples/aws2tf.git
  ```

- Log in to the AWS CLI (aws configure).
- Install the requirements:

  ```bash
  pip3 install -r requirements.txt
  ```

- Run the tool (see the usage guide below); the steps are sketched end-to-end after this list.
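Putting the steps together, a minimal end-to-end sketch (the `aws sts get-caller-identity` call is just one quick way to confirm your CLI login works; the `-t vpc` run is only an example):

```bash
git clone https://github.com/aws-samples/aws2tf.git
cd aws2tf
aws configure                    # set up credentials and a default region
aws sts get-caller-identity     # sanity-check the login
pip3 install -r requirements.txt
./aws2tf.py -t vpc               # example run: import all VPCs
```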
To see the command line help use:

```bash
./aws2tf.py -h
```

or for more extensive help:

```bash
./aws2tf.py -l
```

To generate the Terraform files for all the VPCs in your account/region:

```bash
./aws2tf.py -t vpc
```

or for a specific VPC:

```bash
./aws2tf.py -t aws_vpc -i vpc-xxxxxxxxxx
```

Instead of the predefined types, you can also use the direct Terraform resource names:

```bash
./aws2tf.py -t aws_sagemaker_domain
```

You can also combine type requests by using a comma-delimited list:

```bash
./aws2tf.py -t vpc,efs,aws_sagemaker_domain
```

By default aws2tf generates a separate aws_xxxx.tf file for every resource it finds. If you would prefer to have them all merged into a single file (main.tf), use the -s option:

```bash
./aws2tf.py -t vpc -s
```

Now you can add whatever resources you want by using the -m (merge) flag.
To add all ECS resources:
```bash
./aws2tf.py -t ecs -m
```

You can see all the supported types (-t [type]) by using the -l (long help) option: ./aws2tf.py -l
You can also import just a specific resource by passing its AWS resource name. In the following example, the existing resources and the newly merged resources will all be put into a single file (main.tf) because the -s option is included:
```bash
./aws2tf.py -t eks -i my-cluster-name -m -s
```

or for a specific domain:

```bash
./aws2tf.py -t aws_sagemaker_domain -i d-xxxxxxxxx -m
```

Add a specific S3 bucket:

```bash
./aws2tf.py -t aws_s3_bucket -i my_bucket_name -m
```

Organisations (and AWS blogs/workshops) often deploy resources using a stack. aws2tf can convert these to Terraform for you using the -s [stack name] option:

```bash
./aws2tf.py -s <stack name>
```

Finally, you can scan everything in your account by simply running:

```bash
./aws2tf.py
```

But this is not recommended, as it will take quite some time to complete!
You can also try the experimental fast mode, which uses multithreading to speed things up:
```bash
./aws2tf.py -f
```

You can override the default Terraform provider version by using the -tv flag:

```bash
./aws2tf.py -t vpc -tv 5.86.0
```

You need to ensure the provider version you specify is valid, as (currently) the version is passed straight through without any validation checks.
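One way to check that a provider version exists before passing it to -tv is to query the public Terraform Registry API. This is a hedged sketch, not part of aws2tf itself:

```bash
# List published AWS provider versions and look for the one you want.
# Prints the version if it exists; prints nothing otherwise.
curl -s https://registry.terraform.io/v1/providers/hashicorp/aws/versions \
  | jq -r '.versions[].version' | grep -x '5.86.0'
```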
You can import EC2 instances selectively by using the -ec2tag option:

```bash
./aws2tf.py -t aws_instance -ec2tag "project:my value"
```

The above will only import instances that have a tag key of "project" with a value of "my value".
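To preview which instances would match before importing, you can run the equivalent AWS CLI tag filter yourself (a sketch; this is a plain AWS CLI query, not an aws2tf command):

```bash
# List the instance IDs carrying the tag project = "my value"
aws ec2 describe-instances \
  --filters 'Name=tag:project,Values=my value' \
  --query 'Reservations[].Instances[].InstanceId' --output text
```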
### Using Terraform data resources
These flags cause aws2tf to use Terraform data sources (rather than resources) for certain types. This is useful for enterprises where, for example, networking components are provided by a different team (see the example after this list). The available flags are:
- -dnet: uses Terraform data sources for aws_vpc, aws_subnet
- -dsgs: uses Terraform data sources for aws_security_group
- -dkms: uses Terraform data sources for aws_kms_key
- -dkey: uses Terraform data sources for aws_key_pair
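For example, to import an EKS cluster while referencing its networking as data sources rather than managed resources (a sketch combining flags documented above; the cluster name is a placeholder):

```bash
./aws2tf.py -t eks -i my-cluster-name -dnet
```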
Please raise an issue if you'd like to see this expanded to other types.
You may come across errors, as it isn't possible to test everyone's AWS combinations in advance.
If you find one of these errors, please open an issue here and paste in the error, and it will get fixed.
For stack sets (-s option), look for these two files in the generated/tf* directory and paste their contents into the issue:
- stack-unprocessed.err
- stack-null.err
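A quick way to grab both files for the issue (generated/tf* is the output directory mentioned above):

```bash
# Print the contents of both stack error files for pasting into the issue
cat generated/tf*/stack-unprocessed.err generated/tf*/stack-null.err
```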
See the instructions here
Note that you do not need to clone this repo if you want to run aws2tf as a container.
see here for a list
aws2tf maintains state in its own local directory:
generated/tf../
When using cumulative (merge) mode, this same state file is reused and added to.
It is not possible at this time to use your own state location (e.g. on S3).
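You can still inspect that state with standard Terraform commands from inside the generated directory. A minimal sketch (the directory name varies by account and region):

```bash
cd generated/tf*
terraform state list   # list every resource aws2tf has imported so far
```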
October 2024
The Python version of this tool, aws2tf.py, has now superseded the old bash script version.
You can still find and use the old version in the bash-version branch:

```bash
git clone -b bash-version https://github.com/aws-samples/aws2tf.git
```