CloudtoRepo scans your AWS account and generates ready-to-use Terraform
import {} blocks,
resource skeletons, and S3 remote-state backends. No click-ops. No paid tools.
The instinct is to think of this as "exporting Terraform." It is not. What you are actually doing is closer to reverse compilation: discovering all resources across accounts and regions, generating Terraform configuration from live infrastructure, reconstructing dependencies, capturing state, and then refactoring everything into something a human can maintain.
Terraformer, the tool most guides recommend, was built by the Waze engineering team
and has not had meaningful maintenance in years. It works, but it predates
Terraform's native import {} blocks and generates output that needs
significant cleanup.
Former2 is primarily a browser-based tool, and the CLI variant is a separate community project with limited coverage. Both are fine for getting a rough baseline, but neither should be your primary strategy in 2026.
Resources reference each other in ways that tooling will not always catch. Some services do not map cleanly to Terraform resources no matter what you do.
IAM is particularly brutal: the relationship between roles, policies, attachments, and instance profiles is rarely clean in a lived-in estate. Accept these rough edges going in and you will be far less surprised.
CloudtoRepo uses the AWS CLI to discover your resources, then writes the Terraform files needed to bring them under version control, with no manual resource hunting required.
Run the script against one region or sweep an entire organisation across multiple accounts and regions.
One import {} block per discovered resource, grouped into per-service directories, ready for Terraform 1.5+.
Run terraform plan -generate-config-out=generated.tf in any service dir. Terraform reads live state and writes fully-populated HCL.
Run drift.sh regularly to catch resources created or deleted outside Terraform. Use --apply to patch imports.tf, then report.sh to generate a Markdown summary.
All the core services you need to bring a real-world AWS estate under Terraform control.
EKS includes clusters, node groups, addons, and Fargate profiles.
VPC includes subnets, security groups, route tables, internet gateways, and NAT gateways.
IAM includes roles, instance profiles, and OIDC providers.
User pools, clients, and identity pools, fully paginated beyond the 60-pool API limit.
Bedrock agents & knowledge bases; SageMaker domains & endpoints.
Organizations covers accounts & OUs; RAM covers resource shares across accounts.
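Getting past a hard page-size limit like Cognito's 60 pools per call is a standard NextToken loop. The sketch below shows the pattern with a stub standing in for the real call (aws cognito-idp list-user-pools --max-results 60 --next-token ...), so it runs without credentials:

```shell
# Stub for the paginated API call; each invocation returns one page:
# line 1 = pool names on this page, line 2 = next token (empty on the last page).
fetch_page() {
  case "$1" in
    "")      printf 'pool-1 pool-2\ntoken-2\n' ;;  # first page
    token-2) printf 'pool-3\n\n' ;;                # last page, no next token
  esac
}

token=""
while :; do
  page=$(fetch_page "$token")
  pools=$(printf '%s\n' "$page" | sed -n 1p)   # this page's pools
  token=$(printf '%s\n' "$page" | sed -n 2p)   # token for the next page
  echo "discovered: $pools"
  [ -z "$token" ] && break                     # no token left: done
done
```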
Each service gets its own directory with three files that Terraform can use immediately.
One import {} block per discovered resource.
import {
  to = aws_eks_cluster.cluster_production
  id = "production"
}

import {
  to = aws_rds_instance.db_primary
  id = "prod-postgres-01"
}
S3 remote state configuration and AWS provider block, ready to init.
terraform {
  backend "s3" {
    bucket = "my-tf-state"
    key    = "us-east-1/eks/terraform.tfstate"
    region = "us-east-1"
  }
}
Empty resource skeletons matching every import block, populated by terraform plan -generate-config-out.
resource "aws_eks_cluster" "cluster_production" {
  # populated by terraform plan
  #   -generate-config-out=generated.tf
}
Organised by account → region → service for easy navigation.
tf-output/
├── summary.txt
└── 123456789012/
├── us-east-1/
│ ├── ec2/
│ ├── eks/
│ └── rds/
└── eu-west-1/
Once your Terraform baseline is committed, drift.sh re-scans AWS and
diffs the results against your imports.tf files with no AWS Resource Explorer required.
Found in AWS but missing from imports.tf. With --apply, new import {} blocks are appended automatically.
Present in imports.tf but no longer in AWS. With --apply, stale blocks are commented out with a timestamp.
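Conceptually, this kind of drift check is a set difference between the resource IDs discovered in AWS and the IDs already recorded in imports.tf. A minimal sketch of the idea using comm (illustrative sample data, not drift.sh's actual implementation):

```shell
# IDs discovered in AWS vs. IDs already present in imports.tf
printf '%s\n' i-0abc123 i-0def789 | sort > live.txt
printf '%s\n' i-0abc123 i-0111222 | sort > imported.txt

comm -23 live.txt imported.txt   # NEW: in AWS, not yet imported -> i-0def789
comm -13 live.txt imported.txt   # REMOVED: imported, gone from AWS -> i-0111222
```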
Drop drift.sh into a nightly CI job. Pass --report ./drift.txt to save output, then feed it into report.sh to generate a Markdown summary.
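For example, a crontab entry along these lines schedules the nightly check (the path, schedule, and file names here are illustrative, not part of the toolkit):

```
# Run the drift check at 02:00 every night and regenerate the report
0 2 * * * cd /opt/cloudtorepo && ./drift.sh --output ./tf-output --report ./drift.txt && ./report.sh --output ./tf-output --drift ./drift.txt --out report.md
```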
# report only
./drift.sh --output ./tf-output --regions "us-east-1"

# apply changes to imports.tf
./drift.sh --output ./tf-output --regions "us-east-1" --apply

-------------------------------------------------------
NEW (2 resource(s) found in AWS, not in imports.tf)
  + aws_instance.web_server_new (id: i-0abc123def456)
  + aws_instance.batch_worker (id: i-0def789abc012)

REMOVED (1 resource(s) in imports.tf, no longer in AWS)
  - aws_instance.old_bastion (id: i-0111222333444)
-------------------------------------------------------
Unchanged:               22
New (not yet imported):   2
Removed (stale):          1
After exporting, reconcile.sh cross-references your imports.tf
files against AWS Resource Explorer to surface any resources that were skipped.
No Resource Explorer? Use --local for an instant per-service summary from
the output directory alone.
Compares every import ID against all ARNs returned by Resource Explorer. Reports matched, missed, and overall coverage percentage.
--local skips the API call entirely and prints import block counts per account / region / service straight from the output directory. Useful during initial setup.
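The matching step can be pictured as a suffix check: an import ID counts as covered when some ARN returned by Resource Explorer ends with it. This is an illustrative simplification, not reconcile.sh's exact logic:

```shell
# Sample ARN list as Resource Explorer might return it
arns='arn:aws:ec2:us-east-1:123456789012:instance/i-0abc123
arn:aws:s3:::my-bucket'

covered() {   # succeeds when some ARN ends with the given import ID
  printf '%s\n' "$arns" | grep -q "$1\$"
}

covered i-0abc123 && echo "matched"
covered i-0zzz999 || echo "potentially missed"
```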
Run ./cloudtorepo.sh --services list or ./drift.sh --services list to print all supported service names — useful for scripting or building a custom --services argument.
# full coverage check (requires Resource Explorer aggregator index)
./reconcile.sh --output ./tf-output --index-region us-east-1
Summary
-------
Total resources (Resource Explorer): 847
Matched to exported import blocks: 801
Potentially missed: 46
Coverage: 94%
# local summary — no AWS API calls beyond listing the output directory
./reconcile.sh --output ./tf-output --local

Account / Region / Service            Import blocks
------------------------------------------
123456789012 / us-east-1 / ec2                   47
123456789012 / us-east-1 / eks                   12
123456789012 / us-east-1 / rds                    8
123456789012 / us-east-1 / s3                     5
123456789012 / eu-west-1 / ec2                   31
------------------------------------------
Total                                           103

Tip: enable Resource Explorer for full coverage scoring:
  aws resource-explorer-2 create-index --type AGGREGATOR --region us-east-1
# print every supported service name, one per line
./cloudtorepo.sh --services list
acm
apigateway
appconfig
apprunner
athena
backup
bedrock
cloudfront
cloudtrail
cloudwatch
codebuild
codepipeline
cognito
config
connect
documentdb
dynamodb
ebs
ecr
ecs
eks
elasticache
elb
emr
...and more
report.sh reads any cloudtorepo.sh output directory and generates a clean Markdown document — service breakdown, import counts, and an optional drift section from a drift.sh --report file.
Write to stdout or pass --out report.md. Paste it into a PR description, Confluence, or a GitHub wiki — it renders instantly.
Pass --drift ./drift.txt (produced by drift.sh --report) to append a NEW / REMOVED breakdown to the same document.
Run report.sh at the end of every nightly drift job and commit report.md to an infra-reports branch for an instant audit trail with no extra tooling.
# basic — prints to stdout
./report.sh --output ./tf-output

# with drift section, write to file
./drift.sh --output ./tf-output --report ./drift.txt
./report.sh --output ./tf-output \
  --drift ./drift.txt \
  --title "Production AWS Report — March 2026" \
  --out report.md
# Production AWS Report — March 2026

**Generated:** 2026-03-25T11:06:09Z
**Output directory:** `./tf-output`

---

## Summary

| | |
|---|---|
| **Account(s)** | `987267051295` |
| **Region(s)** | af-south-1 |
| **Total import blocks** | **366** |

## Resources by Service

### 987267051295 / af-south-1

| Service | Import blocks |
|---------|--------------|
| `config` | 319 |
| `kms` | 9 |
| `vpc` | 9 |
| `sns` | 8 |
| `eventbridge` | 4 |
| `lambda` | 4 |
| `ebs` | 3 |
| `s3` | 3 |
| `ec2` | 2 |
| `servicecatalog` | 2 |
| `acm` | 1 |
| `backup` | 1 |
| `sqs` | 1 |

## Account Totals

| Account | Import blocks |
|---------|--------------|
| `987267051295` | 366 |
| `123456789012` | 219 |

## Cross-Account Service Totals

| Service | Import blocks |
|---------|--------------|
| `config` | 412 |
| `vpc` | 89 |
| `ec2` | 41 |
| `kms` | 18 |
| `lambda` | 12 |

---

## Drift Report

| Status | Count |
|--------|-------|
| Unchanged | 358 |
| New (not yet imported) | 0 |
| Removed (stale) | 8 |

### 987267051295 / af-south-1 / vpc

**REMOVED (8 resource(s) in imports.tf, no longer in AWS)**

- `- aws_subnet.defaultvpcsubnetb (id: subnet-93d3d4eb)`
- `- aws_subnet.defaultvpcsubnetc (id: subnet-52557418)`
- `- aws_subnet.defaultvpcsubneta (id: subnet-b156b3d8)`
- `- aws_security_group.cloudflare_proxy (id: sg-02c2539d27ffe5a5f)`
- `- aws_security_group.launch_wizard_6 (id: sg-011a26aef17beff13)`
- `- aws_security_group.launch_wizard_7 (id: sg-024bd977787fb2789)`
- `- aws_route_table.defaultroutetable (id: rtb-6157b208)`
- `- aws_internet_gateway.defaultvpcigw (id: igw-5a56b333)`
After exporting, run.sh automates the terraform init +
terraform plan -generate-config-out=generated.tf step across every
service directory, so there is no need to run it manually per folder.
Finds every directory with an imports.tf and runs Terraform in it. Filter by account, region, or service to process a subset.
Runs up to three concurrent Terraform processes by default (--parallel 3). Writes a log per directory and prints a pass/fail/no-change summary at the end.
cloudtorepo.sh and drift.sh scan up to --parallel 5 services simultaneously. All AWS API calls automatically retry on throttling with exponential back-off.
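The retry behaviour can be sketched as a small wrapper that doubles its sleep after each failed attempt. The limits below are assumptions chosen for illustration, not cloudtorepo.sh's actual values:

```shell
retry() {   # retry "$@" up to 5 times with exponential back-off
  local attempt=1 max=5 delay=1
  while ! "$@"; do
    [ "$attempt" -ge "$max" ] && return 1   # give up after max attempts
    sleep "$delay"
    delay=$((delay * 2))                    # back-off: 1s, 2s, 4s, 8s ...
    attempt=$((attempt + 1))
  done
}

retry true && echo "succeeded"
```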
# process everything
./run.sh --output ./tf-output

# filter to specific services and region
./run.sh --output ./tf-output \
  --regions "us-east-1" \
  --services "ec2,eks,rds"

# preview directories without running terraform
./run.sh --output ./tf-output --dry-run

-------------------------------------------------------
[OK]        123456789012/us-east-1/eks
[OK]        123456789012/us-east-1/rds
[no-change] 123456789012/us-east-1/s3
[FAIL]      123456789012/us-east-1/iam (see .run.log)
-------------------------------------------------------
Succeeded (changes written): 2
No changes:                  1
Failed:                      1
After run.sh generates generated.tf, use import.sh
to actually call terraform import for every resource — skipping anything
already in state automatically.
Runs terraform state list before each import. Resources already managed are silently skipped — safe to re-run as many times as needed.
Pass --parallel N to import multiple service directories concurrently. Per-directory logs in .import.log, pass/fail summary at the end.
--dry-run prints every address (id: ...) pair that would be imported without touching state. Use it to verify scope before committing.
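The skip check itself is simple: compare each import address against the output of terraform state list. A sketch of the idea with canned state output (not import.sh's exact code):

```shell
already_in_state() {   # $1 = resource address, $2 = `terraform state list` output
  printf '%s\n' "$2" | grep -Fxq "$1"   # fixed-string, whole-line match
}

state='aws_instance.web
aws_db_instance.main'

already_in_state "aws_instance.web" "$state"   && echo "skip aws_instance.web"
already_in_state "aws_s3_bucket.logs" "$state" || echo "import aws_s3_bucket.logs"
```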
# dry run — see what would be imported
./import.sh --output ./tf-output --dry-run

# import everything (sequential)
./import.sh --output ./tf-output

# import with parallel workers + auto terraform init
./import.sh --output ./tf-output --parallel 4 --init

-------------------------------------------------------
[INFO] Importing resources (parallel=4)...
[INFO] [123456789012/us-east-1/ec2] importing aws_instance.web (id: i-0abc123)...
[INFO] [123456789012/us-east-1/rds] importing aws_db_instance.main (id: mydb)...
[INFO] =======================================================
[INFO] Resources imported: 47
[INFO] Resources skipped:  3 (already in state)
[INFO] Resources failed:   0
[INFO] =======================================================
Three tools required: AWS CLI v2, Terraform 1.5+, and jq. Works on the Bash that ships with macOS (3.2+).
Install everything at once on macOS: brew install awscli terraform jq
git clone https://github.com/cloudtorepo/cloudtorepo.git
cd cloudtorepo
chmod +x cloudtorepo.sh reconcile.sh drift.sh run.sh report.sh import.sh
./cloudtorepo.sh \
  --regions "us-east-1" \
  --services "ec2,vpc,rds" \
  --dry-run
./cloudtorepo.sh \
  --regions "us-east-1,eu-west-1" \
  --services "ec2,eks,rds,s3,vpc" \
  --state-bucket my-tf-state-prod \
  --parallel 5 \
  --output ./tf-output
./cloudtorepo.sh \
  --profile prod-readonly \
  --regions "eu-west-1" \
  --services "ec2,vpc,rds,eks" \
  --output ./tf-output
Pass any named profile from ~/.aws/config. Works alongside --role for cross-account sweeps; the profile authenticates the base caller, the role is assumed per account.
./cloudtorepo.sh \
  --regions "us-east-1" \
  --tags "Env=prod,Team=sre" \
  --output ./tf-output
Only imports resources that carry the specified tags. Uses the Resource Groups Tagging API with no extra setup required beyond the tags themselves.
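Server-side, the filtering is handled by the Resource Groups Tagging API; the equivalent client-side logic looks roughly like this (a sketch, with the every-pair-must-match rule assumed):

```shell
want="Env=prod,Team=sre"   # the --tags argument: every pair must be present

matches_tags() {   # $1 = comma-separated Tag=Value pairs on a resource
  local IFS=','
  for pair in $want; do
    case ",$1," in *",$pair,"*) ;; *) return 1 ;; esac
  done
}

matches_tags "Env=prod,Team=sre,Name=web" && echo "import"
matches_tags "Env=prod"                   || echo "skip"
```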
./cloudtorepo.sh \
  --regions "us-east-1,eu-west-1" \
  --output ./tf-output \
  --resume
Skips account/region/service combinations already written to the output directory. Safe to run again after a network interruption or timeout.
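One plausible way to picture the resume check (an assumption about the mechanism, not the script's exact code): a combination is skipped when its imports.tf already exists in the output directory.

```shell
outdir=./demo-output
mkdir -p "$outdir/123456789012/us-east-1/ec2"
touch "$outdir/123456789012/us-east-1/ec2/imports.tf"   # pretend ec2 was exported

decide() {   # prints "skip" if this service was already exported
  if [ -f "$outdir/123456789012/us-east-1/$1/imports.tf" ]; then
    echo "skip"
  else
    echo "export"
  fi
}

decide ec2   # already done on the previous run
decide rds   # still to export
```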
./run.sh --output ./tf-output
Runs terraform init + terraform plan -generate-config-out=generated.tf in every service directory. Review generated.tf, remove computed attributes, and commit.
112 BATS tests across 7 suites (with mocked AWS CLI and Terraform, so no real credentials are needed) and ShellCheck static analysis run automatically via a pre-commit hook and GitHub Actions CI, blocking any commit or push that introduces a regression.