Open Source · MIT License

Reverse-engineer your AWS estate
into Terraform
with CloudtoRepo

CloudtoRepo scans your AWS account and generates ready-to-use Terraform import {} blocks, resource skeletons, and S3 remote-state backends. No click-ops. No paid tools.

View on GitHub Get started ↓
cloudtorepo
$ ./cloudtorepo.sh \
  --regions "us-east-1,eu-west-1" \
  --services "ec2,eks,rds,s3,vpc" \
  --state-bucket my-tf-state-prod \
  --output ./tf-output

Scanning ec2 in us-east-1 ... 24 resources
Scanning eks in us-east-1 ... 3 clusters
Scanning rds in us-east-1 ... 8 instances
Done. Output written to ./tf-output/

Context

Why this is hard, and why most guides get it wrong

The instinct is to think of this as "exporting Terraform." It is not. What you are actually doing is closer to reverse compilation: discovering all resources across accounts and regions, generating Terraform configuration from live infrastructure, reconstructing dependencies, capturing state, and then refactoring everything into something a human can maintain.

The tooling is older than it appears

Terraformer, the tool most guides recommend, was built by the Waze engineering team and has not had meaningful maintenance in years. It works, but it predates Terraform's native import {} blocks and generates output that needs significant cleanup.

Former2 is primarily a browser-based tool, and the CLI variant is a separate community project with limited coverage. Both are fine for getting a rough baseline, but neither should be your primary strategy in 2026.

AWS was never designed to be reverse-compiled

Resources reference each other in ways that tooling will not always catch. Some services do not map cleanly to Terraform resources no matter what you do.

IAM is particularly brutal: the relationship between roles, policies, attachments, and instance profiles is rarely clean in a lived-in estate. Accept these rough edges going in and you will be far less surprised.


How it works

From live AWS to Terraform in minutes

CloudtoRepo uses the AWS CLI to discover your resources, then writes the Terraform files needed to bring them under version control, with no manual resource hunting required.

STEP 01

Scan your account

Run the script against one region or sweep an entire organisation across multiple accounts and regions.

STEP 02

Import blocks generated

One import {} block per discovered resource, grouped into per-service directories, ready for Terraform 1.5+.

STEP 03

Auto-populate config

Run terraform plan -generate-config-out=generated.tf in any service dir. Terraform reads live state and writes fully-populated HCL.

STEP 04

Detect drift & report

Run drift.sh regularly to catch resources created or deleted outside Terraform. Use --apply to patch imports.tf, then report.sh to generate a Markdown summary.


Supported services

65+ AWS services covered

All the core services you need to bring a real-world AWS estate under Terraform control.

Compute

ec2 · ebs · ecs · eks · lambda

EKS includes clusters, node groups, addons, and Fargate profiles.

Networking

vpc · elb · cloudfront · route53 · acm · transitgateway · vpcendpoints

VPC includes subnets, security groups, route tables, internet gateways, and NAT gateways.

Data

rds · dynamodb · elasticache · msk · s3 · efs · opensearch · redshift · documentdb

Streaming

kinesis · firehose

Integration

sqs · sns · apigateway · eventbridge · stepfunctions · ses

Security & Compliance

iam · kms · secretsmanager · wafv2 · config · cloudtrail · guardduty

IAM includes roles, instance profiles, and OIDC providers.

Platform & CI/CD

ecr · ssm · cloudwatch · backup · codepipeline · codebuild

Auth

cognito

User pools, clients, and identity pools, fully paginated beyond the 60-pool API limit.

ETL & Analytics

glue · athena · lakeformation · memorydb · emr

Storage & Transfer

fsx · transfer

App Platform

elasticbeanstalk · apprunner · lightsail · connect · appconfig

AI / ML

bedrock · sagemaker

Bedrock agents & knowledge bases; SageMaker domains & endpoints.

Governance & Org

servicecatalog · organizations · ram · servicequotas · xray

Organizations covers accounts & OUs; RAM covers resource shares across accounts.


Output

Clean, structured Terraform files

Each service gets its own directory with three files that Terraform can use immediately.

imports.tf

One import {} block per discovered resource.

import {
  to = aws_eks_cluster.cluster_production
  id = "production"
}

import {
  to = aws_rds_instance.db_primary
  id = "prod-postgres-01"
}

backend.tf

S3 remote state configuration and AWS provider block, ready to init.

terraform {
  backend "s3" {
    bucket = "my-tf-state"
    key    = "us-east-1/eks/terraform.tfstate"
    region = "us-east-1"
  }
}

resources.tf

Empty resource skeletons matching every import block, populated by terraform plan -generate-config-out.

resource "aws_eks_cluster" "cluster_production" {
  # populated by terraform plan
  # -generate-config-out=generated.tf
}

Directory structure

Organised by account → region → service for easy navigation.

tf-output/
├── summary.txt
└── 123456789012/
    ├── us-east-1/
    │   ├── ec2/
    │   ├── eks/
    │   └── rds/
    └── eu-west-1/

Drift detection

Stay in sync after day one

Once your Terraform baseline is committed, drift.sh re-scans AWS and diffs the results against your imports.tf files with no AWS Resource Explorer required.

NEW

Resources added outside Terraform

Found in AWS but missing from imports.tf. With --apply, new import {} blocks are appended automatically.

REMOVED

Resources deleted outside Terraform

Present in imports.tf but no longer in AWS. With --apply, stale blocks are commented out with a timestamp.

CI

Run on a schedule

Drop drift.sh into a nightly CI job. Pass --report ./drift.txt to save output, then feed it into report.sh to generate a Markdown summary.

Sample drift report

# report only
./drift.sh --output ./tf-output --regions "us-east-1"

# apply changes to imports.tf
./drift.sh --output ./tf-output --regions "us-east-1" --apply

-------------------------------------------------------
NEW  (2 resource(s) found in AWS, not in imports.tf)
  + aws_instance.web_server_new  (id: i-0abc123def456)
  + aws_instance.batch_worker    (id: i-0def789abc012)
REMOVED  (1 resource(s) in imports.tf, no longer in AWS)
  - aws_instance.old_bastion     (id: i-0111222333444)
-------------------------------------------------------
Unchanged:               22
New (not yet imported):   2
Removed (stale):          1
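
Under the hood, the NEW / REMOVED split is a set difference between the IDs currently live in AWS and the IDs already recorded in imports.tf. A minimal, self-contained sketch of that diff using comm(1) — the file names and IDs below are illustrative, not what drift.sh actually writes:

```shell
# illustrative ID lists — the real ones come from the AWS API and imports.tf
printf 'i-0abc\ni-0def\n' > live_ids.txt      # IDs currently in AWS
printf 'i-0abc\ni-0111\n' > imported_ids.txt  # IDs in imports.tf
sort -o live_ids.txt live_ids.txt             # comm(1) requires sorted input
sort -o imported_ids.txt imported_ids.txt

echo "NEW:";     comm -23 live_ids.txt imported_ids.txt  # in AWS, not imported
echo "REMOVED:"; comm -13 live_ids.txt imported_ids.txt  # imported, gone from AWS
```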

Coverage check

Find what you missed

After exporting, reconcile.sh cross-references your imports.tf files against AWS Resource Explorer to surface any resources that were skipped. No Resource Explorer? Use --local for an instant per-service summary from the output directory alone.

%

Coverage score

Compares every import ID against all ARNs returned by Resource Explorer. Reports matched, missed, and overall coverage percentage.

LOCAL

No Resource Explorer needed

--local skips the API call entirely and prints import block counts per account / region / service straight from the output directory. Useful during initial setup.

LIST

Discover supported services

Run ./cloudtorepo.sh --services list or ./drift.sh --services list to print all supported service names — useful for scripting or building a custom --services argument.

reconcile.sh — with Resource Explorer

# full coverage check (requires Resource Explorer aggregator index)
./reconcile.sh --output ./tf-output --index-region us-east-1

Summary
-------
Total resources (Resource Explorer):  847
Matched to exported import blocks:    801
Potentially missed:                    46
Coverage:                              94%

reconcile.sh --local — no Resource Explorer required

# local summary — no AWS API calls beyond listing the output directory
./reconcile.sh --output ./tf-output --local

Account / Region / Service        Import blocks
------------------------------------------
123456789012 / us-east-1 / ec2           47
123456789012 / us-east-1 / eks           12
123456789012 / us-east-1 / rds            8
123456789012 / us-east-1 / s3             5
123456789012 / eu-west-1 / ec2           31
------------------------------------------
Total                                   103

Tip: enable Resource Explorer for full coverage scoring:
  aws resource-explorer-2 create-index --type AGGREGATOR --region us-east-1
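
The --local summary boils down to counting import {} blocks per directory. A rough sketch of that tally — the directory layout mirrors the documented output structure, but the grep pattern and sample files here are assumptions for illustration:

```shell
# build a tiny sample output tree (illustrative)
mkdir -p tf-output/123456789012/us-east-1/ec2
printf 'import {\n  to = aws_instance.a\n  id = "i-1"\n}\nimport {\n  to = aws_instance.b\n  id = "i-2"\n}\n' \
  > tf-output/123456789012/us-east-1/ec2/imports.tf

# count import blocks per account/region/service
for f in tf-output/*/*/*/imports.tf; do
  rel=${f#tf-output/}
  echo "${rel%/imports.tf}  $(grep -c '^import {' "$f")"
done
```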

--services list — discover all supported services

# print every supported service name, one per line
./cloudtorepo.sh --services list

acm
apigateway
appconfig
apprunner
athena
backup
bedrock
cloudfront
cloudtrail
cloudwatch
codebuild
codepipeline
cognito
config
connect
documentdb
dynamodb
ebs
ecr
ecs
eks
elasticache
elb
emr
...and more

Reporting

One-command Markdown reports

report.sh reads any cloudtorepo.sh output directory and generates a clean Markdown document — service breakdown, import counts, and an optional drift section from a drift.sh --report file.

MD

Markdown output

Write to stdout or pass --out report.md. Paste it into a PR description, Confluence, or a GitHub wiki — it renders instantly.

DRIFT

Drift section included

Pass --drift ./drift.txt (produced by drift.sh --report) to append a NEW / REMOVED breakdown to the same document.

CI

Commit it to git

Run report.sh at the end of every nightly drift job and commit report.md to an infra-reports branch — instant audit trail with no extra tooling.

Usage

# basic — prints to stdout
./report.sh --output ./tf-output

# with drift section, write to file
./drift.sh --output ./tf-output --report ./drift.txt
./report.sh --output ./tf-output \
  --drift  ./drift.txt \
  --title  "Production AWS Report — March 2026" \
  --out    report.md

Example report output — account 987267051295 / af-south-1

# Production AWS Report — March 2026

**Generated:** 2026-03-25T11:06:09Z
**Output directory:** `./tf-output`

---

## Summary

| | |
|---|---|
| **Account(s)** | `987267051295` |
| **Region(s)**  | af-south-1 |
| **Total import blocks** | **366** |

## Resources by Service

### 987267051295 / af-south-1

| Service | Import blocks |
|---------|--------------|
| `config`        | 319 |
| `kms`           |   9 |
| `vpc`           |   9 |
| `sns`           |   8 |
| `eventbridge`   |   4 |
| `lambda`        |   4 |
| `ebs`           |   3 |
| `s3`            |   3 |
| `ec2`           |   2 |
| `servicecatalog`|   2 |
| `acm`           |   1 |
| `backup`        |   1 |
| `sqs`           |   1 |

## Account Totals

| Account | Import blocks |
|---------|--------------|
| `987267051295` | 366 |
| `123456789012` | 219 |

## Cross-Account Service Totals

| Service | Import blocks |
|---------|--------------|
| `config`  | 412 |
| `vpc`     |  89 |
| `ec2`     |  41 |
| `kms`     |  18 |
| `lambda`  |  12 |

---

## Drift Report

| Status | Count |
|--------|-------|
| Unchanged              | 358 |
| New (not yet imported) |   0 |
| Removed (stale)       |   8 |

### 987267051295 / af-south-1 / vpc
**REMOVED  (8 resource(s) in imports.tf, no longer in AWS)**

- `- aws_subnet.defaultvpcsubnetb  (id: subnet-93d3d4eb)`
- `- aws_subnet.defaultvpcsubnetc  (id: subnet-52557418)`
- `- aws_subnet.defaultvpcsubneta  (id: subnet-b156b3d8)`
- `- aws_security_group.cloudflare_proxy  (id: sg-02c2539d27ffe5a5f)`
- `- aws_security_group.launch_wizard_6  (id: sg-011a26aef17beff13)`
- `- aws_security_group.launch_wizard_7  (id: sg-024bd977787fb2789)`
- `- aws_route_table.defaultroutetable  (id: rtb-6157b208)`
- `- aws_internet_gateway.defaultvpcigw  (id: igw-5a56b333)`

Workflow automation

One command to rule them all

After exporting, run.sh automates the terraform init + terraform plan -generate-config-out=generated.tf step across every service directory, so you never have to run it manually folder by folder.

AUTO

Walk the entire output tree

Finds every directory with an imports.tf and runs Terraform in it. Filter by account, region, or service to process a subset.

×5

Parallel runs

Runs up to 3 Terraform processes concurrently by default (tune with --parallel N). Per-directory logs, with a pass/fail/no-change summary at the end.

FAST

Parallel scans + auto-retry

cloudtorepo.sh and drift.sh scan up to 5 services at a time (--parallel 5). All AWS API calls retry automatically on throttling, with exponential back-off.
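
The throttling behaviour described above follows the standard retry-with-exponential-back-off pattern. A minimal sketch — the function name and limits are illustrative, not the scripts' internals:

```shell
# retry a command up to $max times, doubling the delay between attempts
retry() {
  local attempt=1 max=5 delay=1
  until "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      return 1                 # give up after max attempts
    fi
    sleep "$delay"
    delay=$(( delay * 2 ))     # 1s, 2s, 4s, ...
    attempt=$(( attempt + 1 ))
  done
}

retry true && echo "ok"
```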

run.sh usage

# process everything
./run.sh --output ./tf-output

# filter to specific services and region
./run.sh --output ./tf-output \
  --regions "us-east-1" \
  --services "ec2,eks,rds"

# preview directories without running terraform
./run.sh --output ./tf-output --dry-run

-------------------------------------------------------
  [OK]        123456789012/us-east-1/eks
  [OK]        123456789012/us-east-1/rds
  [no-change] 123456789012/us-east-1/s3
  [FAIL]      123456789012/us-east-1/iam  (see .run.log)
-------------------------------------------------------
Succeeded (changes written):  2
No changes:                   1
Failed:                       1

Terraform import

From import blocks to state in one command

After run.sh generates generated.tf, use import.sh to actually call terraform import for every resource — skipping anything already in state automatically.

SKIP

State-aware

Runs terraform state list before each import. Resources already managed are silently skipped — safe to re-run as many times as needed.

×N

Parallel imports

Pass --parallel N to import multiple service directories concurrently. Per-directory logs in .import.log, pass/fail summary at the end.

DRY

Preview first

--dry-run prints every resource address and ID that would be imported, without touching state. Use it to verify scope before committing.

import.sh usage

# dry run — see what would be imported
./import.sh --output ./tf-output --dry-run

# import everything (sequential)
./import.sh --output ./tf-output

# import with parallel workers + auto terraform init
./import.sh --output ./tf-output --parallel 4 --init

-------------------------------------------------------
[INFO]  Importing resources (parallel=4)...
[INFO]    [123456789012/us-east-1/ec2] importing aws_instance.web (id: i-0abc123)...
[INFO]    [123456789012/us-east-1/rds] importing aws_db_instance.main (id: mydb)...
[INFO]  =======================================================
[INFO]  Resources imported:     47
[INFO]  Resources skipped:       3  (already in state)
[INFO]  Resources failed:        0
[INFO]  =======================================================
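
The state-aware skip is easy to reason about: compare each candidate address against terraform state list before importing. A self-contained sketch, with a mock state list standing in for the real terraform call:

```shell
# mock of `terraform state list` — addresses already under management
state_list() { printf 'aws_instance.web\naws_s3_bucket.logs\n'; }

for addr in aws_instance.web aws_db_instance.main; do
  if state_list | grep -qx "$addr"; then
    echo "skip   $addr (already in state)"
  else
    echo "import $addr"   # real script: terraform import "$addr" "$id"
  fi
done
```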
Quickstart

Up and running in four commands

Three tools required: AWS CLI v2, Terraform 1.5+, and jq. Works on the Bash that ships with macOS (3.2+).

Install everything at once on macOS: brew install awscli terraform jq

1

Clone the repo

git clone https://github.com/cloudtorepo/cloudtorepo.git
cd cloudtorepo
chmod +x cloudtorepo.sh reconcile.sh drift.sh run.sh report.sh import.sh
2

Dry-run to preview resource counts

./cloudtorepo.sh \
  --regions "us-east-1" \
  --services "ec2,vpc,rds" \
  --dry-run
3

Export with S3 remote state

./cloudtorepo.sh \
  --regions "us-east-1,eu-west-1" \
  --services "ec2,eks,rds,s3,vpc" \
  --state-bucket my-tf-state-prod \
  --parallel 5 \
  --output ./tf-output
4

Named AWS profile (optional)

./cloudtorepo.sh \
  --profile prod-readonly \
  --regions "eu-west-1" \
  --services "ec2,vpc,rds,eks" \
  --output ./tf-output

Pass any named profile from ~/.aws/config. Works alongside --role for cross-account sweeps: the profile authenticates the base caller, and the role is assumed per account.

5

Filter by tags (optional)

./cloudtorepo.sh \
  --regions "us-east-1" \
  --tags "Env=prod,Team=sre" \
  --output ./tf-output

Only imports resources that carry the specified tags. Uses the Resource Groups Tagging API with no extra setup required beyond the tags themselves.

6

Resume a partial scan (optional)

./cloudtorepo.sh \
  --regions "us-east-1,eu-west-1" \
  --output ./tf-output \
  --resume

Skips account/region/service combinations already written to the output directory. Safe to run again after a network interruption or timeout.
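
Conceptually, --resume is a directory-existence check per account/region/service combination. A sketch under that assumption (paths illustrative):

```shell
# simulate a partial scan: ec2 already exported, rds not yet
mkdir -p tf-output/123456789012/us-east-1/ec2

for svc in ec2 rds; do
  dir="tf-output/123456789012/us-east-1/$svc"
  if [ -d "$dir" ]; then
    echo "skip $svc (already exported)"
  else
    echo "scan $svc"
  fi
done
```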

7

Populate all service directories at once

./run.sh --output ./tf-output

Runs terraform init + terraform plan -generate-config-out=generated.tf in every service directory. Review generated.tf, remove computed attributes, and commit.


Quality

Tested before every commit

112 BATS tests across 7 suites (mock AWS CLI and Terraform, no real credentials needed) plus ShellCheck static analysis run automatically via a pre-commit hook and GitHub Actions CI, blocking any commit or push that introduces a regression.

112
automated tests
7
test suites
cloudtorepo · drift · import · reconcile · report · run · common
0
real AWS credentials needed
mock AWS CLI in tests/helpers/
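
The mock-CLI approach is worth copying for your own shell tooling: put a fake aws executable first on PATH and return canned JSON. A minimal sketch of such a shim — the actual helper in tests/helpers/ will differ:

```shell
# create a fake `aws` binary that returns canned JSON
mkdir -p mockbin
cat > mockbin/aws <<'EOF'
#!/bin/sh
# ignore all arguments; return a canned empty response
echo '{"Reservations":[]}'
EOF
chmod +x mockbin/aws

# any code that shells out to `aws` now hits the mock instead of AWS
PATH="$PWD/mockbin:$PATH" aws ec2 describe-instances
```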