Advanced Features
Powerful capabilities for managing complex infrastructure, including resource graphs, project cloning, Terraform outputs, file operations, and remote state backends.
Resource Graph Visualization
Visualize dependencies between your Terraform resources to understand relationships and troubleshoot issues.
What is a Resource Graph?
A resource graph shows how your infrastructure resources depend on each other. For example, a database might depend on a VPC, which depends on an internet gateway. ops0 automatically builds this graph from your Terraform code.
Viewing the Resource Graph
Navigate to your project
Open the IaC project you want to visualize.
Click Resource Graph
Find the Resource Graph button in the project toolbar.
Explore dependencies
View resources as nodes and dependencies as arrows connecting them.
Resource graphs are built from your Terraform state file. Run a deployment first if you don't see the graph option.
Use Cases
- Understand complex infrastructure - See how 50+ resources relate to each other
- Debug dependency issues - Identify circular dependencies or missing connections
- Plan changes safely - Visualize impact before modifying critical resources
- Onboard new team members - Share visual infrastructure maps
Project Cloning
Duplicate entire IaC projects to create dev/staging/prod environments or experiment safely.
What Gets Cloned?
| Component | Status | Note |
|---|---|---|
| Terraform Files | ✅ | Files, directory structure |
| Project Settings | ✅ | Cloud provider, region |
| Variable Definitions | ✅ | terraform.tfvars content |
| Terraform State | ❌ | Clones start fresh |
| Deployment History | ❌ | History starts empty |
| GitHub Sync | ❌ | Must be re-configured |
How to Clone a Project
Open project menu
Click the ⋮ menu in the project list or project header.
Select "Duplicate Project"
Choose the duplicate option from the dropdown.
Configure the clone
- Set a new project name (e.g., `my-app-staging`)
- Optionally modify cloud integration or region
- Review the files that will be copied
Create clone
Click Duplicate to create the new project.
Modifying Cloned Projects
After cloning, you'll typically want to:
Update resource names
Change prefixes or suffixes to avoid conflicts.
# Original
resource "aws_s3_bucket" "prod_data" {
bucket = "my-app-prod-data"
}
# Cloned (update this)
resource "aws_s3_bucket" "staging_data" {
bucket = "my-app-staging-data"
}
Adjust variables
Update environment-specific values.
# Use different instance sizes for staging
variable "instance_type" {
default = "t3.medium" # Was t3.xlarge in prod
}
Change regions
Deploy to a different AWS region or GCP zone.
Connect new integrations
Link to a different cloud account if needed.
Renaming Best Practices
Always rename resources in cloned projects to prevent name collisions with the original infrastructure.
- Name consistently - Use environment prefixes: `prod-`, `staging-`, `dev-`
- Separate accounts - Deploy environments to different cloud accounts for isolation
- Document differences - Add comments explaining environment-specific changes
- Use workspaces - Consider Terraform workspaces for simpler environment management
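As a sketch of the workspace approach mentioned above, the built-in `terraform.workspace` value can drive environment-specific naming from a single codebase instead of a clone (resource names here are illustrative):

```hcl
# Hypothetical example: derive environment-specific names from the
# active Terraform workspace rather than maintaining cloned projects.
locals {
  env = terraform.workspace # e.g. "dev", "staging", "prod"
}

resource "aws_s3_bucket" "data" {
  # Produces my-app-dev-data, my-app-staging-data, my-app-prod-data
  bucket = "my-app-${local.env}-data"
}
```

Switching environments is then `terraform workspace select staging` rather than editing resource names by hand.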
Terraform Outputs
View and refresh Terraform output values without redeploying your infrastructure.
What are Terraform Outputs?
Outputs expose values from your infrastructure for use in other systems or for reference. Common examples:
- Load balancer URLs
- Database connection strings
- VPC IDs for cross-stack references
Defining Outputs
output "load_balancer_url" {
description = "Public URL of the load balancer"
value = aws_lb.main.dns_name
}
output "database_endpoint" {
description = "RDS database endpoint"
value = aws_db_instance.main.endpoint
sensitive = true # Marks output as sensitive
}
output "vpc_id" {
description = "VPC ID for cross-stack references"
value = aws_vpc.main.id
}
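For illustration, another stack can consume outputs like `vpc_id` through a `terraform_remote_state` data source; the bucket, key, and region below are assumptions matching the backend examples later in this page:

```hcl
# Hypothetical consumer project: read outputs from another stack's
# remote state instead of hardcoding IDs.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "your-company-terraform-state" # assumed bucket name
    key    = "projects/network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_security_group" "app" {
  name   = "app"
  # References the "vpc_id" output defined in the networking stack.
  vpc_id = data.terraform_remote_state.network.outputs.vpc_id
}
```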
Viewing Outputs in ops0
Open your project
Navigate to the IaC project with deployed infrastructure.
Click Outputs/Secrets
Find the button in the project toolbar.
View output values
The Outputs tab shows all defined outputs with their current values.
Refresh outputs
Click the ⟳ refresh button to fetch the latest values from your state.
Output Features
Copy to Clipboard
- Click the copy icon next to any output value
- Paste into other applications or scripts
Sensitive Output Protection
- Outputs marked with `sensitive = true` are masked: `••••••••••••••••`
- Provides security for passwords, API keys, and connection strings
Type Display
- Outputs show their type: string, list, map, object
- Complex objects are displayed with expandable JSON viewers
Search and Filter
- Use the search box to find specific outputs
- Useful for projects with many outputs
Refreshing Outputs
Terraform outputs reflect the current state of your infrastructure. Refresh outputs when:
- Infrastructure changed outside of ops0 (manual changes in cloud console)
- You deployed via Terraform CLI locally
- You want to verify the latest values without redeploying
Refreshing outputs runs `terraform output` against your state file. It doesn't modify infrastructure or trigger a new deployment.
File Renaming
Rename Terraform files in your project to reorganize your infrastructure code.
Renaming a File
Right-click the file
In the file explorer, right-click the file you want to rename.
Select "Rename"
Choose the rename option from the context menu.
Enter new path
Type the new file path/name in the modal.
Confirm rename
Click Rename to apply the change.
File Path Format
Use forward slashes for directory paths:
modules/networking/vpc.tf
environments/prod/main.tf
resources/database.tf
What Happens During Rename
- A new version of the file is created with the new path
- The old version is marked as deleted (soft delete)
- File content remains unchanged
- File version history is maintained
Limitations
Renaming files doesn't automatically update module source paths or references in other files. Update these manually after renaming.
Check after renaming:
- Module source paths pointing to the renamed file
- Relative path references in `terraform` blocks
- Documentation or comments referencing the old filename
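For example, if networking files were moved under `modules/networking/`, a module block still pointing at the old directory must be updated by hand (paths here are illustrative):

```hcl
# Before the move (now broken):
# module "networking" {
#   source = "./networking"
# }

# After relocating the files under modules/networking/,
# update the source path manually:
module "networking" {
  source = "./modules/networking"
}
```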
Best Practices
- Rename before deployment - Avoid renaming files with active deployments in progress
- Test after renaming - Run `terraform validate` to catch broken references
- Use descriptive names - Name files based on the resources they contain:
  ✓ `database.tf`, `networking.tf`, `storage.tf`
  ✗ `main.tf`, `resources.tf`, `stuff.tf`
File Versioning
Every change to a Terraform file creates a new version, allowing you to track the evolution of your infrastructure code.
How It Works
ops0 automatically versions files when you:
- Create a new file
- Edit file content
- Rename a file
- Delete a file (soft delete - file is marked deleted but retained)
Version Information
Each file version includes:
- Version number - Incremental (v1, v2, v3...)
- Timestamp - When the version was created
- Author - User who made the change (via AI or manual edit)
- Content - Full file content at that version
Viewing Version History
Version history is maintained in the database and accessible through deployment history:
- View deployment details
- Check which file versions were deployed
- Compare changes between deployments
Backend Configuration & Remote State
Configure Terraform backend to store state remotely in S3, enabling team collaboration and state locking for safe concurrent operations.
Why Use Remote Backend?
Local State Limitations:
- State file stored only on one machine
- No team collaboration
- No locking mechanism
- Risk of state file loss
- Cannot share infrastructure between projects
Remote Backend Benefits:
- Centralized state storage in S3
- State locking with DynamoDB prevents concurrent modifications
- Team members can access and modify infrastructure
- Automatic state backups
- Version history of state changes
Backend Configuration Options
ops0 supports multiple backend configurations:
| Backend Type | Storage | Locking | Use Case |
|---|---|---|---|
| Local | ops0 database | ✅ Database locks | Single user, simple projects |
| S3 | AWS S3 bucket | ✅ DynamoDB table | Team collaboration, production workloads |
| Custom Remote | Your S3 bucket | ✅ Your DynamoDB table | Enterprise, compliance requirements |
Setting Up S3 Backend
Configure remote state storage using AWS S3 and DynamoDB for locking.
Prerequisites
AWS Integration Connected
Ensure your project has an AWS integration configured with appropriate permissions.
Required IAM Permissions
The AWS integration needs these permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::your-terraform-state-bucket",
"arn:aws:s3:::your-terraform-state-bucket/*"
]
},
{
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:DeleteItem"
],
"Resource": "arn:aws:dynamodb:*:*:table/terraform-state-lock"
}
]
}
S3 Bucket Created
Create an S3 bucket for state storage (or use ops0 auto-creation).
DynamoDB Table Created
Create a DynamoDB table for state locking with partition key LockID (string).
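If you prefer to manage the backend resources themselves with Terraform, a minimal sketch looks like this; it must live in a separate bootstrap project, since a stack cannot store its state in a bucket it has not created yet (bucket and table names mirror the examples on this page):

```hcl
# Bootstrap sketch: the S3 bucket and DynamoDB lock table used by the
# remote backend. Names are assumptions matching the examples above.
resource "aws_s3_bucket" "terraform_state" {
  bucket = "your-terraform-state-bucket"
}

# Versioning gives you a recovery path for the state file itself.
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "terraform_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # must be exactly this name (case-sensitive)

  attribute {
    name = "LockID"
    type = "S"
  }
}
```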
Configuring Backend in ops0
Navigate to Project Settings
Open your IaC project and click Settings in the toolbar.
Select Backend Tab
Click the Backend Configuration tab.
Choose Backend Type
Select S3 Remote Backend.
Configure S3 Details
Enter backend configuration:
- S3 Bucket: `your-company-terraform-state`
- S3 Key: `projects/${project_name}/terraform.tfstate`
- Region: `us-east-1`
- DynamoDB Table: `terraform-state-lock`
- Encrypt: ✅ Enable server-side encryption
Test Connection
Click Test Backend to verify ops0 can access S3 and DynamoDB.
Save Configuration
Click Save to apply backend settings.
What Happens When Backend is Configured
Backend Block Added
ops0 automatically adds this to your Terraform configuration:
terraform {
backend "s3" {
bucket = "your-company-terraform-state"
key = "projects/my-project/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "terraform-state-lock"
encrypt = true
}
}
State Migration
If you had local state, it's automatically migrated to S3 on next deployment.
Locking Enabled
DynamoDB table tracks state locks to prevent concurrent modifications.
State Locking
State locking prevents multiple users or deployments from modifying infrastructure simultaneously, which could corrupt the state file.
How Locking Works
Deployment Starts
When you start a deployment, ops0 attempts to acquire a lock.
Lock Acquired
A lock entry is created in the DynamoDB table with:
- LockID: Unique identifier for this state file
- Info: Deployment ID, user, timestamp
- Created: Lock acquisition time
Deployment Runs
Only this deployment can modify infrastructure while the lock is held.
Lock Released
After the deployment completes (or fails), the lock is automatically released.
Lock Information
State Lock Status
─────────────────────────────────────
Status: Locked
Locked by: Deployment #47
User: sarah@company.com
Started: 2024-01-15 10:30:00 UTC
Duration: 2m 15s
Operation: terraform apply
Handling Lock Conflicts
Scenario: You try to deploy while another deployment is running.
ops0 Response:
⚠️ State Lock Conflict
The Terraform state is currently locked by another deployment.
Locked by: Deployment #46
User: mike@company.com
Started: 3 minutes ago
Operation: terraform apply
Options:
- Wait for deployment #46 to complete
- Cancel deployment #46 (requires permission)
- Force unlock (dangerous - only if deployment is truly stuck)
Only force unlock if you're certain the locking deployment has crashed or been abandoned. Force unlocking during an active deployment can corrupt your state file.
Viewing State File
View the current Terraform state without downloading or exposing sensitive data.
State File Viewer
Open Project
Navigate to your IaC project.
Click State
Find the View State button in project toolbar.
Explore Resources
The state viewer shows:
- All managed resources
- Resource attributes and values
- Resource dependencies
- Terraform provider versions
State File Features
Resource List
- Filterable list of all resources in state
- Search by resource type or name
- Group by module
Resource Details
- Full attribute values for any resource
- Sensitive values are masked
- Dependency graph for selected resource
State Metadata
- Terraform version used
- Provider versions
- State format version
- Last updated timestamp
Backend Migration
Migrate between local and remote backends or between different remote configurations.
Local → S3 Migration
Configure S3 Backend
Set up S3 backend configuration as described above.
Trigger Migration
On next deployment, ops0 prompts:
Backend Migration Detected
─────────────────────────────────────
Current Backend: Local (ops0 database)
New Backend: S3 (your-company-terraform-state)
Terraform will copy your state file from local to S3.
This is safe and reversible. Your local state will be
backed up before migration.
[Proceed with Migration] [Cancel]
Migration Executes
During deployment initialization:
- Terraform detects backend change
- Copies state from local to S3
- Verifies state integrity
- Continues with deployment
Verify Migration
Check S3 bucket to confirm state file exists at configured key.
S3 → Different S3 Migration
Change S3 bucket or key path:
Update Backend Config
Change S3 bucket name or key in project settings.
Deploy with Migration
Terraform will:
- Copy state from old S3 location to new location
- Verify copy succeeded
- Continue deployment
Clean Up Old State
Optionally delete state file from old S3 location (manual step).
- Always test migrations in non-production projects first
- Ensure old backend is accessible during migration
- Keep backups of state files before migrating
- Verify new backend after migration
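Concretely, an S3-to-S3 migration is just a change to the backend block; Terraform detects the difference during initialization on the next deployment (all values below are illustrative):

```hcl
terraform {
  backend "s3" {
    # Old location (for reference):
    #   bucket = "legacy-terraform-state"
    #   key    = "app/terraform.tfstate"
    bucket         = "your-company-terraform-state"
    key            = "projects/my-project/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
```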
Bring Your Own Repository
Teams with existing IaC repositories (Terraform, OpenTofu, Oxid, etc.) can connect them directly to ops0 and start working immediately.
Create a Project
Create a new IaC project in ops0 and select your engine (Terraform, OpenTofu, or Oxid).
Connect Your Repository
Link your existing GitHub or GitLab repository via the integrations page. ops0 syncs your .tf files automatically with two-way sync.
Configure State Backend
Set up a state backend (S3, GCS, Azure Blob, or Oxid). If your state already lives in a remote backend, point ops0 to the same location. Terraform handles state initialization automatically.
Deploy
Run a plan to verify everything is in sync. Your existing code, modules, providers, and state work without modification.
Alternatively, teams starting fresh can use Discovery to scan existing cloud resources and generate IaC code automatically.
State File Operations
Downloading State
Download state file for local inspection or backup:
Open State Viewer
Navigate to View State in project toolbar.
Click Download
Click the Download State button.
Save File
The state downloads as a `terraform.tfstate` JSON file.
Use Cases:
- Manual backup before risky changes
- Debugging state file issues
- Importing into local Terraform CLI
- Compliance/audit requirements
State Backup History
ops0 automatically creates state backups:
| Event | Backup Created |
|---|---|
| Before Apply | State backed up before deployment |
| After Apply | New state version stored |
| Backend Migration | Both old and new states saved |
| Force Unlock | State backed up before unlocking |
View backup history in Project → State History.
Restoring from Backup
Navigate to State History
Open Project → Settings → State History.
Select Backup
Choose the backup version to restore.
Review Changes
Compare current state vs backup state.
Confirm Restore
Click Restore This Version.
Verify Infrastructure
Run `terraform plan` to see if infrastructure matches the restored state.
Restoring an old state file can cause Terraform to think resources were deleted if they were created after that backup. Always review the plan output carefully after restoring state.
Troubleshooting Backend Issues
Cannot Access S3 Bucket
Symptoms: Deployment fails with "access denied" or "bucket not found".
Checks:
- Verify bucket name is correct (no typos)
- Confirm AWS integration has `s3:GetObject` and `s3:PutObject` permissions
- Ensure bucket exists in the configured region
- Check bucket encryption settings allow your AWS role
Solution: Update IAM permissions or bucket policy to grant access.
DynamoDB Locking Errors
Symptoms: "Error acquiring state lock" or "lock table not found".
Checks:
- Verify DynamoDB table name matches configuration
- Confirm table exists in same region as S3 bucket
- Ensure partition key is named `LockID` (case-sensitive)
- Check AWS integration has DynamoDB permissions
Solution: Create table or update permissions.
State Locked Forever
Symptoms: Deployment failed but lock never released.
Cause: Process crashed before releasing lock.
Solution:
- Verify no deployment is actually running
- Check DynamoDB for lock entry
- Force unlock via project settings (requires admin permission)
- Run deployment again
State File Corrupted
Symptoms: Terraform plan fails with "state file corrupted" or JSON parse errors.
Recovery:
- Download last known good state from backup history
- Restore backup version
- Run `terraform plan` to verify
- Investigate what caused corruption (failed deployment, manual edit)
Tips for Advanced Usage
Organizing Large Projects
For projects with 20+ files, use this structure:
main.tf # Provider and backend configuration
variables.tf # Variable definitions
outputs.tf # Output definitions
modules/
networking/
vpc.tf
subnets.tf
security_groups.tf
compute/
instances.tf
autoscaling.tf
data/
databases.tf
storage.tf
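With that layout, the root `main.tf` wires the module directories together; a hedged sketch (the `subnet_ids` input/output is a hypothetical example of passing values between modules):

```hcl
# Sketch of a root main.tf composing the module directories above.
module "networking" {
  source = "./modules/networking"
}

module "compute" {
  source = "./modules/compute"
  # Hypothetical input: feed networking outputs into compute.
  subnet_ids = module.networking.subnet_ids
}

module "data" {
  source = "./modules/data"
}
```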
Resource Graph Navigation
- Zoom in/out - Use mouse wheel or pinch gestures
- Pan - Click and drag the canvas
- Focus on resource - Click a node to highlight its dependencies
- Filter by type - Show only specific resource types
Output Organization
Group related outputs together:
# Application outputs
output "app_url" { ... }
output "app_health_endpoint" { ... }
# Database outputs
output "db_endpoint" { ... }
output "db_port" { ... }
# Networking outputs
output "vpc_id" { ... }
output "subnet_ids" { ... }
Troubleshooting
Resource graph not available
Possible causes:
- No Terraform state file exists - deploy infrastructure first
- Project has no resources defined
- State file is corrupted or inaccessible
Solution: Run a successful deployment to create the state file.
Clone failed
Common issues:
- Not enough permissions - ensure you have the `editor` or `admin` role on the project
- Invalid project name - check naming constraints
- Source project has active deployment - wait for deployment to complete
Outputs not refreshing
Check:
- Terraform state exists and is accessible
- Cloud integration credentials are valid
- No state lock from another deployment
Rename broke my code
Fix:
- Update module `source` paths that referenced the old filename
- Check for hardcoded file paths in scripts or documentation
- Run `terraform validate` to identify broken references