ops0

Set Up Multi-Cloud

Manage AWS, GCP, and Azure infrastructure from a single ops0 organization — with unified policies, cross-cloud discovery, and a shared resource graph.


Before you start

  • ops0 organization created
  • Admin access to each cloud provider you want to connect
  • ops0 Settings access for adding integrations

Scenario

Your organization uses multiple cloud providers:

  • AWS for core infrastructure — EC2, RDS, S3, Lambda
  • GCP for machine learning — Vertex AI, BigQuery, GKE
  • Azure for enterprise integration — Active Directory, Cosmos DB, AKS

You want to manage all of this from one place with consistent policies, a shared view of resources, and unified deployment workflows.


Step 1: Add Multiple Cloud Integrations

ops0 lets you add as many cloud integrations as you need to a single organization. There is no limit per provider.

Go to Settings → Integrations → Add Integration and add each provider, following the detailed setup guide for each one.

Name your integrations clearly so team members understand what each one is for:

Integration Name    Provider   Purpose
aws-production      AWS        Production workloads
aws-staging         AWS        Staging and test environments
aws-data            AWS        Data platform account
gcp-ml-platform     GCP        Machine learning project
gcp-analytics       GCP        BigQuery and analytics
azure-enterprise    Azure      Enterprise directory and apps

Step 2: Create Cloud-Specific IaC Projects

Each ops0 IaC project targets exactly one integration (one cloud account). Keep each project scoped to one cloud and one environment — this makes plan output, state files, and policies predictable.

One cloud per project

Do not mix AWS and GCP resources in the same Terraform project. Terraform providers are separate and state files cannot span clouds. Create separate projects per cloud per environment.
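In practice, each project pins exactly one provider and keeps its own state. A minimal sketch for the aws-production-network project (the bucket, key, and region names are illustrative, not ops0 conventions):

# versions.tf — one cloud provider and one state file per project
terraform {
  required_version = ">= 1.5"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Illustrative backend — each project gets its own state key
  backend "s3" {
    bucket = "acme-terraform-state"
    key    = "aws-production-network/terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region = "us-east-1"
}

A sibling GCP project would declare only the hashicorp/google provider and its own backend, so plans and state never span clouds.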

Suggested project structure

Organization
├── aws-production-network        → aws-production
├── aws-production-compute        → aws-production
├── aws-production-databases      → aws-production
├── aws-staging                   → aws-staging
├── gcp-ml-platform               → gcp-ml-platform
├── gcp-analytics                 → gcp-analytics
└── azure-enterprise-apps         → azure-enterprise

Example: GCP project for Vertex AI

# main.tf — GCP Vertex AI Workbench notebook
resource "google_notebooks_instance" "ml_workbench" {
  name         = "ml-research-workbench"
  location     = "us-central1-a"
  machine_type = "n1-standard-4"

  vm_image {
    project      = "deeplearning-platform-release"
    image_family = "tf-latest-gpu"
  }

  install_gpu_driver = true

  labels = {
    team        = "ml-research"
    environment = "production"
    managed_by  = "ops0"
  }
}

resource "google_storage_bucket" "ml_data" {
  name          = "acme-ml-training-data"
  location      = "US"
  storage_class = "STANDARD"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    action {
      type          = "SetStorageClass"
      storage_class = "NEARLINE"
    }
    condition {
      age = 90
    }
  }
}

Step 3: Apply Consistent Policies Across Clouds

One of the most powerful benefits of multi-cloud in ops0 is that the same policy engine (OPA/Rego) evaluates all deployments regardless of provider.

Cross-cloud encryption policy

package terraform

# Block any storage resource that lacks encryption — works across AWS, GCP, Azure

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket"
    not has_field(resource.change.after, "server_side_encryption_configuration")
    msg := sprintf("S3 bucket %s must have server-side encryption enabled", [resource.address])
}

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "google_storage_bucket"
    not resource.change.after.encryption
    msg := sprintf("GCS bucket %s must have a customer-managed encryption key", [resource.address])
}

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "azurerm_storage_account"
    resource.change.after.enable_https_traffic_only == false
    msg := sprintf("Azure Storage account %s must enforce HTTPS", [resource.address])
}

has_field(obj, field) {
    _ = obj[field]
}

Cross-cloud required tags policy

package terraform

required_tags := ["Environment", "Team", "ManagedBy"]

deny[msg] {
    resource := input.resource_changes[_]
    resource.change.actions[_] == "create"
    tag := required_tags[_]
    not resource.change.after.tags[tag]
    not resource.change.after.labels[tag]  # GCP resources use labels instead of tags
    msg := sprintf("%s is missing required tag: %s", [resource.address, tag])
}
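Region restrictions follow the same pattern. A hedged sketch of an allowlist policy (the allowed regions are illustrative, and not every resource type exposes a region or location attribute in the plan, so adjust the rules to the resources you actually use):

package terraform

# Illustrative allowlists — set these per organization
allowed_aws_regions := {"us-east-1", "us-west-2"}
allowed_gcp_locations := {"us-central1", "US"}

deny[msg] {
    resource := input.resource_changes[_]
    startswith(resource.type, "aws_")
    region := resource.change.after.region
    not allowed_aws_regions[region]
    msg := sprintf("%s uses disallowed region: %s", [resource.address, region])
}

deny[msg] {
    resource := input.resource_changes[_]
    startswith(resource.type, "google_")
    location := resource.change.after.location
    not allowed_gcp_locations[location]
    msg := sprintf("%s uses disallowed location: %s", [resource.address, location])
}

The same shape extends to Azure by matching the azurerm_ prefix and its location attribute.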

Step 4: Run Cross-Cloud Discovery

After connecting your providers, run a discovery scan on each to build a complete inventory:

Start a scan per provider

Go to Discovery → New Scan and run one scan per cloud integration.

Review the unified inventory

After scans complete, the Discovery view shows all resources across all clouds in a single table. Filter by provider, region, or resource type.

Find unmanaged resources

Resources not tracked in any ops0 IaC project appear as "unmanaged." These are ClickOps resources — candidates for drift remediation or import.

Import priority resources

Use Import to Project on any discovered resource to generate Terraform code and pull it under management.
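The generated code is conceptually similar to Terraform's native import block (Terraform 1.5+). A sketch assuming a discovered, unmanaged S3 bucket named acme-legacy-assets (the names are illustrative; ops0 generates the equivalent for you):

# import.tf — pull an existing, ClickOps-created bucket under management
import {
  to = aws_s3_bucket.legacy_assets
  id = "acme-legacy-assets"
}

resource "aws_s3_bucket" "legacy_assets" {
  bucket = "acme-legacy-assets"

  tags = {
    ManagedBy = "ops0"
  }
}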


Step 5: View the Cross-Cloud Resource Graph

The Resource Graph shows topology and relationships across all connected clouds:

  • Filter by provider (AWS / GCP / Azure) or view all together
  • Identify cross-cloud dependencies (e.g., an AWS Lambda calling a GCP BigQuery API)
  • See live health status overlaid on the topology
  • Click any resource to inspect its configuration and relationships

Step 6: Create Cross-Cloud Workflows

Use Workflows to coordinate deployments across providers in a single pipeline:

Example: Unified deployment workflow
─────────────────────────────────────────────────────────────
Step 1: Deploy AWS networking (IaC step → aws-production-network)
Step 2: Deploy GCP VPC peering (IaC step → gcp-ml-platform)
Step 3: Approval gate (required: infrastructure lead)
Step 4: Deploy AWS compute (IaC step → aws-production-compute)
Step 5: Deploy GCP workloads (IaC step → gcp-ml-platform)
Step 6: Notify Slack (HTTP step → #infra-deployments)
─────────────────────────────────────────────────────────────

Each step can target a different cloud integration. The approval gate in the middle ensures a human reviews the networking changes before compute is deployed on top.


Best Practices

Separate projects by cloud and environment

Each Terraform project should target exactly one cloud integration and one environment (production, staging, etc.). Mixing clouds or environments in a single project makes plans, state, and rollbacks unpredictable.

  • Use consistent tagging across all clouds: enables cost allocation, resource ownership, and policy enforcement without cloud-specific logic.
  • Mirror environments across clouds: if you have aws-production, also create gcp-production (not gcp-prod) so naming is predictable.
  • Create cloud-agnostic policies first: write policies for business rules (tagging, encryption, region restrictions) that apply regardless of provider.
  • Run discovery before the first deploy: understand what already exists before creating new resources, and avoid duplication.
  • Use Blueprints for repeatable patterns: save common multi-cloud patterns (VPC + peering, standard tags) as Blueprints to reuse across teams.
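Consistent tagging can also be enforced at the code level. A minimal sketch using a shared locals block, applied as tags on AWS and as labels on GCP (values are illustrative, and each resource lives in its own per-cloud project):

# tags.tf — one source of truth for ownership metadata
locals {
  common_tags = {
    Environment = "production"
    Team        = "platform"
    ManagedBy   = "ops0"
  }
}

# In the AWS project: resources take the map as tags
resource "aws_s3_bucket" "assets" {
  bucket = "acme-assets"
  tags   = local.common_tags
}

# In the GCP project: resources take it as labels (keys and values lowercase)
resource "google_storage_bucket" "assets" {
  name     = "acme-assets-gcp"
  location = "US"
  labels   = { for k, v in local.common_tags : lower(k) => lower(v) }
}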

Troubleshooting

  • Integration shows "Connection failed" → Re-run Test Connection in Settings. The trust policy or federated credential may have expired or been modified.
  • Discovery scan returns 0 resources on GCP → Verify the service account has the roles/viewer role on the project and that the Workload Identity pool is active.
  • Policy is only applying to one cloud → Check the Rego resource.type values; AWS, GCP, and Azure use different resource type strings (e.g., aws_instance vs google_compute_instance).
  • Cross-cloud workflow step fails → Each IaC step runs in isolation. Check that the correct integration is selected for each step in the workflow builder.

Next Steps