Set up a GCP VM build infrastructure
Currently, this feature is behind the Feature Flag CI_VM_INFRASTRUCTURE. Contact Harness Support to enable the feature.
This topic describes how to set up a CI build infrastructure in Google Cloud Platform (GCP). To do this, you will create an Ubuntu VM and then install a Harness Delegate and Drone VM Runner on it. The runner creates VMs dynamically in response to CI build requests.
This is one of several CI build infrastructure options. For example, you can also set up a Kubernetes cluster build infrastructure.
The following diagram illustrates a CI build farm. The Harness Delegate communicates directly with your Harness instance. The VM Runner maintains a pool of VMs for running builds. When the delegate receives a build request, it forwards the request to the runner, which runs the build on an available VM.

Prepare the Google Cloud VM
Follow these steps to configure the Google Cloud VM. This is the primary VM, where you will host your Harness Delegate and runner.
- Log into the Google Cloud Console and launch a VM to host your Harness Delegate and runner. If you prefer the gcloud CLI, see the example commands after this list.
  - Select a machine type with 4 vCPU and 16 GB memory or more. Harness recommends an Ubuntu LTS machine image, such as Focal (20.04) or Jammy (22.04).
  - To find images to use on Google Compute Engine, run gcloud compute images list. Valid image references follow the format projects/PROJECT/global/images/IMAGE. For example: projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20250701.
- Configure the VM to allow ingress on ports 22 and 9079.
- SSH into the VM, if you haven't done so already.
- Run gcloud auth application-default login to create an application_default_credentials.json file at /home/$(whoami)/.config/gcloud.
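If you prefer the gcloud CLI to the console, commands along these lines create a VM that meets the above requirements and open the required ports. The instance name, zone, firewall rule name, and network tag are illustrative placeholders, not values Harness requires.

# Create the VM that hosts the delegate and runner (e2-standard-4 provides 4 vCPU and 16 GB memory).
gcloud compute instances create harness-delegate-vm \
  --zone=us-central1-a \
  --machine-type=e2-standard-4 \
  --image=projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20250701 \
  --boot-disk-size=100GB \
  --tags=harness-runner

# Allow ingress on ports 22 and 9079 to instances with the harness-runner network tag.
gcloud compute firewall-rules create harness-runner-ingress \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22,tcp:9079 \
  --target-tags=harness-runner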
Configure the Drone pool on the Google Cloud VM
The pool.yml file defines the VM spec and pool size for the VM instances used to run the pipeline. A pool is a group of instantiated VMs that are immediately available to run CI pipelines. You can configure multiple pools in pool.yml, such as a Windows VM pool and a Linux VM pool.
- Create a /runner folder on your Google Cloud VM and cd into it:

  mkdir /runner
  cd /runner

- Copy your application_default_credentials.json file into the /runner folder. You created this file when you prepared the Google Cloud VM.
- In the /runner folder, create a pool.yml file.
- Modify pool.yml as described in the following example and the Pool settings reference.
Example pool.yml
version: "1"
instances:
  - name: ubuntu-gcp
    default: true
    type: google
    pool: 1
    limit: 1
    platform:
      os: linux
      arch: amd64
    spec:
      account:
        project_id: ci-play ## Your Google project ID.
        json_path: /path/to/key.json ## Path to the application_default_credentials.json file.
      image: projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20250701
      machine_type: e2-small
      zone: ## To minimize latency between delegate and build VMs, specify the same zone where your delegate VM is running.
        - us-central1-a
        - us-central1-b
        - us-central1-c
      disk:
        size: 100
        type: "pd-balanced"
      private_ip: true ## Ensures the instance is assigned only a private IP and prevents exposure to the public internet.
With private_ip: true, the runner does not create an external IP.
Pool settings reference
You can configure the following settings in your pool.yml file. You can also learn more in the Drone documentation for the Pool File and Google drivers.
User data example
Provide cloud-init data in either user_data_path or user_data if you need custom configuration. Refer to the user data examples for supported runtime environments.
Below is a sample pool.yml for GCP with user_data configuration:
version: "1"
instances:
  - name: linux-amd64
    type: google
    pool: 1
    limit: 10
    platform:
      os: linux
      arch: amd64
    spec:
      account:
        project_id: YOUR_PROJECT_ID
        json_path: PATH_TO_SERVICE_ACCOUNT_JSON
      image: IMAGE_NAME_OR_PATH
      machine_type: e2-medium
      zones:
        - YOUR_GCP_ZONE # e.g., us-central1-a
      disk:
        size: 100
      user_data: |
        #cloud-config
        {{ if and (.IsHosted) (eq .Platform.Arch "amd64") }}
        packages: []
        {{ else }}
        apt:
          sources:
            docker.list:
              source: deb [arch={{ .Platform.Arch }}] https://download.docker.com/linux/ubuntu $RELEASE stable
              keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
        packages: []
        {{ end }}
        write_files:
          - path: {{ .CaCertPath }}
            permissions: '0600'
            encoding: b64
            content: {{ .CACert | base64 }}
          - path: {{ .CertPath }}
            permissions: '0600'
            encoding: b64
            content: {{ .TLSCert | base64 }}
          - path: {{ .KeyPath }}
            permissions: '0600'
            encoding: b64
            content: {{ .TLSKey | base64 }}
        runcmd:
          - 'set -x'
          {{ if .ShouldUseGoogleDNS }}
          - |
            echo "DNS=8.8.8.8 8.8.4.4\nFallbackDNS=1.1.1.1 1.0.0.1\nDomains=~." | sudo tee -a /etc/systemd/resolved.conf
            systemctl restart systemd-resolved
          {{ end }}
          - ufw allow 9079
| Setting | Type | Example | Description |
|---|---|---|---|
| name | String | name: windows_pool | Unique identifier of the pool. You will need to specify this pool name in Harness when you set up the CI stage build infrastructure. |
| pool | Integer | pool: 1 | Warm pool size number. Denotes the number of VMs in ready state to be used by the runner. |
| limit | Integer | limit: 3 | Maximum number of VMs the runner can create at any time. pool indicates the number of warm VMs, and the runner can create more VMs on demand up to the limit. For example, assume pool: 3 and limit: 10. If the runner gets a request for 5 VMs, it immediately provisions the 3 warm VMs (from pool) and provisions 2 more, which are not warm and take time to initialize. |
| platform | Key-value pairs, strings | platform: os: linux arch: amd64 | Specify VM platform operating system (os) and architecture (arch). variant is optional. |
| spec | Key-value pairs, various | Go to Example pool.yml. | Configure settings for the build VMs. |
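As noted earlier, a single pool.yml can define multiple pools, such as a Linux pool and a Windows pool. The following sketch illustrates the shape of such a file; the pool names, machine types, and Windows image path are illustrative placeholders, and the field names follow the Example pool.yml above.

version: "1"
instances:
  - name: linux_pool
    type: google
    pool: 1
    limit: 4
    platform:
      os: linux
      arch: amd64
    spec:
      account:
        project_id: YOUR_PROJECT_ID
        json_path: /runner/application_default_credentials.json
      image: projects/ubuntu-os-cloud/global/images/ubuntu-2204-jammy-v20250701
      machine_type: e2-medium
      zone:
        - us-central1-a
      disk:
        size: 100
  - name: windows_pool
    type: google
    pool: 1
    limit: 2
    platform:
      os: windows
      arch: amd64
    spec:
      account:
        project_id: YOUR_PROJECT_ID
        json_path: /runner/application_default_credentials.json
      image: YOUR_WINDOWS_IMAGE_PATH ## Illustrative; use a Windows Server image available to your project.
      machine_type: e2-standard-4
      zone:
        - us-central1-a
      disk:
        size: 200

When you configure the build infrastructure for a stage, you reference whichever pool name that stage should use.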
Start the runner
SSH into your Google Cloud VM and run the following command to start the runner:
docker run -v /runner:/runner -p 3000:3000 drone/drone-runner-aws:latest delegate --pool /runner/pool.yml
This command mounts the /runner directory into the Docker container, giving the runner access to pool.yml and the JSON credentials it needs to authenticate with GCP. It also exposes port 3000 and passes the delegate and --pool arguments to the container.
You might need to modify the command to use sudo and specify the runner directory path, for example:
sudo docker run -v ./runner:/runner -p 3000:3000 drone/drone-runner-aws:latest delegate --pool /runner/pool.yml
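If you want the runner to keep running after you close your SSH session and to restart automatically when the VM reboots, you can also add standard Docker flags such as -d and --restart always. This is optional and not specific to the runner; the container name below is illustrative:

sudo docker run -d --restart always --name drone-runner -v /runner:/runner -p 3000:3000 drone/drone-runner-aws:latest delegate --pool /runner/pool.yml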
When a build starts, the delegate receives a request for VMs on which to run the build. The delegate forwards the request to the runner, which then allocates VMs from the warm pool (specified by pool in pool.yml) and, if necessary, spins up additional VMs (up to the limit specified in pool.yml).
The runner includes lite engine, and the lite engine process triggers VM startup through a cloud-init script. This script downloads and installs the Scoop package manager, Git, the Drone plugin, and lite engine on the build VMs. The plugin and lite engine are downloaded from GitHub releases. Scoop is downloaded from get.scoop.sh, which redirects to raw.githubusercontent.com.
Firewall restrictions can prevent the script from downloading these dependencies. Make sure your images don't have firewall or anti-malware restrictions that interfere with downloading them.
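If you suspect such restrictions, one way to check is to launch a VM from the image (or temporarily connect to a build VM) and confirm that the endpoints mentioned above are reachable. Any HTTP client works; curl is shown here:

curl -sI https://github.com
curl -sI https://raw.githubusercontent.com
curl -sI https://get.scoop.sh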
Install the delegate
Install a Harness Docker Delegate on your Google Cloud VM.
- In Harness, go to Account Settings, select Account Resources, and then select Delegates. You can also create delegates at the project scope: in your Harness project, select Project Settings, and then select Delegates.
- Select New Delegate or Install Delegate.
- Select Docker.
- Enter a Delegate Name.
- Copy the delegate install command and paste it in a text editor.
- To the first line, add --network host, and, if required, sudo. For example: sudo docker run --cpus=1 --memory=2g --network host
- SSH into your Google Cloud VM and run the delegate install command.
The delegate install command uses the default authentication token for your Harness account. If you want to use a different token, you can create a token and then specify it in the delegate install command:
- In Harness, go to Account Settings, then Account Resources, and then select Delegates.
- Select Tokens in the header, and then select New Token.
- Enter a token name and select Apply to generate a token.
- Copy the token and paste it in the value for DELEGATE_TOKEN.
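The exact install command, image tag, and environment variables come from the Harness UI when you create the delegate, and your copied command may include additional variables. The sketch below only shows where sudo, --network host, and DELEGATE_TOKEN fit; the other values are placeholders.

sudo docker run --cpus=1 --memory=2g --network host \
  -e DELEGATE_NAME=gcp-vm-delegate \
  -e ACCOUNT_ID=YOUR_ACCOUNT_ID \
  -e DELEGATE_TOKEN=YOUR_DELEGATE_TOKEN \
  -e MANAGER_HOST_AND_PORT=https://app.harness.io \
  harness/delegate:IMAGE_TAG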
For more information about delegates and delegate installation, go to Delegate installation overview.
Verify connectivity
- Verify that the delegate and runner containers are running correctly. You might need to wait a few minutes for both processes to start. You can run the following commands to check the process status:

  $ docker ps
  $ docker logs DELEGATE_CONTAINER_ID
  $ docker logs RUNNER_CONTAINER_ID

- In the Harness UI, verify that the delegate appears in the delegates list. It might take two or three minutes for the Delegates list to update. Make sure the Connectivity Status is Connected. If the Connectivity Status is Not Connected, make sure the Docker host can connect to https://app.harness.io. A quick check is shown after this list.
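To confirm that the Docker host can reach Harness, you can run a check like the following from the VM. Any HTTP client works; curl is shown here, and a 2xx or 3xx response code indicates connectivity:

curl -s -o /dev/null -w "%{http_code}\n" https://app.harness.io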
The delegate and runner are now installed, registered, and connected.
Specify build infrastructure
Configure your pipeline's Build (CI) stage to use your GCP VMs as build infrastructure.
Visual

- In Harness, go to the CI pipeline where you want to use the GCP VM build infrastructure.
- Select the Build stage, and then select the Infrastructure tab.
- Select VMs.
- Enter the Pool Name from your pool.yml.
- Save the pipeline.

YAML

    - stage:
        name: build
        identifier: build
        description: ""
        type: CI
        spec:
          cloneCodebase: true
          infrastructure:
            type: VM
            spec:
              type: Pool
              spec:
                poolName: POOL_NAME_FROM_POOL_YML
                os: Linux
          execution:
            steps:
            ...
Delegate selectors with self-managed VM build infrastructures
Currently, delegate selectors for self-managed VM build infrastructures are behind the feature flag CI_ENABLE_VM_DELEGATE_SELECTOR. Contact Harness Support to enable the feature.
Although you must install a delegate to use a self-managed VM build infrastructure, you can choose to use a different delegate for executions and cleanups in individual pipelines or stages. To do this, use pipeline-level delegate selectors or stage-level delegate selectors.
Delegate selections take precedence in the following order:
- Stage
- Pipeline
- Platform (build machine delegate)
This means that if delegate selectors are present at the pipeline and stage levels, then these selections override the platform delegate, which is the delegate that you installed on your primary VM with the runner. If a stage has a stage-level delegate selector, then it uses that delegate. Stages that don't have stage-level delegate selectors use the pipeline-level selector, if present, or the platform delegate.
For example, assume you have a pipeline with three stages called alpha, beta, and gamma. If you specify a stage-level delegate selector on alpha and you don't specify a pipeline-level delegate selector, then alpha uses the stage-level delegate, and the other stages (beta and gamma) use the platform delegate.
Early access feature: Use delegate selectors for codebase tasks
Currently, delegate selectors for CI codebase tasks are behind the feature flag CI_CODEBASE_SELECTOR. Contact Harness Support to enable the feature.
By default, delegate selectors aren't applied to delegate-related CI codebase tasks.
With this feature flag enabled, Harness uses your delegate selectors for delegate-related codebase tasks. For these tasks, pipeline-level delegate selectors take precedence over connector delegate selectors.
Mount custom certificates on Windows build VMs
This configuration applies to Harness CI VM Runners used for Windows build VMs.
You can make custom CA certificates available inside all build step containers (including the drone/git clone container) on Windows build VMs by setting the DRONE_RUNNER_VOLUMES environment variable when starting the VM runner.
Example
docker run -d \
  -v /runner:/runner \
  -p 3000:3000 \
  -e DRONE_RUNNER_VOLUMES=/custom-cert:/git/mingw64/ssl/certs \
  <your_registry_domain>/drone/drone-runner-aws:latest \
  delegate --pool /runner/pool.yml
Notes
- The certificate file inside /custom-cert must be named ca-bundle.crt.
- The drone-git container on Windows expects the certificate to be available at C:\git\mingw64\ssl\certs\ca-bundle.crt.
- The DRONE_RUNNER_VOLUMES path must use Linux-style syntax: use / as the path separator and omit the drive letter (C:), even though the build VM runs Windows.
Troubleshoot self-managed VM build infrastructure
- Optimize Windows VM runner
- Can I use the same build VM for multiple CI stages?
- Why are build VMs running when there are no active builds?
- How do I specify the disk size for a Windows instance in pool.yml?
- Clone codebase fails due to missing plugin
- Can I limit memory and CPU for Run Tests steps running on self-managed VM build infrastructure?
Go to the CI Knowledge Base for a broader list of frequently asked questions and answers.