


Disks

Compute Engine offers Persistent Disk resources as the primary storage for your virtual machine instances. Persistent disk storage is network storage that can be attached to and detached from virtual machine instances, and it performs the same function as a physical hard drive attached to your computer. There are several types of persistent disk storage to choose from, including standard persistent disks and SSD persistent disks.

If you attach a persistent disk to a virtual machine and then terminate the machine, your persistent disk data remains intact and the persistent disk can still be detached and reattached to another virtual machine. In situations where you would prefer that the persistent disk's lifespan is the same as the virtual machine to which it is attached, you can also set the disk to be deleted when the virtual machine is deleted.

The persistent disk you choose for your virtual machine depends on your scenario and use case. Each disk type has different performance capabilities and different pricing. To determine the best persistent disk type for you, see Types of persistent disks.

Persistent disks are per-zone resources.

Useful gcloud compute commands:

Managing disks

Querying disks

Moving data to/from disks


Types of persistent disks

Large sequential I/O describes I/O operations that access locations contiguously. Each I/O operation accesses a large amount of data: approximately 128KB or more.

Small random I/O describes an I/O operation that accesses 4KB to 16KB of data.

Compute Engine offers two types of persistent disk volumes: standard persistent disks and solid-state drive (SSD) persistent disks. Like HDDs, standard persistent disks are best for applications that require bulk storage or sequential I/O with large block sizes, while SSD persistent disks are ideal for high rates of random input/output operations per second (IOPS).

Choosing one or the other type of disk depends on your application and data needs. Each volume type has its own performance characteristics and its own pricing. For a detailed look at the performance characteristics of each disk type, see the Persistent disk performance section.

Persistent disk performance

Persistent disk performance depends on the size of the volume and the type of disk you select. Larger volumes can achieve higher I/O levels than smaller volumes. There are no separate I/O charges as the cost of the I/O capability is included in the price of the persistent disk.

Persistent disk performance scales with volume size: each GB of capacity adds a fixed amount of IOPS and throughput, up to the per-VM limits shown in the performance chart below.

This model is a more granular version of what you would see with a RAID set. The more disks in a RAID set, the more I/O the disks can perform and the finer a RAID set is carved up, the less I/O there is per partition. However, instead of growing volumes in increments of entire disks, persistent disk gives Compute Engine customers granularity at the gigabyte (GB) level for their volumes.

This persistent disk pricing and performance model provides three main benefits:

Note: Disk striping is a technique that stores logically sequential data across multiple drives or storage devices so that multiple disks can be accessed in parallel, allowing for higher throughput.
Operational simplicity
In the previous persistent disk model (before Compute Engine's general availability announcement in December 2013), you needed to create multiple small volumes and then stripe them together in order to increase the I/O to a virtual machine. This created unnecessary complexity when creating persistent disk volumes and throughout the volume’s lifetime because it required complicated management of snapshots. Under the covers, persistent disk stripes data across a very large number of physical drives, making it redundant for users to also stripe data across separate disk volumes. In this model, a single 1TB volume performs the same as 10 x 100GB volumes striped together.
Predictable pricing
Volumes are priced only on a per GB basis. This price pays for both the volume’s space and all the I/O that the volume is capable of. Customers’ bills do not vary with usage of the volume.
Predictable performance
This model allows more predictable performance than other possible models for HDD and SSD-based storage, while still keeping the price very low.

Volume I/O limits distinguish between read and write I/O and between IOPS and bandwidth. These limits are described in the following performance chart.

Standard persistent disks SSD persistent disks
Price (USD/GB per month) $0.04 $0.325
Maximum Sustained IOPS
Read IOPS/GB 0.3 30
Write IOPS/GB 1.5 30
Read IOPS/volume per VM 3,000 10,000
Write IOPS/volume per VM 15,000 15,000
Maximum Sustained Throughput
Read throughput/GB (MB/s) 0.12 0.48
Write throughput/GB (MB/s) 0.09 0.48
Read throughput/volume per VM (MB/s) 180 240
Write throughput/volume per VM (MB/s) 120 240

To illustrate this chart and the difference between standard persistent disks and SSD persistent disks, consider what $1.00/month buys with each disk type, as worked out in the sketch below:
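
As a rough illustration, the following Python sketch (not part of Compute Engine; the per-GB figures are simply copied from the chart above) works out what $1.00/month buys with each disk type:

# Per-GB prices and rates from the performance chart above.
disk_types = {
    'standard': {'price': 0.04,  'read_iops': 0.3, 'write_iops': 1.5,
                 'read_mbps': 0.12, 'write_mbps': 0.09},
    'ssd':      {'price': 0.325, 'read_iops': 30,  'write_iops': 30,
                 'read_mbps': 0.48, 'write_mbps': 0.48},
}

for name in ['standard', 'ssd']:
  d = disk_types[name]
  gb = 1.00 / d['price']  # GB of capacity that $1.00/month buys
  print '%s: %.1f GB, %.1f read IOPS, %.1f write IOPS, %.2f MB/s read, %.2f MB/s write' % (
      name, gb, gb * d['read_iops'], gb * d['write_iops'],
      gb * d['read_mbps'], gb * d['write_mbps'])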

Compared to standard persistent disks, SSD persistent disks are more expensive per GB and per MB/s of throughput, but are far less expensive per IOPS. So, it is best to use standard persistent disks where the limiting factor is space or streaming throughput, and it is best to use SSD persistent disks where the limiting factor is random IOPS.

Standard persistent disk performance

When considering a standard persistent disk volume for your instance, keep in mind the following information:

As an example of how you can use the performance chart to determine the disk volume you want, consider that a 500GB standard persistent disk gives you:

    (0.3 × 500) = 150 small random read IOPS
    (1.5 × 500) = 750 small random write IOPS
    (0.12 × 500) = 60 MB/s of large sequential read throughput
    (0.09 × 500) = 45 MB/s of large sequential write throughput

SSD persistent disk performance

When you use SSD persistent disks, keep in mind the following information:

To experience the performance numbers listed in the chart, you should optimize your application and virtual machine:

Determining the size of your persistent disk

To determine what size of volume is required to have the same optimal performance as a typical 7200 RPM SATA drive, you must first identify the I/O pattern of the volume. The chart below describes some I/O patterns and what size of each persistent disk type you would need to create for that I/O pattern.

IO pattern Volume size of standard persistent disk (GB) Volume size of SSD persistent disk (GB)
Small random reads 250 3
Small random writes 50 3
Streaming large reads 1000 250
Streaming large writes 1333 250
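
The sizes in this chart follow from the per-GB rates in the performance chart, assuming that a typical 7200 RPM SATA drive delivers roughly 75 random IOPS or 120 MB/s of streaming throughput (that baseline is inferred from the chart values, not stated explicitly). A small Python sketch of the arithmetic:

# Per-GB rates from the performance chart.
rates = {
    'standard': {'read_iops': 0.3, 'write_iops': 1.5, 'read_mbps': 0.12, 'write_mbps': 0.09},
    'ssd':      {'read_iops': 30,  'write_iops': 30,  'read_mbps': 0.48, 'write_mbps': 0.48},
}

# Assumed 7200 RPM SATA baseline: ~75 random IOPS, ~120 MB/s streaming throughput.
TARGET_IOPS = 75.0
TARGET_MBPS = 120.0

for name in ['standard', 'ssd']:
  r = rates[name]
  print name
  print '  small random reads:     %7.1f GB' % (TARGET_IOPS / r['read_iops'])
  print '  small random writes:    %7.1f GB' % (TARGET_IOPS / r['write_iops'])
  print '  streaming large reads:  %7.1f GB' % (TARGET_MBPS / r['read_mbps'])
  print '  streaming large writes: %7.1f GB' % (TARGET_MBPS / r['write_mbps'])
  # The chart above rounds these results to convenient whole-GB volume sizes.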

Network egress caps

Persistent disk write operations contribute to network egress traffic and count towards the virtual machine's network egress cap. To calculate the maximum persistent disk write traffic that a virtual machine can issue, subtract the virtual machine's other network egress traffic from the 2 Gbit/s-per-core network cap; the remainder is available for persistent disk writes. Because the data is written redundantly, each persistent disk write operation counts as 3.3 times the amount of data written against this cap.

Assuming no IP traffic from a virtual machine and a large enough persistent disk volume, the following chart shows the resulting persistent disk write caps per virtual machine, based on the network egress caps for the virtual machine. Additional virtual machine IP traffic lowers these numbers. For example, if a virtual machine sustains 80 Mbit/s of IP traffic, that is 10 MB/s of egress bandwidth, which reduces the persistent disk write limits in this chart by 10 / 3.3 ≈ 3 MB/s.

Standard persistent disk SSD persistent disks
Number of cores Standard persistent disk write limit (MB/s) Standard volume size needed to reach limit (GB) SSD persistent disk write limit (MB/s) SSD volume size needed to reach limit (GB)
1 76 842 76 158
2 120 1333 152 316
4 120 1333 240 500
8 120 1333 240 500
16 120 1333 240 500

To derive these numbers, recall that, for redundancy reasons, every persistent disk write operation is multiplied by 3.3. For example, for a virtual machine with 1 core, the network egress cap is 2 Gbit/s, which is equivalent to 250 MB/s:

Maximum write throughput for 1 core = 250 / 3.3 ≈ 76 MB/s of writes issued to your standard persistent disk

Considering that write throughput/GB is 0.09 MB/sec for standard persistent disks:

Desired disk size = 76 / 0.09 = ~842 GB
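
The remaining rows of the network egress chart can be derived the same way. The following Python sketch uses only figures already given above (2 Gbit/s, or about 250 MB/s, of egress per core; the 3.3 redundancy factor; the per-VM write throughput caps; and the per-GB write rates) to regenerate the write limits and volume sizes:

# Egress available per core and the redundancy multiplier described above.
EGRESS_MBPS_PER_CORE = 250.0
REDUNDANCY_FACTOR = 3.3

# Per-VM write throughput caps and per-GB write rates from the performance chart.
WRITE_CAP = {'standard': 120.0, 'ssd': 240.0}        # MB/s per volume per VM
WRITE_MBPS_PER_GB = {'standard': 0.09, 'ssd': 0.48}  # MB/s per GB

for cores in [1, 2, 4, 8, 16]:
  for kind in ['standard', 'ssd']:
    limit = min(cores * EGRESS_MBPS_PER_CORE / REDUNDANCY_FACTOR, WRITE_CAP[kind])
    size = limit / WRITE_MBPS_PER_GB[kind]  # volume size needed to reach the limit
    print '%2d cores, %-8s: write limit %3.0f MB/s, volume size %4.0f GB' % (
        cores, kind, limit, size)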

Disk encryption

All data written to disk in Compute Engine is encrypted on the fly and then transmitted and stored in encrypted form. Compute Engine has completed ISO 27001, SSAE-16, SOC 1, SOC 2, and SOC 3 certifications, demonstrating our commitment to information security.

Data integrity

Google Compute Engine uses redundant, industry-standard mechanisms to protect persistent disk users from data corruption and from sophisticated attacks against data integrity.

Disk interface

By default, Google Compute Engine uses SCSI for attaching persistent disks. Images provided on or after 20121106 will have virtio SCSI enabled by default. Images using Google-provided kernels older than 20121106 only support a virtio block interface. If you are currently using images that have a block interface, you should consider switching to a newer image that uses SCSI.

If you are using the latest Google images, they should already be set to use SCSI.

Creating a new persistent disk

Before setting up a persistent disk, keep in mind the following restrictions:

Every region has a quota of the total persistent disk space that you can request. Call gcloud compute regions describe to see your quotas for that region:

$ gcloud compute regions describe us-central1
creationTimestamp: '2013-09-06T17:54:12.193-07:00'
description: us-central1
id: '5778272079688511892'
kind: compute#region
name: us-central1
quotas:
- limit: 24.0
  metric: CPUS
  usage: 5.0
- limit: 5120.0
  metric: DISKS_TOTAL_GB
  usage: 650.0
- limit: 7.0
  metric: STATIC_ADDRESSES
  usage: 4.0
- limit: 23.0
  metric: IN_USE_ADDRESSES
  usage: 5.0
- limit: 1024.0
  metric: SSD_TOTAL_GB
  usage: 0.0
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/regions/us-central1
status: UP
zones:
- https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a
- https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-b
- https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-f

Create a disk

To create a new persistent disk in your project, call gcloud compute disks create with the following syntax:

$ gcloud compute disks create DISK

When you run the disks create command above, Google Compute Engine prompts you for a zone in which the persistent disk should live unless you set a default zone or pass one using the --zone flag. If you plan to attach this persistent disk to an instance, the persistent disk must be in the same zone as the instance that uses it.

You can check on the status of the disk creation process by running gcloud compute disks describe . Your disk can have one of the following statuses:

After the disk status is READY , you can use your new persistent disk by attaching it to your instance as described in Attaching a persistent disk to an Instance .
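
If you are working through the API client library rather than gcloud compute , a similar status check might look like the following sketch. It assumes the same PROJECT and ZONE placeholders as the other client library examples in this document, plus a DISK_NAME placeholder for the new disk:

import time

def waitForDiskReady(gce_service, auth_http):
  # Poll the disk resource until it leaves the CREATING state.
  while True:
    request = gce_service.disks().get(project=PROJECT, zone=ZONE, disk=DISK_NAME)
    response = request.execute(auth_http)
    if response['status'] != 'CREATING':
      break
    time.sleep(2)

  print response['status']  # READY once the disk can be attached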

Attaching a persistent disk to an instance

After you have created your persistent disk, you must attach it to your instance to use it. You can attach a persistent disk in two ways:

To use a persistent disk with an instance, the persistent disk must live in the same zone as your desired instance. For example, if you want to create an instance in zone us-central1-a and you want to attach a persistent disk to the instance, the persistent disk must also reside in us-central1-a .

Persistent disk size limits

Before you attach a persistent disk to an instance, note that persistent disks are subject to certain size and quantity restrictions. Standard, high memory, and high CPU machine types can attach up to 16 persistent disks. Shared-core machine types can attach up to 4 persistent disks.

Additionally, machine types have a restriction on the total maximum amount of persistent disk space that can be mounted at a given time. If you reach the total maximum size for that instance, you won't be able to attach more persistent disks until you unmount some persistent disks from your instance. By default, you can mount up to 10TB of persistent disk space for standard, high memory, and high CPU machine types, or you can mount up to 3TB for shared-core machine types.

For example, if you are using an n1-standard-1 machine type, you can choose to attach up to 16 persistent disks whose combined size is equal to or less than 10TB or you can attach one 10TB disk. Once you have reached that 10TB limit, you cannot mount additional persistent disks until you unmount some space.

To find out an instance's machine type, run gcloud compute instances describe INSTANCE .

Attaching a Disk During Instance Creation

To attach a persistent disk to an instance during instance creation, follow the instructions below. Note that if you are attaching a root persistent disk that is larger than the original source (such as the image or snapshot), you need to repartition the persistent disk before you can use the extra space.

If you attach a data persistent disk that was originally created using a snapshot, and you created the data disk to be larger than the original size of the snapshot, you will need to resize the filesystem to the full size of the disk. For more information, see Restoring a snapshot to a larger size .

  1. Create the persistent disk by calling gcloud compute disks create DISK --disk-type TYPE .

  2. Create the instance where you would like to attach the disk, and assign the disk using the --disk flag.

    Here is the abbreviated syntax to attach a persistent disk to an instance:

    $ gcloud compute instances create INSTANCE \
        --disk name=DISK [mode={ro,rw}] [boot={yes,no}] [device-name=DEVICE_NAME] \
                         [auto-delete={yes,no}]
    

    To attach multiple disks to an instance, you can specify multiple --disk flags. For instance:

    $ gcloud compute instances create INSTANCE \
             --disk name=disk-1 \
             --disk name=disk-2
    
  3. ssh into your instance .

    You can do this using gcloud compute ssh INSTANCE .

  4. Create your disk mount point, if it does not already exist.

    For example, if you want to mount your disk at /mnt/pd0 , create that directory:

    me@my-instance:~$ sudo mkdir -p /mnt/pd0
    
  5. Determine the /dev/* location of your persistent disk by running:

    me@my-instance:~$ ls -l /dev/disk/by-id/google-*
    lrwxrwxrwx 1 root root  9 Nov 19 20:49 /dev/disk/by-id/google-mypd -> ../../sda
    lrwxrwxrwx 1 root root  9 Nov 19 21:22 /dev/disk/by-id/google-pd0 -> ../../sdb # pd0 is attached as /dev/sdb
    
  6. Format your persistent disk :

    me@my-instance:~$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/DISK_ALIAS_OR_NAME MOUNT_POINT
    

    Specify the local disk alias if you assigned one, or the device path that the disk's resource name maps to if you haven't. In this example, the disk appears as /dev/sdb :

    me@my-instance:~$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /mnt/pd0
    

    In this example, you are mounting your persistent disk /dev/sdb at /mnt/pd0 , but you can choose to mount your persistent disk anywhere, for example at /home . If you have multiple disks, you can specify a different mount point for each disk.

That's it! You have mounted your persistent disk and can start using it immediately. To demonstrate this process from start to finish, the following example attaches a previously created persistent disk named pd1 to an instance named example-instance, formats it, and mounts it:

  1. Create an instance and attach the pd1 persistent disk.

    $ gcloud compute instances create example-instance --disk name=pd1
    For the following instances:
     - [example-instance]
    choose a zone:
     [1] asia-east1-a
     [2] asia-east1-b
     [3] europe-west1-a
     [4] europe-west1-b
     [5] us-central1-a
     [6] us-central1-b
    Please enter your numeric choice:  5
    
    Created [https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instances/example-instance].
    NAME         ZONE          MACHINE_TYPE  INTERNAL_IP    EXTERNAL_IP    STATUS
    example-instance us-central1-a n1-standard-1 10.240.113.180 23.251.146.137 RUNNING
    
  2. ssh into the instance.

    $ gcloud compute ssh example-instance
    For the following instances:
     - [example-instance]
    choose a zone:
     [1] asia-east1-a
     [2] asia-east1-b
     [3] europe-west1-a
     [4] europe-west1-b
     [5] us-central1-a
     [6] us-central1-b
    Please enter your numeric choice:  5
    ...
    The programs included with this system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.
    
    This software comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.
    
  3. Change to root:

    user@example-instance:~$ sudo -s
    
  4. Create a directory to mount the new persistent disk.

    root@example-instance:~# mkdir /mnt/pd0
    
  5. Determine where pd1 is attached by getting a list of the persistent disks available on the instance.

    root@example-instance:~# ls -l /dev/disk/by-id/google-*
    lrwxrwxrwx 1 root root  9 Nov 19 20:49 /dev/disk/by-id/google-mypd -> ../../sda
    lrwxrwxrwx 1 root root  9 Nov 19 21:22 /dev/disk/by-id/google-pd1 -> ../../sdb # pd1 is attached as /dev/sdb
    
  6. Run the safe_format_and_mount tool.

    root@example-instance:~# /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/sdb /mnt/pd0
    mke2fs 1.41.11 (14-Mar-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=8 blocks, Stripe width=0 blocks
    655360 inodes, 2621440 blocks
    131072 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2684354560
    80 block groups
    ...
    
  7. Give all users write access to the drive.

    root@example-instance:~# chmod a+w /mnt/pd0
    
  8. Create a new file called hello.txt on the newly mounted persistent disk.

    root@example-instance:~# echo 'Hello, World!' > /mnt/pd0/hello.txt
    
  9. Print out the contents of the file to demonstrate that the new file is accessible and lives on the persistent disk.

    root@example-instance:~# cat /mnt/pd0/hello.txt
    Hello, World!
    

Attaching a disk to a running instance

You can attach an existing persistent disk to a running instance using the instances attach-disk command in gcloud compute or attachDisk in the API. Persistent disks can be attached to multiple instances at the same time in read-only mode (with the exception of root persistent disks, which should only be attached to one instance at a time). If you have already attached a disk to an instance in read-write mode, that disk cannot be attached to any other instance. You also cannot attach the same disk to the same instance multiple times, even in read-only mode.

To attach a persistent disk to an existing instance in gcloud compute , run:

$ gcloud compute instances attach-disk INSTANCE --zone ZONE --disk DISK

To attach a persistent disk to a running instance through the API, perform an HTTP POST request to the following URI:

https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance>/attachDisk

Your request body must contain the following:

bodyContent = {
    'type': 'persistent',
    'mode': '<mode>',
    'source': 'https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks/<disk>'
  }
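
If you prefer the Python client library used elsewhere in this document, the same request might look like the following sketch. It reuses the bodyContent dictionary above (with <mode> and the source disk URL filled in), and PROJECT , ZONE , and INSTANCE are placeholders as in the other client library examples:

def attachDataDisk(gce_service, auth_http):
  # bodyContent is the request body shown above, with the mode and
  # source disk URL filled in for your disk.
  request = gce_service.instances().attachDisk(
      project=PROJECT, zone=ZONE, instance=INSTANCE, body=bodyContent)
  response = request.execute(auth_http)

  print response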

For more information, see the attachDisk reference documentation.

Detaching a persistent disk

You can detach a persistent disk from a running instance by using the gcloud compute instances detach-disk command.

To detach a disk using gcloud compute , run:

$ gcloud compute instances detach-disk INSTANCE --device-name DEVICE_NAME --zone ZONE

Or:

$ gcloud compute instances detach-disk INSTANCE --disk DISK --zone ZONE

To detach a disk in the API, perform an empty HTTP POST request to the following URL:

https://www.googleapis.com/compute/v1/projects/PROJECT/zones/ZONE/instances/INSTANCE/detachDisk?deviceName=DEVICE_NAME
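
With the Python client library, a detach call might look like the following sketch. DEVICE_NAME is the device name the disk was attached under (matching the --device-name flag above), and PROJECT , ZONE , and INSTANCE are placeholders as in the other client library examples:

def detachDataDisk(gce_service, auth_http):
  # deviceName is the name the disk was attached with, which is not
  # necessarily the same as the disk resource name.
  request = gce_service.instances().detachDisk(
      project=PROJECT, zone=ZONE, instance=INSTANCE, deviceName=DEVICE_NAME)
  response = request.execute(auth_http)

  print response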

For more information, see the detachDisk reference documentation.

Root persistent disk

Each instance has an associated root persistent disk where the root filesystem for the instance is stored. It is possible to create and attach a root persistent disk to an instance during instance creation or by creating the disk separately and attaching it to a new instance. All persistent disk features and limitations apply to root persistent disks.

Create a root persistent disk during instance creation

When you start an instance without specifying a --disk flag, gcloud compute automatically creates a root persistent disk for you using the image that you provided in your request (or the default image). The new root persistent disk is named after the instance by default. For example, if you create an instance using the following command:

user@local~:$ gcloud compute instances create example-instance --image=debian-7

gcloud compute automatically creates a standard persistent boot disk using the latest Debian 7 image, with the name example-instance , and boots your instance off the new persistent disk. By default, the boot disk is also deleted if the instance is deleted. To disable this behavior, you can pass in --no-boot-disk-auto-delete when creating your instance. You can also change the auto-delete state later on.

You can also create multiple instances and root persistent disks by providing more than one instance name:

$ gcloud compute instances create INSTANCE [INSTANCE ...]

Create a stand-alone root persistent disk

You can create a stand-alone root persistent disk outside of instance creation and attach it to an instance afterwards. In gcloud compute , this is possible using the standard gcloud compute disks create command. You can create a root persistent disk from an image or a snapshot using the --image or --source-snapshot flags.

In the API, create a new persistent disk with the sourceImage query parameter in the following URI:

https://www.googleapis.com/compute/v1/projects/PROJECT/zones/ZONE/disks?sourceImage=SOURCE_IMAGE
sourceImage=SOURCE_IMAGE
[ Required ] The URL-encoded, fully-qualified URI of the source image to apply to this persistent disk.
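
With the Python client library, creating a root persistent disk from an image might look like the following sketch. Here sourceImage corresponds to the query parameter described above, and SOURCE_IMAGE and DISK_NAME are placeholders:

def createRootDisk(gce_service, auth_http):
  # sourceImage is the URL-encoded, fully-qualified URI of the source image.
  request = gce_service.disks().insert(
      project=PROJECT, zone=ZONE, sourceImage=SOURCE_IMAGE,
      body={'name': DISK_NAME})
  response = request.execute(auth_http)

  print response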

Using an existing root persistent disk

To start an instance with an existing root persistent disk in gcloud compute , provide the boot parameter when you attach the disk . When you create a root persistent disk using a Google-provided image, you must attach it to your instance in read-write mode. If you try to attach it in read-only mode, your instance may be created successfully, but it won't boot up correctly.

In the API, insert an instance with a populated boot field:

{
  ...
  'disks': [{
    'deviceName': '<disk-name>',
    'source': '<disk-uri>',
    'diskType': '<disk-type-uri>',
    'boot': 'true',
    ...
  }],
  ...
}

When you are using the API to specify a root persistent disk:

Repartitioning a root persistent disk

By default, when you create a root persistent disk with a source image or a source snapshot, your disk is automatically partitioned with enough space for the root filesystem. It is possible to create a root persistent disk with more disk space using the sizeGb field but the additional persistent disk space won't be recognized until you repartition your persistent disk. Follow these instructions to repartition a root persistent disk with additional disk space, using fdisk and resize2fs :

  1. If you haven't already, create your root persistent disk:

    user@local:~$ gcloud compute disks create DISK --image=IMAGE --size=50GB
    
  2. Start an instance using the root persistent disk.

    user@local:~$ gcloud compute instances create INSTANCE --disk name=DISK boot=yes
    
  3. Check the size of your disk.

    Although you specified a size larger than 10GB for your persistent disk, notice that only 10GB of root filesystem space appears:

    user@mytestinstance:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           10G  641M  8.9G   7% /
    /dev/root        10G  641M  8.9G   7% /
    none            1.9G     0  1.9G   0% /dev
    tmpfs           377M  116K  377M   1% /run
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           753M     0  753M   0% /run/shm
    
  4. Run fdisk .

    user@mytestinstance:~$ sudo fdisk /dev/sda
    
    Command (m for help): c
    DOS Compatibility flag is not set
    
    Command (m for help): u
    Changing display/entry units to sectors
    

    When prompted, enter p to print the current state of /dev/sda, which displays the actual size of your root persistent disk. For example, this root persistent disk has ~50GB of space:

    The device presents a logical sector size that is smaller than
    the physical sector size. Aligning to a physical sector (or optimal
    I/O) size boundary is recommended, or performance may be impacted.
    
    Command (m for help): p
    
    Disk /dev/sda: 53.7 GB, 53687091200 bytes
    4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x000d975a
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1            2048    20971519    10484736   83  Linux
    

    Make note of this device ID number for future steps . In this example, the device ID is 83 .

  5. Next, enter d at the prompt to delete the existing partition on /dev/sda so that you can recreate it at the full size of the disk. This won't delete any files on the disk.

    Command (m for help): d
    Selected partition 1
    

    Enter p at the prompt to review and confirm that the original partition has been deleted (notice the empty line after Device Boot where the partition used to be):

    Command (m for help): p
    
    Disk /dev/sda: 53.7 GB, 53687091200 bytes
    4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x000d975a
    
       Device Boot      Start         End      Blocks   Id  System
    
  6. Next, type n at the prompt to create a new partition. Select the default values for partition type, number, and the first sector when prompted:

    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): p
    Partition number (1-4, default 1): 1
    First sector (2048-104857599, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599):
    Using default value 104857599
    

    Confirm that your partition was created:

    Command (m for help): p
    
    Disk /dev/sda: 53.7 GB, 53687091200 bytes
    4 heads, 32 sectors/track, 819200 cylinders, total 104857600 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x000d975a
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1            2048   104857599    52427776   83  Linux
    
  7. Check that the partition's device ID is the same ID number that you made note of in step 4. In this example, it matches the original value of 83.

  8. Commit your changes by entering w at the prompt:

    Command (m for help): w
    The partition table has been altered!
    
    Calling ioctl() to re-read partition table.
    
    WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
    The kernel still uses the old table. The new table will be used at
    the next reboot or after you run partprobe(8) or kpartx(8)
    Syncing disks.
    
  9. Reboot your instance. This closes your current SSH connection. Wait a couple of minutes before reconnecting over ssh.

    user@mytestinstance:~$ sudo reboot
    
  10. SSH into your instance.

    user@local:~$ gcloud compute ssh INSTANCE
    
  11. Resize your filesystem to the full size of the partition:

    user@mytestinstance:~$ sudo resize2fs /dev/sda1
    resize2fs 1.42.5 (29-Jul-2012)
    Filesystem at /dev/sda1 is mounted on /; on-line resizing required
    old_desc_blocks = 1, new_desc_blocks = 4
    The filesystem on /dev/sda1 is now 13106944 blocks long.
    
  12. Verify that your filesystem is now the correct size.

    user@mytestinstance:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    rootfs           50G  1.3G   47G   3% /
    /dev/root        50G  1.3G   47G   3% /
    none            1.9G     0  1.9G   0% /dev
    tmpfs           377M  116K  377M   1% /run
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           753M     0  753M   0% /run/shm
    

Persistent disk snapshots

Google Compute Engine offers the ability to take snapshots of your persistent disks and create new persistent disks from those snapshots. This can be useful for backing up data, recreating a persistent disk that might have been lost, or copying a persistent disk. You can create a snapshot of any persistent disk type and apply it to any other persistent disk type. For example, you can create a snapshot of an SSD persistent disk and use it to create a new standard persistent disk, and vice versa.

Google Compute Engine provides differential snapshots, which allow for better performance and lower storage charges for users. Differential snapshots work in the following manner:

  1. The first successful snapshot of a persistent disk is a full snapshot that contains all of the data on the disk.
  2. The next snapshot contains only data that is new or has changed since the previous snapshot; for unchanged data, it stores references to the earlier snapshots instead of another copy.

This repeats for all subsequent snapshots of the persistent disk.

The diagram below attempts to illustrate this process:

Diagram describing how to create a snapshot

Snapshots are a global resource. Because they are geo-replicated, they will survive maintenance windows. It is not possible to share a snapshot across projects. You can see a list of snapshots available to a project by running:

$ gcloud compute snapshots list

To list information about a particular snapshot:

$ gcloud compute snapshots describe SNAPSHOT_NAME

Creating a snapshot

Before you create a persistent disk snapshot, make sure that the snapshot will be consistent with the desired state of your persistent disk. If you take a snapshot of your persistent disk in an "unclean" state, it may force a disk check and possibly lead to data loss. To avoid this, make sure that your disk buffers are flushed before you take your snapshot. For example, if your operating system is writing data to the persistent disk, the disk buffers might not yet be flushed. Follow these instructions to clear your disk buffers:

Linux


  1. ssh into your instance.
  2. Flush the disk buffers to disk by running sudo sync .

Windows


  1. Log onto your Windows instance.
  2. Run the following command in a cmd window:

    gcesysprep
    

After you run the gcesysprep command, your Windows instance will terminate. Afterwards, you can take a snapshot of the root persistent disk.

Create your snapshot using the gcloud compute disks snapshot command:

$ gcloud compute disks snapshot DISK

gcloud compute waits until the operation returns a status of READY or FAILED , or reaches the maximum timeout and returns the last known details of the snapshot.
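
If you are driving snapshots through the API client library instead, the disks().createSnapshot method performs the same operation. A minimal sketch, with DISK_NAME and SNAPSHOT_NAME as placeholders:

def createDiskSnapshot(gce_service, auth_http):
  # Snapshot the disk; the response is an Operation resource that you can poll.
  request = gce_service.disks().createSnapshot(
      project=PROJECT, zone=ZONE, disk=DISK_NAME,
      body={'name': SNAPSHOT_NAME})
  response = request.execute(auth_http)

  print response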

Creating a new persistent disk from a snapshot

After creating a persistent disk snapshot, you can apply data from that snapshot to new persistent disks. It is only possible to apply data from a snapshot when you first create a persistent disk. You cannot apply a snapshot to an existing persistent disk, or apply a snapshot to persistent disks that belong to a different project than that snapshot.

To apply data from a persistent disk snapshot, run the gcloud compute disks create command with the --source-snapshot flag:

$ gcloud compute disks create DISK --source-snapshot SNAPSHOT_NAME

Restoring a snapshot to a larger size

You can restore a non-root persistent disk snapshot to a larger size than the original snapshot but you must run some extra commands from within the instance for the additional space to be recognized by the instance. For example, if your original snapshot is 500GB, you can choose to restore it to a persistent disk that is 600GB or more. However, the extra 100GB won't be recognized by the instance until you mount and resize the filesystem.

The instructions that follow discuss how to mount and resize your persistent disk using resize2fs as an example. Depending on your operating system and filesystem type, you might need to use a different filesystem resizing tool. Please refer to your operating system documentation for more information.

  1. Create a new persistent disk from your non-root snapshot that is larger than the snapshot size.

    Provide the --size flag to specify a larger persistent disk size. For example:

    me@local~:$ gcloud compute disks create newdiskname \
                       --source-snapshot=my-data-disk-snapshot --size=600GB
    
  2. Attach your persistent disk to an instance.

    me@local~:$ gcloud compute instances attach-disk example-instance --disk=newdiskname
    
  3. ssh into your instance.

    me@local~:$ gcloud compute ssh example-instance
    
  4. Determine the /dev/* location of your persistent disk by running:

    me@example-instance:~$ ls -l /dev/disk/by-id/google-*
    lrwxrwxrwx 1 root root  9 Nov 19 20:49 /dev/disk/by-id/google-mypd -> ../../sda
    lrwxrwxrwx 1 root root  9 Nov 19 21:22 /dev/disk/by-id/google-newdiskname -> ../../sdb # newdiskname is located at /dev/sdb
    
  5. Mount your new persistent disk.

    Create a new mount point. For example, you can create a mount point called /mnt/pd1 .

    user@example-instance:~$ sudo mkdir /mnt/pd1
    

    Mount your persistent disk:

    me@example-instance:~$ sudo mount /dev/sdb /mnt/pd1
    
  6. Resize your persistent disk using resize2fs .

    me@example-instance:~$ sudo resize2fs /dev/sdb
    
  7. Check that your persistent disk reflects the new size.

    me@example-instance:~$ df -h
    Filesystem                                              Size  Used Avail Use% Mounted on
    rootfs                                                  296G  671M  280G   1% /
    udev                                                     10M     0   10M   0% /dev
    tmpfs                                                   3.0G  112K  3.0G   1% /run
    /dev/disk/by-uuid/36fd30d4-ea87-419f-a6a4-a1a3cf290ff1  296G  671M  280G   1% /
    tmpfs                                                   5.0M     0  5.0M   0% /run/lock
    /dev/sdb                                                593G  198M  467G   1% /mnt/pd1 # The persistent disk is now ~600GB
    

Restoring a snapshot to a different type

If you would like to change a volume from standard to SSD, snapshot the volume and, when you restore from the snapshot, specify the SSD disk type with the --disk-type flag on the disks create command. This method also works in reverse to change an SSD volume to a standard volume, as in the following example:

$ gcloud compute disks create newdiskname --source-snapshot=my-data-disk-snapshot --disk-type=pd-standard

Deleting a snapshot

Google Compute Engine provides differential snapshots so that each snapshot only contains data that has changed since the previous snapshot. For unchanged data, snapshots use references to the data in previous snapshots. When you delete a snapshot, Google Compute Engine goes through the following procedures:

  1. The snapshot is immediately marked as DELETED in the system.
  2. If the snapshot has no dependent snapshots, it is deleted outright.
  3. If the snapshot has dependent snapshots:

    1. Any data that is required for restoring other snapshots will be moved into the next snapshot. The size of the next snapshot will increase.
    2. Any data that is not required for restoring other snapshots will be deleted. This lowers the total size of all your snapshots.
    3. The next snapshot will no longer reference the snapshot marked for deletion but will instead reference the existing snapshot before it.

The diagram below attempts to illustrate this process:

Diagram describing the process for deleting a snapshot

To delete a snapshot, run:

$ gcloud compute snapshots delete SNAPSHOT_NAME

Attaching multiple persistent disks to one instance

To attach more than one disk to an instance, run gcloud compute instances attach-disk once for each disk to attach or repeat the --disk flag in your gcloud compute instances create invocation.

Attaching a persistent disk to multiple Instances

It is possible to attach a persistent disk to more than one instance. However, if you attach a persistent disk to multiple instances, all instances must attach the persistent disk in read-only mode. It is not possible to attach the persistent disk to multiple instances in read-write mode.

If you attach a persistent disk in read-write mode and then try to attach the disk to subsequent instances, Google Compute Engine returns an error similar to the following:

The disk resource 'DISK' is already being used in read-write
mode

To attach a persistent disk to an instance in read-only mode, review instructions for attaching a persistent disk and set the mode to ro .

Getting persistent disk information

To see a list of persistent disks in the project:

$ gcloud compute disks list

By default, gcloud compute provides an aggregate listing of all your resources across all available zones. If you want a list of resources from select zones, provide the --zones flag in your request.

$ gcloud compute disks list --zones ZONE [ZONE ...]

In the API, you need to make requests to two different methods to get a list of aggregate resources or a list of resources within a zone. To make a request for an aggregate list, make an HTTP GET request to that resource's aggregatedList URI:

https://www.googleapis.com/compute/v1/projects/PROJECT/aggregated/disks

In the client libraries, make a request to the disks().aggregatedList function:

def listAllDisks(auth_http, gce_service):
  request = gce_service.disks().aggregatedList(project=PROJECT)
  response = request.execute(auth_http)

  print response

To make a request for a list of disks within a zone, make a GET request to the following URI:

https://www.googleapis.com/compute/v1/projects/PROJECT/zones/ZONE/disks

In the API client libraries, make a disks().list request:

def listDisks(auth_http, gce_service):
  request = gce_service.disks().list(project=PROJECT,
    zone='ZONE')
  response = request.execute(auth_http)

  print response

Migrating a persistent disk to a different instance in the same zone

To migrate a persistent disk from one instance to another, you can detach the persistent disk from one instance and reattach it to another instance (either a running instance or a new instance). If you merely want to migrate the data, you can take a persistent disk snapshot and apply it to a new disk.

Persistent disks retain all their information indefinitely until they are deleted, even if they are not attached to a running instance.

Migrating a persistent disk to a different zone

You cannot attach a persistent disk to an instance in another zone. If you want to migrate your persistent disk data to another zone, you can use persistent disk snapshots . To do so:

  1. Create a snapshot of the persistent disk you would like to migrate.
  2. Apply the snapshot to a new persistent disk in your desired zone.

Deleting a persistent disk

When you delete a persistent disk, all its data is destroyed and you will not be able to recover it.

You cannot delete a persistent disk that is assigned to a specific instance. To check whether a disk is assigned, run gcloud compute instances list --format yaml , which will list all persistent disks in use by each instance.

To delete a disk:

$ gcloud compute disks delete DISK

Setting the auto-delete state of a persistent disk

Read-write persistent disks can be automatically deleted when the associated virtual machine instance is deleted. This behavior is controlled by the autoDelete property on the virtual machine instance for a given attached persistent disk, and it can be updated at any time. You can also prevent a persistent disk from being deleted by setting its autoDelete value to false.

To set the auto delete state of a persistent disk in gcloud compute , use the gcloud compute instances set-disk-auto-delete command:

$ gcloud compute instances set-disk-auto-delete INSTANCE [--auto-delete | --no-auto-delete] --disk DISK --zone ZONE

In the API, make a HTTP POST request to the following URI:

https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/instances/<instance>/setDiskAutoDelete?deviceName=DEVICE_NAME&autoDelete=true

Using the client library, use the instances().setDiskAutoDelete method:

def setAutoDelete(gce_service, auth_http):
  request = gce_service.instances().setDiskAutoDelete(project=PROJECT, zone=ZONE, deviceName=DEVICE_NAME, instance=INSTANCE, autoDelete='true')
  response = request.execute(http=auth_http)

  print response

Formatting Disks

Before you can use non-root persistent disks in Google Compute Engine, you need to format and mount them. We provide the safe_format_and_mount tool in our images to assist in this process. The safe_format_and_mount tool can be found at the following location on your virtual machine instance:

/usr/share/google/safe_format_and_mount

The tool performs the following actions:

This can be helpful if you need to use a non-root persistent disk from a startup script , because the tool prevents your script from accidentally reformatting your disks and erasing your data.

safe_format_and_mount works much like the standard mount tool:

$ sudo mkdir MOUNT_POINT
$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" DISK MOUNT_POINT

You can alternatively format and mount disks using standard tools such as mkfs and mount .

Checking an instance's available disk space

If you are not sure how much disk space you have, you can check the disk space of an instance's mounted disks using the following command:

me@my-instance:~$ sudo df -h

To match up a disk's file system name, run:

me@my-instance:~$ ls -l /dev/disk/by-id/google-*
...
lrwxrwxrwx 1 root root 3 MM  dd 07:44 /dev/disk/by-id/google-mypd -> ../../sdb  # google-mypd corresponds to /dev/sdb
lrwxrwxrwx 1 root root 3 MM  dd 07:44 /dev/disk/by-id/google-pd0 -> ../../sdc

me@my-instance:~$ sudo df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.4G  839M  8.1G  10% /
....
/dev/sdb              734G  197M  696G   1% /mnt/pd0 # sdb has 696GB of available space left
