This document describes changes from the v1beta16 API to the v1 API, and also describes some changes that apply to all API versions. It is intended for developers who are familiar with v1beta16 and Google Compute Engine, and who need to understand the changes introduced with the v1 API release and the generally available release of Google Compute Engine.
Users who are new to the API and intend to start programming with the v1 API can skip the existing transition guides.
Summary of changes
The following is a list of changes that apply to users migrating from v1beta16 to v1, followed by a list of changes that apply to all API versions.
Changes for migrating to v1 from v1beta16
The changes listed here apply specifically to users who are updating their applications from v1beta16 to v1.
- Removed scratch boot disks.
As part of the transition to using persistent disks, scratch boot disks have been deprecated. It is not possible to create an instance in the v1 API with a scratch boot disk.
- Removed the kernels resource.
With new support for embedded kernels, the Kernels resource is no longer required for launching an instance and has been removed completely from the v1 API.
This change will affect how you make requests to the API to create your instances and persistent disks. See the API and gcutil changes section for more information.
- Require images with an embedded kernel to start an instance and create a persistent root disk.
With the v1 API, you cannot specify an image with a Google-provided kernel. Instead, you must provide an image with an embedded kernel and use it to create a root persistent disk before you can start an instance. If you would like to use Google-provided kernels, you can continue to specify them in the v1beta16 API.
For more information, see the New support for user-supplied kernel binaries section.
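For example, the typical v1 startup flow is to create a root persistent disk from an image that has an embedded kernel, and then boot an instance from that disk. The following is a minimal sketch; the project, zone, and resource names are placeholders, and the image must be one with an embedded kernel:
user@local:~$ gcutil --project=my-project adddisk my-root-disk \
    --zone=us-central1-a --source_image=<image-with-embedded-kernel>
user@local:~$ gcutil --project=my-project addinstance my-instance \
    --zone=us-central1-a --disk=my-root-disk,boot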
Changes affecting all API versions
The changes listed here apply to all API versions.
- Updated persistent disks to use a new model for space and I/O.
Persistent disks have been updated so that persistent disk performance is now tied to the size of the persistent disk itself. The I/O performance of a persistent disk now scales linearly with the size of the persistent disk, up to the maximum I/O caps allowed. We have also simplified persistent disk pricing by removing I/O charges and lowering the price for persistent disk space.
For more information, see the New persistent disk model section.
- Deprecated all *-d machine types.
Along with the removal of scratch boot disks, we are also deprecating all -d machine types, which contain extended scratch disk space. It is no longer possible to create an instance using a scratch boot disk.
- Added support for user-supplied kernel binaries.
Previously, all custom and Google-built images were required to use a Google-provided kernel that was injected into the instance during instance startup. We have removed this requirement and are providing images that have embedded community-managed kernels. We're also allowing users to build their own kernels in their images and use them on their virtual machine instances.
For more information, see the New support for user-supplied kernel binaries section.
- Updated default non-root persistent disk size to 500GB in gcutil.
In gcutil, if users do not explicitly specify the sizeGb field for a non-root persistent disk, the persistent disk will automatically have 500GB of disk space upon creation.
- Updated metadata server version to v1.
The metadata server has been updated to v1 and users should use the new metadata server URL:
http://metadata.google.internal/computeMetadata/v1
- Added new required header for v1 metadata server requests.
For security reasons, all requests to the metadata server now require the Metadata-Flavor: Google header, which indicates that the request was made by a user or by installed software that has access to the metadata server. If the header is missing, the metadata server denies the request.
Additionally, the metadata server will now deny all requests that have the X-Forwarded-For header, even if the header data is empty.
For more information, see the New metadata server version section.
New persistent disk model
We have lowered prices for persistent disk to $0.04/GB and also included input/output (I/O) operations free of charge, up to the specified performance caps of the persistent disk volume.
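For example, under this pricing a 500GB persistent disk volume costs 500GB × $0.04/GB = $20 per month, with no separate charge for the I/O it performs within its caps.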
The new unified persistent disk model combines persistent disk space with persistent disk performance. Persistent disk performance now scales linearly with the size of the persistent disk volume, up to the specified performance caps.
For a comprehensive, in-depth look at the new unified persistent disk model, and for explicit instructions and scenarios on how to transition to it, read the Compute Engine Disks: Price, Performance, and Persistence tech article. The rest of this section makes several references to that article, and we recommend reviewing it in addition to this transition guide.
Changes for persistent disk users
If you currently use persistent disks, the new model provides better performance and more predictable monthly costs. You might need larger persistent disk volumes to achieve your current performance, but the decrease in persistent disk pricing will likely reduce your overall costs. Your costs will also be more predictable now that there are no I/O charges.
Persistent disk boot performance
For persistent root disk volumes, Google Compute Engine provides burst capability that enables short bursts of I/O. This burst capability applies to booting, package installation, and other sporadic, short-term I/O activity.
Recommended changes
Existing persistent disk volumes will remain unchanged for six months from the launch of the v1 API. During that time, you should create a new persistent disk and migrate your data. After six months, old persistent disks will automatically be converted to use the new performance characteristics, and if you have not migrated to a larger volume, you might experience a performance drop. If you're not sure whether you need to migrate to a larger persistent disk volume, review the Compute Engine Disks: Price, Performance, and Persistence tech article for specific instructions on how to decide.
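Before migrating, it is a good idea to back up the existing volume with a snapshot. A minimal sketch using the addsnapshot command shown later in this guide (the project, disk, and snapshot names are placeholders):
user@local:~$ gcutil --project=my-project addsnapshot my-disk-backup --source_disk=my-disk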
Changes for scratch disk users
For most existing scratch disk users, persistent disks offer better performance than scratch disks, along with all the other benefits that persistent disks provide, including encryption, high availability, high resilience, performance consistency, persistent disk snapshots, and dynamic attaching and detaching of persistent disks. Old scratch disks were restricted by the size of the virtual machine instance: you had to buy specific disk sizes, and larger virtual machine instances if you needed more disk space than was available. With persistent disks, you can attach volumes of any size to any size of virtual machine.
Although the cost per GB of persistent disk space is slightly higher than that of scratch disk space, the ability to create exactly the amount of disk space you need and to select the right machine type means that most users will benefit from the greater flexibility of persistent disks, at overall lower costs.
Required changes
All existing instances that use scratch disk volumes will continue to run until June 1, 2014. At that time, all scratch disk volumes will be terminated.
You cannot create new scratch disks.
Migrating data from an old data disk to a new persistent disk
It is always recommended that you use a data disk that is separate from your boot disk. It is also a best practice to have procedures to recreate a boot disk on instance failure and procedures to recreate data from durable storage.
The following instructions describe how to copy data from an old scratch disk, or from an old persistent disk being used as a data disk, to a new persistent disk. They are general instructions; if you already have procedures for restoring disk data, use those instead.
- ssh into the instance where your persistent or scratch disk is attached.
user@local:~$ gcutil --project=my-project ssh my-instance
- Set the following environment variables:
NEWDISK_SIZE_GB=<desired-size>
NEWDISK_NAME=<new-disk-name>
NEWDISK_MOUNT=/mnt/newdata
NEWDISK_ZONE=<zone>   # must be the same zone as the current instance
# For a scratch disk, <olddisk> should be similar to "ephemeral-disk-0"
# For a persistent disk, <olddisk> should be the name of the persistent disk
OLDDISK_NAME=<olddisk>
OLDDISK_MOUNT=<current-mnt-point>
- Create a new persistent disk.
user@my-instance:~$ gcutil adddisk ${NEWDISK_NAME} \
    --zone=${NEWDISK_ZONE} --size_gb=${NEWDISK_SIZE_GB}
If your instance does not have service accounts enabled, gcutil might prompt you to authenticate. Complete the authentication process as directed.
- Attach the new persistent disk.
user@my-instance:~$ gcutil attachdisk $(hostname) --disk=${NEWDISK_NAME}
- Format and mount the new persistent disk.
user@my-instance:~$ sudo mkdir ${NEWDISK_MOUNT}
user@my-instance:~$ NEWDISK_DEVICE=$(basename $(readlink /dev/disk/by-id/google-${NEWDISK_NAME}))
user@my-instance:~$ sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" \
    /dev/${NEWDISK_DEVICE} ${NEWDISK_MOUNT}
- Stop any services that might be writing to the old disk.
- Remount the old disk in read-only mode (if possible).
user@my-instance:~$ OLDDISK_DEVICE=$(basename $(readlink /dev/disk/by-id/google-${OLDDISK_NAME}))
user@my-instance:~$ sudo mount -o remount,ro /dev/${OLDDISK_DEVICE}
- Copy the data from the old disk to the new persistent disk.
user@my-instance:~$ sudo cp -rax ${OLDDISK_MOUNT}/* ${NEWDISK_MOUNT}
- Unmount the old and new disks.
user@my-instance:~$ sudo umount ${OLDDISK_MOUNT}
user@my-instance:~$ sudo umount ${NEWDISK_MOUNT}
- Mount the new disk at the old mount point.
user@my-instance:~$ sudo mount /dev/${NEWDISK_DEVICE} ${OLDDISK_MOUNT}
- Remove the temporary new disk mount point.
user@my-instance:~$ sudo rmdir ${NEWDISK_MOUNT}
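To confirm that the migration succeeded, you can check that the new device is mounted at the old mount point and spot-check your files. This quick verification sketch reuses the environment variables defined above:
user@my-instance:~$ df -h ${OLDDISK_MOUNT}    # should list /dev/${NEWDISK_DEVICE}
user@my-instance:~$ ls ${OLDDISK_MOUNT}       # spot-check that your data is present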
Frequently asked questions
- What happens to my old scratch disks?
- Your existing *-d virtual machines and existing virtual machines that have scratch disk for boot volumes will continue to run until March 1, 2014, at which time they will be terminated.
- Can I make new virtual machines with scratch disk space?
- For a period of time, you can create virtual machines that use the deprecated *-d machine types, and create an instance that uses scratch boot disk space in the v1beta16 API. However, note that we will eventually remove *-d machine types and scratch boot volumes, and we recommend that you transition to using persistent disks and non-d machine types. It is not possible to create scratch boot disks in the v1 API. For an example of pinning gcutil to v1beta16, see the sketch after this FAQ.
- Is this harder to manage than scratch disks?
- Although root persistent disks are now the recommended way to start an instance, there is currently no single command in the API to automatically create a persistent disk with a virtual machine instance, or to delete a root persistent disk when the virtual machine is terminated. For now, you must make two separate requests to start a virtual machine: one to create a root persistent disk, and another to create the virtual machine. We are aware of this limitation and plan to make this easier.
- What if my disk volume is too slow or too small, and I need to grow it?
- To migrate your disk volume to a larger volume, follow the instructions in Migrating data from an old data disk to a new persistent disk.
- How will you transition existing persistent disk performance to the new model?
- Existing persistent disk volumes will continue to be throttled the old way for six months, until June 3, 2014. If you expect the new I/O bounds for a volume to be inadequate, you should migrate to an appropriately sized persistent disk volume.
New persistent disk throttling will be used for all volumes created after December 3, 2013.
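As noted in the FAQ above, scratch boot disks and *-d machine types remain available only through the v1beta16 API. A minimal sketch of pinning gcutil to that version (the project and instance names are placeholders; n1-standard-1-d is one of the deprecated machine types):
user@local:~$ gcutil --project=my-project --service_version=v1beta16 \
    addinstance my-scratch-instance --machine_type=n1-standard-1-d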
New support for user-supplied kernel binaries
Prior to the v1 API, users had to use Google-provided kernels, which were automatically loaded into memory during instance startup. These kernels were a resource separate from Google-provided images and were required in order to start an instance on Google Compute Engine. With the v1 API, we have removed this restriction: it is now possible to use user-supplied kernels that are loaded from a root persistent disk during instance startup. All new Google-provided images now include an embedded kernel.
With support for embedded kernels, the kernel field is no longer used and has been removed from the API and from gcutil for the v1 API. For more information on these changes, see the API and gcutil changes section.
The easiest way to get started using an image with an embedded kernel is to create a new root persistent disk from the latest Google-provided images. Alternatively, you can upgrade your existing root persistent disk or custom image to use a user-supplied kernel and boot loader by following the instructions below.
Upgrading an instance with a root persistent disk to use an embedded kernel
To continue using persistent disks created before the v1 API, you must upgrade the disk to use an embedded kernel.
- Alias your instance, disk, and project names:
user@local:~$ export INSTANCE=<instance-name>
user@local:~$ export DISK=<disk-name>
user@local:~$ export PROJECT=<project-id>
- Write any cached data out to your persistent disk so you can take an accurate snapshot later on:
user@local:~$ gcutil --project=$PROJECT ssh $INSTANCE sudo sync
- Create a snapshot to back up your data.
user@local:~$ gcutil --project=$PROJECT addsnapshot $DISK-migrate-backup --source_disk=$DISK
- ssh into the instance currently using the persistent root disk:
user@local:~$ gcutil --project=$PROJECT ssh $INSTANCE
- Install a kernel and boot loader, based on your operating system:
Debian 7
- Verify that the instance is running Debian 7 (wheezy):
user@myinst:~$ cat /etc/os-release | grep PRETTY_NAME
PRETTY_NAME="Debian GNU/Linux 7 (wheezy)"
- Install a Debian 7 kernel:
user@myinst:~$ sudo apt-get install linux-image-amd64
- Install grub-pc:
user@myinst:~$ sudo apt-get install grub-pc
Select the first disk when prompted.
- Download the latest version of the Google Compute Engine guest OS tools.
Check the latest versions on our compute-image-packages repository and replace the links in the example commands below with the correct versions before you run the commands.
Caution: The following examples use a specific version of these packages that might not be the most recent. Check that you are downloading the latest release to ensure that you get the most up-to-date changes and the most stable release.
user@myinst:~$ curl -L --remote-name-all \
    https://github.com/GoogleCloudPlatform/compute-image-packages/releases/download/1.1.1/google-startup-scripts_1.1.1-1_all.deb \
    https://github.com/GoogleCloudPlatform/compute-image-packages/releases/download/1.1.1/google-compute-daemon_1.1.1-1_all.deb \
    https://github.com/GoogleCloudPlatform/compute-image-packages/releases/download/1.1.1/python-gcimagebundle_1.1.1-1_all.deb
- Install the updated packages and make sure they are all up to date. You might see some warnings when installing these packages; these can be resolved by running apt-get -f -y install, as shown below:
user@myinst:~$ sudo dpkg -i google-startup-scripts_1.1.1-1_all.deb \
    google-compute-daemon_1.1.1-1_all.deb \
    python-gcimagebundle_1.1.1-1_all.deb
user@myinst:~$ sudo apt-get -f -y install
CentOS 6
- Verify that the instance is running CentOS 6:
user@myinst:~$ cat /etc/system-release
CentOS release 6.2 (Final)
- Install a CentOS kernel:
user@myinst:~$ sudo yum install kernel-xen
- Record the UUID of your persistent disk:
user@myinst:~$ export UUID=$(sudo tune2fs -l /dev/sda1 | grep UUID | awk '{ print $3 }')
user@myinst:~$ echo $UUID
- Create an initial grub configuration.
Make a directory to hold your grub configuration, and then define the configuration by creating a file named /boot/grub/grub.conf:
user@myinst:~$ sudo mkdir /boot/grub
user@myinst:~$ sudo bash -c "echo \"default
timeout=2
title CentOS
root (hd0,0)
kernel /boot/vmlinuz-2.6.32-358.18.1.el6.x86_64 ro root=UUID=$UUID noquiet earlyprintk=ttyS0 loglevel=8
initrd /boot/initramfs-2.6.32-358.18.1.el6.x86_64.img\" > /boot/grub/grub.conf"
- Install grub:
user@myinst:~$ sudo yum install grub
user@myinst:~$ sudo grub-install /dev/sda1
- Set up the final grub configuration:
user@myinst:~$ sudo bash -c "echo \"
find /boot/grub/stage1
root (hd0,0)
setup (hd0)
quit\" | grub"
You should receive output similar to the following:
grub> find /boot/grub/stage1
find /boot/grub/stage1
 (hd0,0)
grub> root (hd0,0)
root (hd0,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 27 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+27 p (hd0,0)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done.
grub> quit
- Download the latest version of the Google Compute Engine guest OS tools.
Check the latest versions on our compute-image-packages repository and replace the links in the example commands below with the correct versions before you run the commands.
Caution: The following examples use a specific version of these packages that might not be the most recent. Check that you are downloading the latest release to ensure that you get the most up-to-date changes and the most stable release.
user@myinst:~$ sudo yum install \
    https://github.com/GoogleCloudPlatform/compute-image-packages/releases/download/1.1.1/google-compute-daemon-1.1.1-1.noarch.rpm \
    https://github.com/GoogleCloudPlatform/compute-image-packages/releases/download/1.1.1/google-startup-scripts-1.1.1-1.noarch.rpm \
    https://github.com/GoogleCloudPlatform/compute-image-packages/releases/download/1.1.1/gcimagebundle-1.1.1-1.noarch.rpm
- To ensure that your instance does not use a static MAC address, run the following commands to update your 70-persistent-net.rules file:
user@myinst:~$ sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules
user@myinst:~$ sudo chattr -i /etc/udev/rules.d/70-persistent-net.rules
user@myinst:~$ sudo rm -f /etc/udev/rules.d/70-persistent-net.rules
user@myinst:~$ sudo mkdir /var/lock/subsys
user@myinst:~$ sudo chmod 755 /var/lock/subsys
user@myinst:~$ sudo /etc/init.d/sshd restart
Shut down your instance:
user@myinst:~$ sudo shutdown -h now
Recreate your instance:
user@local:~$ gcutil --project=$PROJECT deleteinstance $INSTANCE --nodelete_boot_pd
user@local:~$ gcutil --project=$PROJECT addinstance $INSTANCE --disk=$DISK,boot [--service_account_scopes=storage-rw]
SSH into your instance and check your kernel.
user@local:~$ gcutil --project=$PROJECT ssh $INSTANCE uname -a
(Optional) Because you are charged for snapshot storage, you might want to delete your snapshot after you are satisfied that the migration is complete.
$ gcutil --project=$PROJECT deletesnapshot $DISK-migrate-backup
You can now issue kernel-related commands on your instance, such as upgrading the kernel through your package manager.
Accessing a custom image without an embedded kernel
As of the v1 API, images without an embedded kernel are no longer bootable on Compute Engine instances. If you have an image without an embedded kernel, you can still access the data on your image by using it as a data disk:
- Create a persistent data disk (non-root persistent disk) from your image.
user@local:~$ gcutil --project=my-project adddisk <disk-name> --source_image=<your-image>
- Attach the disk as a data disk to your instance.
user@local:~$ gcutil --project=my-project attachdisk --disk=<disk-name> --zone=<zone> <instance-name>
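After attaching the disk, you can mount it inside the instance to reach your data. A minimal sketch, following the same device-naming convention used in the migration steps earlier in this guide; depending on how the image was partitioned, you might need to mount the first partition rather than the whole device:
user@my-instance:~$ sudo mkdir -p /mnt/olddata
user@my-instance:~$ DISK_DEVICE=$(basename $(readlink /dev/disk/by-id/google-<disk-name>))
user@my-instance:~$ sudo mount /dev/${DISK_DEVICE}1 /mnt/olddata    # or /dev/${DISK_DEVICE} if unpartitioned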
New metadata server version
We've released a new metadata server version, v1. The current version, v1beta1, remains available, but users should transition to v1 for the latest features and fixes. Changes in v1 include:
- New v1 URL
The new v1 metadata server can be reached at the following URL:
http://metadata.google.internal/computeMetadata/v1
- Requests to the metadata server now require a security header.
For security reasons, we've added a new requirement that all requests to the metadata server must include the following header in order to be successful:
Metadata-Flavor: Google
This header indicates to Google Compute Engine that the request was made by a user or by installed software that is under the user's control. If you do not provide this header, the metadata server will not fulfill your request.
- Any requests containing the X-Forwarded-For header will automatically be rejected.
This header generally indicates that the request was proxied, so the metadata server will reject any requests that carry it.
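For example, both of the following requests would be refused: the first omits the required header, and the second carries X-Forwarded-For. (instance/hostname is used here as a sample metadata entry.)
user@myinst:~$ # Rejected: missing the Metadata-Flavor header
user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/hostname"
user@myinst:~$ # Rejected: the X-Forwarded-For header is present
user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/hostname" \
    -H "Metadata-Flavor: Google" -H "X-Forwarded-For: 10.0.0.1"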
To update your scripts and applications:
- Update the metadata server URL to:
http://metadata.google.internal/computeMetadata/v1
- Add the new header to all of your requests.
For example, to make a curl request to the disks/ entry on the metadata server, you would do the following:
user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/disks/" -H "Metadata-Flavor: Google"
0/
1/
2/
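The same URL-and-header pattern applies to any other metadata path. For instance, the following sketches fetch a single value and a recursive listing; recursive=true asks the metadata server to return everything under a directory entry:
user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/hostname" \
    -H "Metadata-Flavor: Google"
user@myinst:~$ curl "http://metadata.google.internal/computeMetadata/v1/instance/?recursive=true" \
    -H "Metadata-Flavor: Google"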
API and gcutil changes
API changes
Parsing API requests
When parsing API requests to an Image resource or an Instance resource, note that the kernel, preferredKernel, and image properties will no longer be set in the resource definition of instances and images that use custom kernels. For Instance resources, the kernel and image properties will be removed:
{ "kind": "compute#instance", "id": "557314301494380929", "creationTimestamp": "2013-09-25T11:47:24.018-07:00", ... "machineType": "<some-machine-type>","image": "<image>", "kernel": "<kernel>", "canIpForward": false, ... }
For Image resources, the preferredKernel property will be removed:
{ "kind": "compute#image", "id": "3655366372269804321", "creationTimestamp": "2012-12-21T12:51:01.806-08:00", ... "sourceType": "RAW","preferredKernel": "<kernel>", "rawDisk": { "containerType": "TAR", "source": "" }, "status": "READY" }
For more information, review the API reference documentation.
Making API requests
- In the v1 API, you cannot use an image that refers to a Google-provided kernel.
You must provide an image that has an embedded kernel. If you need to use an image that refers to a Google-provided kernel, you can do so using the v1beta16 API.
- When making direct API requests, you no longer need to provide a kernel or image when starting an instance using a root persistent disk through the API.
By not supplying a kernel parameter, Google Compute Engine knows to use the kernel provided on your persistent disk instead. Your updated API request body should now look similar to the following, without the kernel and image fields:
{
  ...
  "disk": [{
    "deviceName": "<disk-name>",
    "source": "<disk-uri>",
    "boot": "true",
    ...
  }]
  ...
}
To add an image through the API, omit the preferredKernel field in your request body:
{
  "name": "<image-name>",
  "sourceType": "<source-uri>"
}
If your application or script uses a
*-d
machine type, you will need to update it to use a non-*-d instance, like so:body = { 'name': NEW_INSTANCE_NAME, 'machineType': 'http://www.googleapis.com/compute/v1/projects/myproject/global/machineTypes/n1-standard-2', # This requests uses a non *-d machine type 'networkInterfaces': [{ 'accessConfigs': [{ 'type': 'ONE_TO_ONE_NAT', 'name': 'External NAT' }], 'network': <fully-qualified-network_url> }], 'disk': [{ 'source': <fully-qualified-disk-url>, 'boot': 'true', 'type': 'PERSISTENT' }] }
- Persistent root disks are required for instance creation, and the image field is no longer supported.
Scratch boot disks are deprecated and you must specify a root persistent disk during instance creation. In the API, you must first create a boot disk before you can use it. To create a new persistent root disk, make a request to the following URI:
https://www.googleapis.com/compute/v1/projects/<project-id>/zones/<zone>/disks?sourceImage=<fully-qualified-url-to-image>
Provide a request body that looks similar to the following:
body = {
    'name': <disk-name>
}
After you create your root persistent disk, you can specify it in an instance creation request, omitting the image field:
body = {
    'name': NEW_INSTANCE_NAME,
    'machineType': <fully-qualified-machine_type_url>,
    'networkInterfaces': [{
        'accessConfigs': [{
            'type': 'ONE_TO_ONE_NAT',
            'name': 'External NAT'
        }],
        'network': <fully-qualified-network_url>
    }],
    'disk': [{
        'source': <fully-qualified-disk-url>,
        'boot': 'true',
        'type': 'PERSISTENT'
    }]
}
gcutil changes
- When creating new instances using gcutil, gcutil will automatically create a root persistent disk.
The --nopersistent_boot_disk flag is deprecated and cannot be used with the v1 API. You can still use the flag if you specify --service_version=v1beta16, but this is not recommended.
If you create a non-root persistent disk using gcutil and do not specify the
--size_gb
flag, a 500GB persistent disk will be created for you by default.This is the default size of non-root persistent disks. You can override this size by providing the
--size_gb
flag. -
Removed all kernel-related commands and parameters from gcutil
All kernel commands have been removed from gcutil. Previous kernel commands, such as
gcutil listkernels
will not work, unless you explicitly specify the v1beta16 API.This applies for kernel-related flags as well, such as
--preferred_kernel
and--kernel
. -
All kernel and image information will appear as empty attributes for image and instance resources
When getting information about images that are using a custom kernel with the
gcutil getimage
andgcutil getinstance
commands, thekernel
and theimage
attribute will appear as blank:$ gcutil --project=<project-id> getimage <image-name> +---------------+-------------------------------+ | name | image | | description | | | creation-time | 2013-09-25T12:47:49.959-07:00 | | kernel | | | deprecation | | | replacement | | | status | READY | +---------------+-------------------------------+ $ gcutil --project=<project-id> getinstance <instance-name> +------------------------+-------------------------------------------------+ | name | instance | | description | | | creation-time | 2013-09-25T11:47:24.018-07:00 | | machine | us-central1-a/machineTypes/n1-standard-1-d | | image | | | kernel | | | zone | us-central1-b | | tags-fingerprint | 42WmSpB8rSM= | | metadata-fingerprint | 42WmSpB8rSM= | | ... | | +------------------------+-------------------------------------------------+