
めもめも

The contents of this blog are my personal views and do not necessarily represent the positions, strategies, or opinions of my employer.

Introduction to Google Cloud Platform for OpenStackers (Part III)

Disclaimer

This is not an official document of Google Cloud Platform. Please refer to the GCP web site for the official information.

cloud.google.com

Part II of this article series is here.

enakai00.hatenablog.com

In this part, I will give a comparison between OpenStack Nova/Cinder and Compute Engine regarding VM instance management.

Instance size

In OpenStack, you choose an "Instance Type" (also known as a "flavor") to specify the instance size (number of vCPUs and amount of memory). Instance Types are predefined by the sysadmin; general users are not allowed to add custom types.

On the other hand, in Compute Engine, instance sizes are defined as "Machine Types". In addition to choosing a predefined Machine Type when you create a new instance, you can specify the number of vCPUs and the amount of memory separately to create your own "Custom Machine Type."
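As a sketch (assuming the gcloud CLI; the instance names and zone are arbitrary examples), creating an instance with a predefined Machine Type versus a Custom Machine Type might look like:

```shell
# Create an instance with a predefined machine type (2 vCPUs, 7.5GB memory).
gcloud compute instances create demo-predefined \
    --zone us-central1-a \
    --machine-type n1-standard-2

# Create an instance with a custom machine type (4 vCPUs, 5GB memory).
gcloud compute instances create demo-custom \
    --zone us-central1-a \
    --custom-cpu 4 \
    --custom-memory 5
```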

Reference:

Storage options

OpenStack: Ephemeral Disk and Cinder Volume

OpenStack provides two different kinds of instance-attached disks, "Ephemeral Disk" and "Cinder Volume". Originally, Ephemeral Disk was intended to be used as a system disk containing operating system files, whereas Cinder Volume was for storing persistent application data. However, since Live Migration is not available with Ephemeral Disk, people often use Cinder Volume for a system disk, too.

When you choose Ephemeral Disk for a system disk, the OS template image (managed by Glance) is cloned into the local storage of the compute node, and the local image is directly attached to the instance. When you destroy the instance, the attached image is destroyed as well. On the other hand, Cinder Volume provides a persistent disk area (LUN) which resides in an external storage box. In a typical configuration, the disk area (LUN) is exported to the compute node over the iSCSI protocol and attached to the instance as a virtual disk. Even when you destroy the instance, the volume remains in the external storage, and you can reuse it by attaching it to a new instance.

  • Comparing Ephemeral Disk and Cinder Volume

Since the disk image is stored in the local storage of the compute node, Ephemeral Disk is sometimes used to achieve better storage performance. It is also possible to use multiple storage boxes with different performance characteristics as backend devices for Cinder Volume; users can specify the preferred backend when creating a new Cinder Volume.
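For illustration (the volume type name, backend name, and size below are hypothetical), selecting a backend through a volume type with the OpenStack CLI might look like:

```shell
# As the admin: define a volume type mapped to a specific backend.
openstack volume type create high-iops
openstack volume type set --property volume_backend_name=ssd-backend high-iops

# As a user: create a volume on that backend by specifying the type.
openstack volume create --type high-iops --size 100 data-vol
```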

In addition, applications running on the instance can access the object storage provided by OpenStack Swift.

Compute Engine: Persistent Disk and Local SSD

Compute Engine provides "Persistent Disk" as persistent storage attached to an instance, corresponding to OpenStack's "Cinder Volume". It is used both for a system disk containing operating system files and for storing persistent application data. The data is automatically encrypted when it leaves the instance for the physical storage. A single Persistent Disk can be extended up to 64TB, but the maximum total capacity of attached disks is restricted based on the instance size.
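A minimal sketch of creating and attaching a Persistent Disk with the gcloud CLI (the disk and instance names are made up):

```shell
# Create a 500GB persistent disk in the same zone as the instance.
gcloud compute disks create data-disk \
    --zone us-central1-a \
    --size 500GB

# Attach it to a running instance as a secondary disk.
gcloud compute instances attach-disk my-instance \
    --zone us-central1-a \
    --disk data-disk
```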

When you need high-performance local storage, you can also use local SSDs. By attaching eight SSDs of 375GB each, you can use 3TB of local SSD capacity in total. Live Migration is available even with local SSDs; since the data in the local SSDs is copied between instances during Live Migration, there may be a temporary decrease in storage performance, though.
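Local SSDs are attached at instance creation time; repeating the flag attaches multiple devices. A sketch with the gcloud CLI (instance name hypothetical):

```shell
# Create an instance with two 375GB local SSDs attached via the SCSI interface.
gcloud compute instances create ssd-instance \
    --zone us-central1-a \
    --local-ssd interface=SCSI \
    --local-ssd interface=SCSI
```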

In addition, applications running on the instance can access the object storage provided by Cloud Storage.

Reference:

Instance Metadata

OpenStack: Metadata Service

OpenStack has a "Metadata Service" which provides the mechanism for retrieving instance information from the instance guest OS. By accessing special URLs under "http://169.254.169.254/latest/meta-data", you can retrieve instance information such as the instance type, security groups, and assigned IP addresses. You can also add custom metadata in "Key-Value" form.
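For example, from inside the guest OS you can query the Metadata Service with plain HTTP (the paths below follow the EC2-compatible layout mentioned above):

```shell
# List the available metadata keys.
curl http://169.254.169.254/latest/meta-data/

# Retrieve the instance type and the user-data.
curl http://169.254.169.254/latest/meta-data/instance-type
curl http://169.254.169.254/latest/user-data
```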

There is also a special metadata entry called "user-data". If you specify a text file as user-data when creating a new instance, the "cloud-init" agent running in the guest OS will receive it and perform startup configuration according to its contents. Typically, you can pass a script as user-data, which cloud-init will execute to install application packages on startup.
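A sketch of passing a startup script as user-data with the OpenStack CLI (the file name, image, flavor, and package are arbitrary examples):

```shell
# startup.sh: executed by cloud-init on first boot.
cat > startup.sh <<'EOF'
#!/bin/bash
yum -y install httpd
systemctl enable --now httpd
EOF

# Pass the script as user-data when creating the instance.
openstack server create --image centos7 --flavor m1.small \
    --user-data startup.sh my-web-server
```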

Compute Engine: Metadata Server

Compute Engine has a "Metadata Server" which provides the mechanism for retrieving instance and project information from the instance guest OS. Metadata is accessed through URLs under "http://metadata.google.internal/computeMetadata/v1/" (the hostname resolves to 169.254.169.254).

Metadata Server provides "Instance metadata" and "Project metadata". Instance metadata is associated with each instance, providing information specific to the instance, whereas Project metadata is associated with the project and shared by all instances in the project. You can add custom metadata in "Key-Value" form for both Instance metadata and Project metadata.
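For example, from inside the guest OS, metadata is queried over HTTP with the mandatory "Metadata-Flavor: Google" header (the custom project key "my-key" below is hypothetical):

```shell
# Instance metadata: the machine type of this instance.
curl -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/machine-type

# Project metadata: a custom key shared by all instances in the project.
curl -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/project/attributes/my-key
```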

There are also special metadata entries called "startup-script" and "shutdown-script". They are executed when the instance starts up or shuts down, respectively. Unlike OpenStack's user-data, startup-script is executed every time the instance boots, not only on the first boot. These are handled by the agent included in "compute-image-packages", which is preinstalled in the official template images.
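A sketch of setting a startup-script with the gcloud CLI (the script contents and instance name are examples):

```shell
# Provide startup-script inline; it runs on every boot, not just the first.
gcloud compute instances create web-instance \
    --zone us-central1-a \
    --metadata startup-script='#!/bin/bash
apt-get update
apt-get -y install apache2'
```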

Reference:

Agents in guest OS

OpenStack: cloud-init

The agent package called "cloud-init" is preinstalled in the standard guest OS images of OpenStack. It handles initial configuration at first boot, such as extending the root filesystem, storing an SSH public key, and executing the script provided as user-data.

Compute Engine: compute-image-packages

The agent package called "compute-image-packages" is preinstalled in the standard guest OS images of Compute Engine. It handles initial configuration at first boot, and it also handles dynamic system configuration while the guest OS is running.

The dynamic system configuration includes, for example, adding new SSH public keys and changing the network configuration required for HTTP load balancing. You can generate and add a new SSH key through the Cloud Console or the gcloud command after launching an instance; this is done through the agent running on the guest OS as a daemon process.

As a side note, the agent uses the Metadata Server for this dynamic configuration. Compute Engine's Metadata Server provides a mechanism to notify clients about metadata updates, and the agent is triggered by these updates.
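The notification mechanism mentioned above is a hanging HTTP GET: a client asks the Metadata Server to block until a value changes. A sketch from inside the guest OS:

```shell
# Block until any instance attribute changes, then return the new values.
curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/?recursive=true&wait_for_change=true"
```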

Reference:

Accessing other services from applications

In the standard guest OS images of Compute Engine, the Cloud SDK (including the gcloud command) is preinstalled, and you can use it with the "Service Account" privilege to access other GCP services. Through the IAM mechanism, access control is enforced per instance. For example, read-only access to Cloud Storage is assigned to an instance by default. If you want an application running in the instance to store data in Cloud Storage, you need to assign read-write access to the instance. With this mechanism, you don't have to set up passwords or credentials for applications running in the instance by hand.
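For illustration, access scopes are granted per instance at creation time, and the preinstalled tools then use the Service Account automatically (the instance and bucket names below are made up):

```shell
# Create an instance whose service account may read and write Cloud Storage.
gcloud compute instances create worker-instance \
    --zone us-central1-a \
    --scopes storage-rw

# Inside the instance: gsutil uses the service account; no credentials needed.
gsutil cp /tmp/result.csv gs://my-example-bucket/
```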

Reference:

What's next?

I will present a sample architecture for a typical 3-tier web application on both OpenStack and GCP.

enakai00.hatenablog.com