This is not an official document of Google Cloud Platform. Please refer to the GCP web site for the official information.
Part I of this article series is here.
In this part, I describe the networking functionality of GCP compared with OpenStack Neutron.
Comparing network models
Using OpenStack Neutron, you can create a virtual router inside your project (tenant), which provides an independent private network. It's just like creating your home network: you can add virtual L2 switches behind the router and assign private IP subnets. VM instances can communicate through the virtual router using private IPs.
If you want to expose VM instances to an external network, you can assign a floating IP to an instance. Floating IPs are a pool of global IP addresses secured for the project in advance. The virtual router provides the NAT operation between the floating IP and a private IP of the instance.
Neutron's private network is transparent to Availability Zones since it spans all Availability Zones in a Region. However, you cannot configure a virtual network spanning multiple Regions. A virtual network is confined to a single Region, so VM instances in different Regions need to communicate over an external network using floating IPs.
It's possible to create multiple virtual routers in a project. In this case, VM instances under different routers cannot communicate directly with private IPs. They need to communicate through an external network with floating IPs.
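As a concrete illustration, the Neutron setup described above can be sketched with the OpenStack CLI. The resource names (`my-router`, `my-net`, `my-vm`), the subnet range, and the external network name `public` are assumptions for this example; adjust them to your environment.

```shell
# Create a virtual router and a private network behind it
openstack router create my-router
openstack network create my-net
openstack subnet create my-subnet \
    --network my-net --subnet-range 192.168.0.0/24
openstack router add subnet my-router my-subnet

# Attach the router to the external network, then allocate
# a floating IP and assign it to an instance (NAT is done
# by the virtual router)
openstack router set my-router --external-gateway public
openstack floating ip create public
openstack server add floating ip my-vm <allocated-floating-ip>
```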
You can define multiple "Networks" in your GCP project. A "Network" is comparable to a virtual router in Neutron. One important difference, however, is that a "Network" connects all Regions through an internal network. (Please see the diagrams in the following sections for a more concrete image.)
In addition, there are two different types of GCP "Network": "Legacy network" and "Subnet network." Interestingly, a Legacy network provides a single private subnet spanning all Regions around the globe. You can think of the "Virtual Switch" in the following diagram as corresponding to the virtual router in Neutron.
Looking back at the history of network architecture, people have used subnets to manage network gear in different physical locations. Emulating real-world subnets is not a hard requirement for a cloud network. In GCP networking, you can use the tag-based access control mechanism, which is independent of the concept of subnets. Using a Legacy network, you no longer need to be bothered with subnet management, without compromising access control.
Having said that, ...
People still need subnets in some cases, so GCP currently uses Subnet networks as the default. You can create only Subnet networks from the Cloud Console; you need to use the gcloud command to create Legacy networks.
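For example, creating a Legacy network looks roughly like this. The network name and IP range are arbitrary examples, and the exact flag names have changed across gcloud releases, so check `gcloud compute networks create --help` for your version.

```shell
# Create a Legacy network: one private subnet spanning all Regions
gcloud compute networks create my-legacy-net \
    --subnet-mode=legacy --range=10.240.0.0/16
```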
A Subnet network is a concept similar to Neutron's virtual networks: you can create multiple subnets in each Region. However, totally unlike Neutron, all Regions are connected through an internal network, so all subnets are routable to each other over private IPs. You don't need global IPs or the Internet for inter-region communication.
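A minimal sketch of creating a Subnet network in custom mode with one subnet in each of two Regions (the names, Regions, and ranges are just examples):

```shell
# Create a Subnet network in custom mode (no subnets yet)
gcloud compute networks create my-net --subnet-mode=custom

# Add one subnet per Region; instances in the two subnets can
# still reach each other over private IPs via the internal network
gcloud compute networks subnets create subnet-us \
    --network=my-net --region=us-central1 --range=172.16.1.0/24
gcloud compute networks subnets create subnet-asia \
    --network=my-net --region=asia-east1 --range=172.16.2.0/24
```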
You can think of the "Virtual Switches" (corresponding to the virtual routers of Neutron) as being connected through the global internal network.
I will mention the GCP's internal network details in a later section.
Note that Legacy networks and Subnet networks are not mutually exclusive. Just as you can create multiple virtual routers in Neutron, you can create multiple "Networks", mixing Legacy networks and Subnet networks. When you create a new project, there is a single "Network" named "default". It has a subnet with a /20 range, also named "default", in each Region.
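You can confirm this with the gcloud CLI:

```shell
# List the Networks in the project and the per-Region subnets
# of the automatically created "default" network
gcloud compute networks list
gcloud compute networks subnets list --network=default
```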
Finally, regarding global IPs, GCP provides "External IPs" as a pool of global IPs for each project. There are two types of External IPs: "Static" and "Ephemeral". "Static" External IPs work similarly to Neutron's Floating IPs; you can use them as a permanent resource. On the other hand, "Ephemeral" ones are assigned automatically when you launch a new instance, with no guarantee of permanence. In other words, when you stop and restart an instance, you may get a different global IP if you're using an "Ephemeral" one.
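A sketch of reserving a Static External IP and attaching it to a new instance (the resource names, Region, and zone are examples):

```shell
# Reserve a Static External IP in a Region
gcloud compute addresses create my-static-ip --region=us-central1

# Launch an instance with the reserved address; without --address,
# an Ephemeral External IP is assigned automatically
gcloud compute instances create my-vm \
    --zone=us-central1-a --address=my-static-ip
```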
Neutron provides the concept of "Security Groups" as a packet filtering mechanism. A Security Group contains multiple ACLs independent of specific VM instances. Once you associate a Security Group with an instance, the ACLs are applied to it. Typically, you would define Security Groups corresponding to specific roles of instances, such as web servers or DB servers. Since Neutron's virtual networks are defined in each Region, you also need to define Security Groups in each Region.
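With the OpenStack CLI, a role-based Security Group looks like this (the group name and the instance name `my-vm` are example assumptions):

```shell
# A Security Group for the "web server" role:
# allow inbound TCP 80 from anywhere
openstack security group create web-server
openstack security group rule create web-server \
    --ingress --protocol tcp --dst-port 80 --remote-ip 0.0.0.0/0

# Apply the role to an instance
openstack server add security group my-vm web-server
```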
In GCP networking, you define firewall rules in each Network, which spans all Regions. A firewall rule is specified using a "rule" and "tags". Tags are arbitrary labels associated with instances as metadata; you can associate multiple tags with a single instance. The "rule" part, on the other hand, is an ACL specifying an allowed packet flow using the following conditions.
- Source: IP range, subnet, tags
- Destination: tags
With this mechanism, you can separate "tags to specify instance roles" from "rules to specify ACLs corresponding to roles". For example, once you define a rule allowing TCP port 80 packets from any IP to the "web-server" tag, you can add the "web-server" tag to any instance to allow TCP port 80 connections to it. When you define a rule without specifying target tags, it allows the specified connections to all instances.
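The same separation of roles and ACLs in gcloud might look like this (the rule name, instance name, and zone are examples):

```shell
# Rule: allow TCP 80 from any source to instances tagged "web-server"
gcloud compute firewall-rules create allow-http \
    --network=default --allow=tcp:80 \
    --source-ranges=0.0.0.0/0 --target-tags=web-server

# Turn an existing instance into a web server just by tagging it
gcloud compute instances add-tags my-vm \
    --zone=us-central1-a --tags=web-server
```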
By the way, when you create a new project, the "default" network has the following predefined rules. Can you guess what kind of connections they allow? "10.128.0.0/9" is the range containing the underlying subnets in all Regions.
From top to bottom, it allows the following connections.
- ICMP from any external source.
- Any packets between all instances connected to the "default" network.
- RDP from any external source.
- SSH from any external source.
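If you were to recreate these predefined rules by hand, the commands would look roughly like this (the rule names follow the actual defaults of the "default" network):

```shell
gcloud compute firewall-rules create default-allow-icmp \
    --network=default --allow=icmp --source-ranges=0.0.0.0/0
gcloud compute firewall-rules create default-allow-internal \
    --network=default --allow=tcp:0-65535,udp:0-65535,icmp \
    --source-ranges=10.128.0.0/9
gcloud compute firewall-rules create default-allow-rdp \
    --network=default --allow=tcp:3389 --source-ranges=0.0.0.0/0
gcloud compute firewall-rules create default-allow-ssh \
    --network=default --allow=tcp:22 --source-ranges=0.0.0.0/0
```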
GCP's internal network
Google owns the global internal network to provide various services to external customers. The overall architecture is explained in the following web site.
GCP utilizes this global infrastructure to provide a dedicated global internal network. Connections to external networks are provided at multiple locations called "Edge Points of Presence (PoPs)" around the globe. When you connect to an instance running on GCP from your location, you reach one of the PoPs at the optimum location and are routed through the high-bandwidth internal network. In addition, since each PoP provides a content caching mechanism, you can leverage it without any special treatment if you serve external web services from App Engine or Compute Engine on GCP through the global load balancer service.
There are some network providers who offer dedicated network facility to connect your office network directly to GCP's network infrastructure. Please refer to the following web site for details.
In the next part, I will give a comparison between OpenStack Nova/Cinder and Compute Engine regarding VM instance management.