Multi-node Deployments

Subject to requirements, Google may provide ISPs with multiple GGC nodes. This section describes various configuration options and requirements for multi-node GGC deployments.

A GGC Network Location (GNL) is two or more GGC nodes, configured to operate as a single logical grouping.

Data and traffic are spread across all the machines in all the nodes in a GNL. This increases the available storage space, which improves GGC performance and reduces traffic on your peering and/or transit links.

Configuration options for multi-node deployments

There are two commonly used configuration options:

  • Two or more nodes, in two or more metros. Normally these will serve different sets of users, so they should be in different GNLs
  • Two or more nodes, in the same metro. Normally these will serve the same set of users, so they should be grouped into a single GNL

If you have more complex configurations, such as two distinct sets of users in the same metro, please contact us to discuss alternative configuration options.

GNL details

"GGC Network Location" is shown in the ISP portal assets pages, under the "Configuration" tab. Contact us if you need to change this.

The Capacity Forecasting page shows current and predicted traffic levels. These are displayed per GNL, not for each individual node.

Announcing prefixes to nodes in a GNL

All nodes in a GNL are intended to serve the same set of users. You should always advertise identical prefixes over all the BGP sessions, including identical prefix sizes and AS-paths.

If a prefix is missing in the BGP feed at one node in a GNL, some user requests may be served from other Google data locations. This is likely to degrade user experience, and increase traffic on your peering and/or transit links.

You can review BGP data in the ISP portal. Prefixes may be marked as "Inconsistent GNL" if they're not advertised identically across all nodes in the GNL.
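
As an illustration only, the following Python sketch shows one way you could check your own BGP export data for this kind of inconsistency before it appears in the portal. The data structure, node names and prefixes are hypothetical; the "Inconsistent GNL" flag itself is computed by Google, not by this code.

    # Hypothetical view of the routes advertised over the BGP session to each node
    # in one GNL: a set of (prefix, AS-path) tuples per node.
    announcements = {
        "node-a": {("198.51.100.0/24", (64500,)), ("203.0.113.0/24", (64500,))},
        "node-b": {("198.51.100.0/24", (64500,))},  # 203.0.113.0/24 is missing here
    }

    def inconsistent_prefixes(announcements):
        """Return prefixes that are not advertised identically to every node in the GNL."""
        all_routes = set().union(*announcements.values())
        return {
            prefix
            for prefix, as_path in all_routes
            if any((prefix, as_path) not in routes for routes in announcements.values())
        }

    print(inconsistent_prefixes(announcements))  # prints {'203.0.113.0/24'}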

Announcing prefixes to nodes in different GNLs

It's common for prefixes to be advertised at multiple GGC nodes. This is a good way to ensure that users are served from GGC in your network, even if the preferred serving location is unavailable or has insufficient capacity. We respect normal BGP logic for prefixes seen at multiple Google data locations. This includes prefix size, AS-path length and BGP community tags.

When we see identically advertised prefixes at different GNLs, we use other information to decide where to serve users. This can lead to unstable and sometimes undesirable traffic flows.
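
To make the idea concrete, here is a deliberately simplified Python sketch of longest-prefix-then-shortest-AS-path selection. It is not Google's traffic-steering logic, which considers more signals than this; the GNL names and routes are made up.

    import ipaddress

    # Hypothetical routes covering the same users, seen at two different GNLs.
    routes = [
        {"gnl": "metro-a", "prefix": "198.51.100.0/23", "as_path": [64500]},
        {"gnl": "metro-b", "prefix": "198.51.100.0/24", "as_path": [64500, 64501]},
    ]

    def preferred_gnl(user_ip, routes):
        """Prefer the most specific matching prefix, then the shortest AS-path."""
        candidates = [
            r for r in routes
            if ipaddress.ip_address(user_ip) in ipaddress.ip_network(r["prefix"])
        ]
        if not candidates:
            return None  # no route from this ASN; users go to other locations
        best = max(
            candidates,
            key=lambda r: (ipaddress.ip_network(r["prefix"]).prefixlen, -len(r["as_path"])),
        )
        return best["gnl"]

    print(preferred_gnl("198.51.100.10", routes))  # metro-b: the more specific /24 wins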

Enabling IPv6 on nodes in a GNL

In each GNL, we require that either all nodes have IPv6 enabled, or all nodes have IPv6 disabled. If only some of the nodes in a GNL have IPv6 enabled, users may be served from other Google data locations. This is likely to degrade user experience and increase traffic on your peering and/or transit links.

You can review IPv6 enablement in the ISP portal on the Configuration > Assets page, under the Configuration tab. Each cache node has an "Enable IPv6" option.
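
If you keep your own record of node settings, a simple consistency check along these lines (Python, with hypothetical node names and values) can catch a mixed configuration before it affects serving:

    # Hypothetical per-node state, mirroring the "Enable IPv6" option in the portal.
    ipv6_enabled = {"node-a": True, "node-b": True, "node-c": False}

    # Within one GNL, the setting should be identical on every node.
    if len(set(ipv6_enabled.values())) > 1:
        print("Mixed IPv6 settings in this GNL:", ipv6_enabled)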

Physical location of nodes in a GNL

The nodes in a GNL do not all have to be physically located in the same facility. If nodes in a GNL are hosted in different facilities, we advise that network latency between them should be 10ms or less. The nodes must have sufficient network capacity towards the users they will serve; if this traffic crosses your core network, you should plan for it accordingly.
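
As a rough aid when planning a split GNL, you could compare measured round-trip times against that guideline, as in this small Python sketch; the facility names and values are hypothetical, and how you measure latency is up to you.

    # Hypothetical measured round-trip times (ms) between facilities hosting GNL nodes.
    rtt_ms = {("fac-1", "fac-2"): 4.2, ("fac-1", "fac-3"): 12.8, ("fac-2", "fac-3"): 9.7}

    MAX_RTT_MS = 10.0  # advised upper bound between nodes in the same GNL
    for (a, b), rtt in rtt_ms.items():
        if rtt > MAX_RTT_MS:
            print(f"{a} <-> {b}: {rtt} ms exceeds the advised {MAX_RTT_MS} ms")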

You may choose to deploy the nodes in a GNL in the same facility, in different facilities in the same metro, or even in different metros. The details of the deployment will depend on your facilities and the design of your core network.

Node failure in a GNL

A GGC node may become unreachable, undergo scheduled maintenance, or be taken offline for planned upgrades.

When this happens, users will be served from other Google data locations where their prefixes are advertised, the requested data is available, and sufficient serving capacity exists. This includes:

  • other nodes in the same GNL
  • other GGC nodes in your network where we see the prefixes
  • GGC nodes in other ASNs where we see the prefixes
  • Google core data locations

For a short outage, traffic will often shift to Google core data locations, because requested data is not yet available on other nodes in the same GNL. Over time, traffic will return to the GNL, as cached data once again becomes available locally.

Redundancy expectations

The GGC application is designed with redundancy at many levels, including serving locations, servers, network and physical storage. 

GGC deployments are sized according to current and projected user demand. Over-provisioning is not a supported redundancy strategy.

Cache-fill in a multi-node deployment

We recommend that peer-to-peer cache-fill is enabled between all nodes in your network. See the article on cache-fill for more details.
