Cloud computing allows hyperscale providers to divide a huge pool of hardware across many customers, which should be more efficient than many small data centers. One source of inefficiency it addresses is that each application must provision for its peak load, and that capacity sits idle at non-peak times. This means the average utilization of server hardware is terribly low, maybe 10-20%. However, when a huge number of different workloads are consolidated together, their peaks should be less correlated, so the shared pool can be sized for the combined peak rather than the sum of the individual peaks, which increases average utilization (James Hamilton has written about this a number of times). As a cloud customer, you reap the benefit by buying capacity only when you need it.
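To make that concrete, here is a minimal Python sketch (my own illustration, using a made-up load model) comparing the utilization of per-service provisioning against one shared pool sized for the combined peak:

```python
import random

random.seed(0)
N_SERVICES = 100
HOURS = 24 * 30  # one month of hourly samples

# Hypothetical load model: each service idles at 10 units and peaks at 100
# units for two hours a day, at an independent random time of day.
def service_load(peak_hour, hour):
    return 100.0 if (hour % 24) in (peak_hour, (peak_hour + 1) % 24) else 10.0

peak_hours = [random.randrange(24) for _ in range(N_SERVICES)]
loads = [[service_load(p, h) for p in peak_hours] for h in range(HOURS)]

# Separate data centers: each service provisions for its own peak (100 units).
separate_capacity = N_SERVICES * 100.0
# Shared pool: provision once, for the peak of the *sum* of all loads.
pooled_capacity = max(sum(hour) for hour in loads)

average_load = sum(sum(hour) for hour in loads) / HOURS
print(f"separate utilization: {average_load / separate_capacity:.1%}")
print(f"pooled utilization:   {average_load / pooled_capacity:.1%}")
```

With this toy model the pooled utilization comes out several times higher than the separate one, which is the whole argument for consolidation.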
This week, Google announced a new "machine type" (E2) that takes this further. They know that adapting applications to effectively use auto-scaling is hard. Instead, Google is going to oversell their hardware. You buy cores and RAM as usual, and Google promises that nearly all the time, they will be there when your application actually wants to use them. However, on rare occasions they might not be, and your application will pause while Google shuffles things around to make them available. This is a brilliant idea, and I'm surprised it has taken this long for a cloud provider to oversell their hardware. Providers are in a much better position to do the necessary "bin packing" than customers, since they can see all the workloads. This seems like a great way to improve the overall efficiency of our global computing infrastructure, and I expect we will see more overselling in the cloud in the future. (In fact, this already happens with higher-level services, such as serverless platforms like AWS Lambda or Google Cloud Run.)
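As a back-of-the-envelope illustration of why overselling is usually safe (this is my own toy model, not anything Google has published), here is a sketch estimating how often an oversold host would come up short, given an assumed fraction of time each sold vCPU is busy:

```python
import random

random.seed(1)
PHYSICAL_CORES = 64
VCPUS_SOLD = 96        # hypothetical oversell: 1.5 vCPUs per physical core
P_VCPU_BUSY = 0.5      # assume each sold vCPU is busy 50% of the time
TRIALS = 100_000

shortfalls = 0
for _ in range(TRIALS):
    # Sample how many of the sold vCPUs want a physical core right now.
    demand = sum(1 for _ in range(VCPUS_SOLD) if random.random() < P_VCPU_BUSY)
    if demand > PHYSICAL_CORES:
        shortfalls += 1

print(f"samples where demand exceeds physical capacity: {shortfalls / TRIALS:.4%}")
```

With these assumptions contention is rare; the interesting (and undisclosed) part is what the hypervisor does in that moment, which is presumably when the "shuffling" pause happens.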
I really would like to know more of the technical details of how exactly this works, and what kind of "worst case" behaviour might happen when using an oversold VM. Unfortunately, we don't have much data to go on. The only description in the announcement is that "applications might see marginally increased response times once per 1,000 requests. For the vast majority of applications, including Google’s own latency-sensitive services, this difference is lost in the noise of other performance variations such as Java garbage collection events, I/O latencies and thread synchronization." Both the magnitude of these pauses (milliseconds of extra latency) and their frequency (pauses per unit of time) are left extremely vague. Part of this may be that Google just doesn't know yet, since there aren't enough users of these machine types. The other reason is probably that Google considers this a competitive advantage.
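In the absence of published numbers, one way to get data would be to measure it yourself. Here is a rough sketch (a generic scheduling-jitter measurement, nothing E2-specific) that times a fixed chunk of CPU work and reports tail percentiles; a "shuffle things around" pause should show up as a fat tail:

```python
import time

SAMPLES = 20_000
WORK_ITERATIONS = 5_000

durations_us = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    x = 0
    for i in range(WORK_ITERATIONS):
        x += i  # fixed CPU-bound work; any pause shows up as extra wall time
    durations_us.append((time.perf_counter() - start) * 1e6)

durations_us.sort()
for pct in (50, 99, 99.9, 99.99):
    idx = min(len(durations_us) - 1, int(len(durations_us) * pct / 100))
    print(f"p{pct}: {durations_us[idx]:.0f} us")
print(f"max: {durations_us[-1]:.0f} us")
```

Running this side by side on an E2 and an N2 instance for long enough would at least put rough bounds on the frequency and magnitude of the pauses.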
We also need to look at the cost. Google's pricing depends on what fraction of the month you use the resources (sustained use discounts), or you can buy cores and RAM in advance (committed use). To make an apples-to-apples comparison, I computed the average cost of a CPU core-hour as a function of utilization over the month. The price trends for RAM are identical, just with the prices scaled appropriately.
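For reference, this is roughly the calculation, sketched in Python. The on-demand rates and the discount tiers below are placeholders to show the shape of the computation, not Google's actual price list:

```python
HOURS_PER_MONTH = 730

# Placeholder tiered sustained-use schedule: each quarter of the month is
# billed at a progressively lower multiple of the base rate.
SUSTAINED_USE_TIERS = ((0.25, 1.0), (0.25, 0.8), (0.25, 0.6), (0.25, 0.4))

def sustained_use_cost(base_rate, utilization):
    """Monthly cost of one core used `utilization` of the month, with tiers."""
    hours_wanted = utilization * HOURS_PER_MONTH
    cost = used = 0.0
    for fraction, multiplier in SUSTAINED_USE_TIERS:
        tier_hours = min(hours_wanted - used, fraction * HOURS_PER_MONTH)
        if tier_hours <= 0:
            break
        cost += tier_hours * base_rate * multiplier
        used += tier_hours
    return cost

def flat_cost(rate, utilization):
    """Monthly cost with no sustained-use discount (how E2 is priced)."""
    return rate * utilization * HOURS_PER_MONTH

N1_RATE = 0.040  # placeholder USD per core-hour, not the real price
E2_RATE = 0.030  # placeholder USD per core-hour, not the real price

for util in (0.25, 0.50, 0.75, 1.00):
    hours = util * HOURS_PER_MONTH
    n1 = sustained_use_cost(N1_RATE, util) / hours
    e2 = flat_cost(E2_RATE, util) / hours
    print(f"utilization {util:.0%}: N1 ${n1:.4f}/core-hour, E2 ${e2:.4f}/core-hour")
```

Swapping in the real published rates and plotting the output against utilization gives the comparison discussed below.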
Overall, E2 machines are cheaper, no matter how much of the month you use them. One notable difference is that the price does not depend on utilization (there is no sustained use discount). As a result, if you use a core for 100% of the month, E2's advantage is small (1% cheaper than N1, 14% cheaper than N2). This is a bit surprising: overselling should work better when I buy the resources for the entire month rather than dynamically. Maybe this simplifies the pricing, or maybe Google is encouraging customers to adopt auto-scaling. Whatever the reason, it means E2 machines are a clear win for things like auto-scaling Kubernetes clusters or periodic batch jobs, but not if you just buy a fixed capacity of machines. If you find yourself in that situation, you should be buying a one-year commitment. The break-even utilization for a one-year commitment on E2 is 63%: if you expect to use a machine, say, 70% of the time over the next year, you should pre-buy. This compares to a break-even point of 72% for N2 machines and 83% for N1, so the lower threshold makes commitments a bigger potential savings on E2.
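The break-even arithmetic is simple because E2 has no sustained-use discount: commit when your expected utilization exceeds the ratio of the committed rate to the on-demand rate. The rates below are hypothetical numbers chosen only to roughly reproduce the 63% figure above:

```python
E2_ON_DEMAND = 0.0300   # hypothetical USD per core-hour, not the real price
E2_COMMITTED = 0.0189   # hypothetical 1-year commitment rate, also made up

# With flat pricing, paying on demand at utilization u costs u * on_demand for
# the month, so the commitment pays off once u exceeds this ratio.
break_even = E2_COMMITTED / E2_ON_DEMAND
print(f"break-even utilization: {break_even:.0%}")  # ~63% with these rates
```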
One interesting difference not shown in this chart is that "custom" sized virtual machines cost the same as the pre-defined sizes with E2. I suspect this is because Google is going to oversell the hardware anyway, so they don't care that the sizes might be uneven. With standard machine types, however, custom sized VMs cause more "fragmentation" of unused resources across the fleet, so there has to be some additional cost.
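To see why uneven sizes matter, here is a toy first-fit packing sketch (my own guess at the mechanism, not Google's actual scheduler): uniform 4-core VMs fill 32-core hosts exactly, while a mix of made-up custom sizes leaves cores stranded.

```python
def pack(vm_sizes, host_cores=32):
    """First-fit-decreasing packing; returns the free cores left on each host."""
    hosts = []
    for size in sorted(vm_sizes, reverse=True):
        for i, free in enumerate(hosts):
            if free >= size:
                hosts[i] -= size
                break
        else:
            hosts.append(host_cores - size)  # no host fits: open a new one
    return hosts

uniform = [4] * 64                        # 64 standard 4-core VMs
custom = [3, 5, 7, 6, 9, 11, 4, 13] * 8   # 64 odd-sized custom VMs (made up)

for name, sizes in (("uniform", uniform), ("custom", custom)):
    leftover = pack(sizes)
    print(f"{name}: {len(leftover)} hosts, {sum(leftover)} cores stranded")
```

If the hardware is oversold, stranded cores matter less, since Google can sell them again anyway, which would explain why they can drop the custom-size premium.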