Thursday, September 24, 2020

Nested VirtualBox VM inside Google Compute Engine

Though running a guest VM inside a Google Compute Engine instance raises concerns about performance, there are situations where it proves useful. In this post I am going to discuss how we can install an Ubuntu guest VM on a Google Compute Engine instance.

There are a few caveats though - we can install a KVM-compatible hypervisor only on Linux VM instances running on Haswell or newer processors. Also, Haswell-based processors are not available in all GCP regions - they are available only in certain regions in the US (such as us-central1) and Europe.

Windows compute engine instances do not support nested virtualization.

We need to create compute engine instances that support nested virtualization from disk images tagged with a specific license, namely:

"--licenses https://compute.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx";

With these restrictions in mind, let's proceed with the following steps to launch a compute engine instance which will host an Ubuntu guest VM on top of Oracle VirtualBox.

1) Log into the GCP console and launch Cloud Shell.

2) Set the project:

gcloud config set project [PROJECT]

3) Create a disk from the Ubuntu image family, tagging it with the above license as shown:

$ gcloud compute disks create virtualization-tagged-disk --image-project ubuntu-os-cloud --image-family ubuntu-1804-lts --zone us-central1-a --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

It might ask for authorization - click on "Authorize".
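
Optionally, we can verify that the disk has been tagged with the license (the zone matches the one used above):

gcloud compute disks describe virtualization-tagged-disk --zone us-central1-a --format="value(licenses)"

The output should include the enable-vmx license URL.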



This will create a disk called "virtualization-tagged-disk". We can launch an Ubuntu VM based on this disk, install Oracle VirtualBox on that compute engine instance, and then launch a guest VM inside VirtualBox.

Note: We have to launch the instance in a GCP region where Haswell or newer processors (the N1 machine family) are available.

4) Select the newly created disk as the instance's boot disk:



5) Launch an instance using the disk, choosing an appropriate region and processor:
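
Alternatively, the instance can be launched from Cloud Shell - a sketch, where the instance name "nested-vm-host" and the machine type are placeholders:

gcloud compute instances create nested-vm-host \
    --zone us-central1-a \
    --machine-type n1-standard-4 \
    --min-cpu-platform "Intel Haswell" \
    --disk name=virtualization-tagged-disk,boot=yes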


Once the compute engine instance has started, we can SSH into it and set up a desktop environment so that we can access it via RDP.
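
Before going further, it is worth verifying that nested virtualization is actually enabled - the vmx CPU flag should be visible inside the instance:

grep -cw vmx /proc/cpuinfo

A non-zero count confirms that the processor exposes the VMX extensions to the instance.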

Setting up RDP and installing Oracle VirtualBox:

Set up RDP on the compute engine:

1) Once inside the compute engine instance, we can run the following series of commands to set up the RDP server environment and change the password for the root user, then exit and connect back to the box via a remote RDP client:
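
A minimal sketch of such a setup, assuming xrdp as the RDP server:

sudo apt-get update
sudo apt-get install -y xrdp     # install the RDP server
sudo systemctl enable xrdp       # make xrdp start on boot
sudo systemctl start xrdp
sudo passwd root                 # set a root password to log in with over RDP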


2) Connect to the box via a remote desktop client and install VirtualBox:

curl -O https://download.virtualbox.org/virtualbox/6.1.14/virtualbox-6.1_6.1.14-140239~Ubuntu~bionic_amd64.deb
apt install ./virtualbox-6.1_6.1.14-140239~Ubuntu~bionic_amd64.deb
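
If the install succeeded, the CLI should report the version we just installed:

VBoxManage --version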




Note: For a better desktop experience, install xfce4 and its goodies:

apt-get install xfce4 xfce4-goodies
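
With VirtualBox installed, the Ubuntu guest can be created from the VirtualBox GUI in the RDP session, or from the command line. Below is a minimal VBoxManage sketch - the VM name, disk size and the ISO path (assumed to be already downloaded) are illustrative:

VBoxManage createvm --name ubuntu-guest --ostype Ubuntu_64 --register
VBoxManage modifyvm ubuntu-guest --memory 2048 --cpus 2
VBoxManage createmedium disk --filename ~/ubuntu-guest.vdi --size 20480   # 20 GB virtual disk
VBoxManage storagectl ubuntu-guest --name SATA --add sata
VBoxManage storageattach ubuntu-guest --storagectl SATA --port 0 --device 0 --type hdd --medium ~/ubuntu-guest.vdi
VBoxManage storageattach ubuntu-guest --storagectl SATA --port 1 --device 0 --type dvddrive --medium ~/ubuntu-18.04.iso
VBoxManage startvm ubuntu-guest   # boots the installer in a GUI window inside the RDP session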


Tuesday, September 15, 2020

Exploring envoyproxy

With the adoption of micro-services, we need to tackle a host of issues associated with remote calls and networking - because what used to be an in-process function call now becomes an RPC to a service that first needs to be discovered. Service discovery has its own issues - among others, the most important is being able to discover only those service instances that are active. Once services are discovered, we need to spread requests uniformly among the discovered service instances. Traffic encryption becomes yet another issue we need to handle once a call goes out over the wire from one micro-service to another.

Another very obvious requirement in a cluster of micro-services is the need to monitor and trace requests. Without this requirement taken care of, it is difficult to figure out how requests are executing across the distributed network.

While there are still many more issues that we need to take care of in a micro-services setup - what should our high-level approach be to tackle them?

Among others, one approach could be to develop client-side libraries which handle service discovery, load balancing (retry, timeout, circuit breaking, etc.) and more. This was the approach the Netflix Eureka/Ribbon/Zuul stack proposed: the Eureka client acted as a service proxy, the Eureka server acted as a service registry, Ribbon provided client-side load balancing, and Zuul handled request routing.

While the library approach works, there are a few things to consider before we embark on the library journey:

- Business code now gets entangled with infrastructure code.

- We are sticking our necks out either as a one-language shop, or we are undertaking the quite complicated job of developing and maintaining client libraries in multiple programming languages.

Is there an easier way to tackle the issues associated with micro-services? Yes! That is what out-of-process proxies like envoy and linkerd provide. Already existing proxies like NGINX and HAProxy are adding support for capabilities similar to those provided by envoy proxy. We will discuss linkerd in a later post - here we will talk about envoy proxy.

Envoy's website describes envoy as an edge and service proxy - this means we can use envoy for north-south as well as east-west traffic. Envoy is written in modern C++, and most request handling runs concurrently in lock-free code. This makes envoy very fast.

Envoy has a lot of goodies to offer - its out-of-process architecture straight away boosts developer productivity. The network and its myriad of associated issues go away instantly, letting developers focus on their business problems. Envoy being out of process provides another huge advantage - we can develop our services in any language of our choice and necessity.

As mentioned earlier, envoy provides many load-balancing features - automatic request retries, circuit breaking, request timeouts, request shadowing, rate limiting, etc.

With envoy's traffic routing features, we can easily do rolling upgrades of services, blue/green and canary deployments, etc.

Envoy provides wire level observability of request traffic and native support for distributed tracing.

Envoy supports HTTP/1.1, HTTP/2 and gRPC - it can transparently translate between HTTP/1.1 and HTTP/2.

One very important aspect of envoy proxy is that, while envoy can be configured statically, it also provides robust APIs for dynamic configuration. What is more, envoy can be patched and upgraded without shutting it down, via what is called "Hot-Restart". We will configure some of envoy's features in upcoming posts - but before that, the following concepts about envoy will help.

Envoy proxy concepts:

Downstream: A downstream host connects to Envoy, sends requests, and receives responses.

Upstream: An upstream host receives connections and requests from Envoy and returns responses.

Listeners: These are the addresses where the envoy process listens for incoming connections, for example 0.0.0.0:9090. There can be multiple listeners in an envoy process.

Filter chains: Each listener in an envoy process can be configured with filter chains, where each chain consists of one or more filters. A filter chain is selected for an incoming request based on matching criteria.

Routes: Based on matching criteria, requests are delegated to back-end clusters for handling.

Clusters: Named collections of back-ends, called endpoints. Requests are load-balanced among a cluster's endpoints.

Endpoints: Endpoints are the delegates which handle requests. They are part of the cluster definition.
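
To tie these concepts together, here is a minimal sketch of a static envoy configuration (using the v3 config API). The listener port, the route and cluster names, and the back-end host are illustrative placeholders:

static_resources:
  listeners:
  - name: main_listener                  # listener: where envoy accepts downstream connections
    address:
      socket_address: { address: 0.0.0.0, port_value: 9090 }
    filter_chains:                       # filter chain: here, a single HTTP connection manager
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:                  # routes: match every path and send to the cluster below
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_backend }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: service_backend                # cluster: a named collection of endpoints
    connect_timeout: 1s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service_backend
      endpoints:                         # endpoints: the upstream hosts that handle requests
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend.example.com, port_value: 8080 }

Saving this as envoy.yaml and running "envoy -c envoy.yaml" would proxy requests arriving on port 9090 to the back-end cluster, load balancing round-robin across whatever addresses the DNS name resolves to.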

In the next post, we will install envoy and look at traffic routing. Stay tuned.