uffizzi_controller

Uffizzi Resource Controller

A smart proxy service that handles requests from Uffizzi App to the Kubernetes API

This application connects to a Kubernetes (k8s) Cluster to provision Uffizzi users’ preview deployment workloads on their behalf. While it provides a documented REST API for anyone to use, it’s most valuable when used with the open-source uffizzi_app.

Uffizzi Overview

Uffizzi is the Full-stack Previews Engine that makes it easy for your team to preview code changes before merging—whether frontend, backend or microservice. Define your full-stack apps with a familiar syntax based on Docker Compose, then Uffizzi will create on-demand test environments when you open pull requests or build new images. Preview URLs are updated when there’s a new commit, so your team can catch issues early, iterate quickly, and accelerate your release cycles.

Getting started with Uffizzi

The fastest and easiest way to get started with Uffizzi is via the fully hosted version available at https://uffizzi.com, which includes free plans for small teams and qualifying open-source projects.

Alternatively, you can self-host Uffizzi via the open-source repositories available here on GitHub. The remainder of this README is intended for users interested in self-hosting Uffizzi or for those who are just curious about how Uffizzi works.

Uffizzi Architecture

Uffizzi consists of the following components:

To host Uffizzi yourself, you will also need the following external dependencies:

Controller Design

This uffizzi_controller acts as a smart and secure proxy for uffizzi_app and is designed to limit the access that uffizzi_app requires to the k8s cluster. It accepts authenticated instructions from other Uffizzi components, then creates and manages the corresponding Resources through the cluster's API server. It is implemented in Golang to leverage the best officially supported Kubernetes API client.

The controller is a required supporting service for uffizzi_app and serves these purposes:

  1. Communicate deployment instructions from the Uffizzi interface to the designated Kubernetes cluster(s) via the native Golang API client
  2. Provide Kubernetes cluster information back to the Uffizzi interface
  3. Support a restricted and secure connection between the Uffizzi interface and the Kubernetes cluster

Example story: New Preview

Dependencies

This controller specifies custom Resources managed by popular open-source controllers:

You’ll want these installed within the Cluster managed by this controller.

Configuration

Environment Variables

You can specify these within credentials/variables.env for use with docker-compose and our Makefile. Some of these may have defaults within configs/settings.yml.
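For illustration, a minimal credentials/variables.env might look like the sketch below. CONTROLLER_LOGIN and CONTROLLER_PASSWORD are referenced by the curl examples later in this document; any other variables your setup needs (and their defaults) come from configs/settings.yml, so treat this as a starting point rather than an exhaustive list.

```shell script
# credentials/variables.env -- illustrative sketch, not an exhaustive list.
# Basic-auth credentials expected by the controller (used by the curl
# examples under "External Testing"); the values here are placeholders.
CONTROLLER_LOGIN=controller
CONTROLLER_PASSWORD=change-me
```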

Kubernetes API Server Connection

This process expects to be provided a Kubernetes Service Account within a Kubernetes cluster. You can emulate this with these four pieces of configuration:

Once you’re configured to connect to your cluster (using kubectl et al.), you can get the values for these two environment variables from the output of kubectl cluster-info.

Add those two environment variables to credentials/variables.env.
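As a sketch of this step (the exact variable names depend on your configuration, so the comments below only describe what to look for):

```shell script
# Print the control plane endpoint for the current kubectl context.
# The API server host and port in the printed URL are the values to
# copy into the two environment variables in credentials/variables.env.
kubectl cluster-info
# e.g. "Kubernetes control plane is running at https://<cluster-endpoint>"
```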

The authentication token must come from the cluster’s cloud provider, e.g. gcloud config config-helper --format="value(credential.access_token)"

The server certificate must also come from the cluster’s cloud provider, e.g. gcloud container clusters describe uffizzi-pro-production-gke --zone us-central1-c --project uffizzi-pro-production-gke --format="value(masterAuth.clusterCaCertificate)" | base64 --decode

Write these two values to credentials/token and credentials/ca.crt; the make commands and docker-compose will copy them into place for you.
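Putting the two gcloud commands above together, producing those files could look like the following sketch (for a GKE cluster; substitute your own cluster name, zone, and project):

```shell script
# Write the Service Account token and cluster CA certificate where the
# make commands and docker-compose expect to find them.
gcloud config config-helper --format="value(credential.access_token)" \
  > credentials/token
gcloud container clusters describe uffizzi-pro-production-gke \
  --zone us-central1-c --project uffizzi-pro-production-gke \
  --format="value(masterAuth.clusterCaCertificate)" | base64 --decode \
  > credentials/ca.crt
```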

Shell

While developing, we most often run the controller within a shell on our workstations. docker-compose will set up this shell and mount the current working directory within the container so you can use other editors from outside. To log into the Docker container, just run:

```shell script
make shell
```
All commands in this "Shell" section should be run inside this shell.

Compile
After making any desired changes, compile the controller:
```shell script
go install ./cmd/controller/...
```

Execute

```shell script
/go/bin/controller
```

Test Connection to Cluster
Once you've configured access to your k8s Cluster (see above), you can test `kubectl` within the shell:
```shell script
kubectl --token=`cat /var/run/secrets/kubernetes.io/serviceaccount/token` --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt get nodes
```

Tests, Linters

In the docker shell:

make test
make lint
make fix_lint

External Testing

Once the controller is running on your workstation, you can make HTTP requests to it from outside of the shell.

Ping controller

```shell script
curl localhost:8080 \
     --user "${CONTROLLER_LOGIN}:${CONTROLLER_PASSWORD}"
```


Remove all workloads from an existing environment

This will remove the specified Preview's Namespace and all Resources within it.
```shell script
curl -X POST localhost:8080/clean \
     --user "${CONTROLLER_LOGIN}:${CONTROLLER_PASSWORD}" \
     -H "Content-Type: application/json" \
     -d '{ "environment_id": 1 }'

Online API Documentation

Available at http://localhost:8080/docs/

Installation within a Cluster

Functional usage within a Kubernetes Cluster is beyond the scope of this document. For more, join us on Slack or contact us at info@uffizzi.com.

That said, we’ve included a Kubernetes manifest to help you get started at infrastructure/controller.yaml. Review it and change relevant variables before applying this manifest. You’ll also need to install and configure the dependencies identified near the top of this document.
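As a rough sketch, once you have reviewed the manifest and adjusted its variables, applying it could look like:

```shell script
# Apply the controller manifest to the cluster targeted by your
# current kubectl context.
kubectl apply -f infrastructure/controller.yaml
```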