The Supergiant Toolkit

3 apps, 1 mission.

Control, Capacity, and Analyze are built to simplify the administration and operation of Kubernetes. Each application is rooted in a desire to blaze better trails in DevOps, to put the power of enterprise systems in the hands of any aspiring team or tinkerer.

Get Started


How does Supergiant Capacity do it?


For a high-level view of the overall SG Capacity architecture, please consult the following chart:


SG Capacity performs its tasks by applying several principles to the primitives it introduces. It relies on a few conceptual "components" in order to autoscale, some of which are Golang packages or structs. Each component is outlined below, followed by an overview of its role in the behavior of SG Capacity.

Community Rocks Socks

When these things are explained well, it is easy for more people to use, or even contribute to, the project, so feel free to suggest improvements to the docs. :grin:

Kubes

In SG, any and all K8s clusters are fondly referred to as "kubes." There is no logical distinction between a kube and a cluster other than that SG tools often contain some metadata about the cluster which helps each component do its job. The word "kube" is not new and not proprietary, and it is not used to refer to any unique properties that might be distinct among any given cluster.

Providers

The Provider is a logical representation of the cloud used to host the K8s cluster that SG Capacity works within. It allows the Kubescaler to perform the same actions in any Kube regardless of the underlying cloud. Examples of providers include AWS, DigitalOcean, and GCE, each of which requires a unique approach to performing various actions. These unique approaches must be accommodated with distinct code and configuration, and the Provider exists to do exactly that: it allows the same code to drive multiple cloud implementations.

In some ways, it can be thought of as a translator. The SG Capacity Kubescaler might say "I wish to scale down a node," and a command would be sent to do so. The Provider knows how this should be done on the cloud SG Capacity lives on, and it will be able to take care of this task.
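The "translator" idea can be sketched as a Go interface. This is a hypothetical illustration, not SG Capacity's actual interface: the method names, the `awsProvider` type, and the `scaleDown` helper are all assumptions made for the example.

```go
package main

import "fmt"

// Provider is a hypothetical sketch of the abstraction described above:
// the Kubescaler talks only to this interface, never to a cloud SDK.
type Provider interface {
	// CreateMachine provisions a new VM of the given type on the cloud.
	CreateMachine(machineType string) (machineID string, err error)
	// DeleteMachine terminates the VM with the given provider ID.
	DeleteMachine(machineID string) error
}

// awsProvider is a stand-in implementation; a real one would call the AWS API.
type awsProvider struct{ region string }

func (p awsProvider) CreateMachine(machineType string) (string, error) {
	// A real implementation would call EC2 here and return the instance ID.
	return "i-0123456789abcdef0", nil
}

func (p awsProvider) DeleteMachine(machineID string) error {
	// A real implementation would terminate the EC2 instance here.
	return nil
}

// scaleDown stays cloud-agnostic: "I wish to scale down a node" becomes a
// single call, and the Provider knows how to carry it out on its cloud.
func scaleDown(p Provider, machineID string) error {
	return p.DeleteMachine(machineID)
}

func main() {
	p := awsProvider{region: "us-west-1"}
	if err := scaleDown(p, "i-0123456789abcdef0"); err != nil {
		fmt.Println("scale down failed:", err)
	}
}
```

Supporting a new cloud then means writing one new implementation of the interface, with no changes to the scaling logic itself.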

Kubescaler

The Kubescaler is the main engine responsible for adding and removing workers, as well as polling the cluster to see resource needs and utilization. It utilizes other components, such as the Kubescaler Config, the UserDataFile (if needed), and the Provider, to carry out its tasks.

The Kubescaler will periodically poll the cluster to ask if there are any pending pods, and if there are pending pods but no worker for them to schedule to, it will create a new worker that best fits the resource requirements of the pods. In addition, if there are empty workers, the Kubescaler will delete them, which terminates the empty machine.
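The poll-and-decide loop above can be condensed into a small decision function. This is a simplified sketch under stated assumptions: the counts of pending pods and empty workers are taken as already gathered from the cluster, and the real Kubescaler additionally inspects pod resource requests to pick a best-fit machine type.

```go
package main

import "fmt"

// scaleDecision sketches the Kubescaler's periodic check: create a worker
// when pods are pending with nowhere to schedule, delete a worker when one
// sits empty, and otherwise do nothing. min and max bound the worker count.
func scaleDecision(pendingPods, emptyWorkers, workers, min, max int) string {
	switch {
	case pendingPods > 0 && workers < max:
		return "create worker" // pending pods cannot schedule anywhere
	case emptyWorkers > 0 && workers > min:
		return "delete empty worker" // terminates the idle machine
	default:
		return "no action"
	}
}

func main() {
	fmt.Println(scaleDecision(2, 0, 1, 1, 3)) // pending pods, room to grow
	fmt.Println(scaleDecision(0, 1, 2, 1, 3)) // an idle worker to reclaim
}
```

Note how the min/max bounds gate both branches; those correspond to the `workersCountMin` and `workersCountMax` settings shown in the config below.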

The Kubescaler also allows a measure of flexibility, as outlined below:

Kubescaler Config

There are other configuration files and options, but the Kubescaler Config is the most important and the only one specific to SG Capacity. A kubescaler.config file is used to configure the overall behavior. Here's an example:

    {
        "clusterName": "tentakube",
        "kubeAPIHost": "",
        "kubeAPIPort": "443",
        "kubeAPIUser": "thekraken",
        "kubeAPIPassword": "icrusheverything",
        "machineTypes": [],
        "masterPrivateAddr": "",
        "providerName": "aws",
        "provider": {
            "awsIAMRole": "kubernetes-node",
            "awsImageID": "ami-cc0900ac",
            "awsKeyID": "ABOAT2Y2ISYANICEPCAR",
            "awsKeyName": "captns-key",
            "awsRegion": "us-west-1",
            "awsSecretKey": "f8YIfCheeVyOuDYASEeAtHisAwyoUk3Y35WinGaT",
            "awsSecurityGroups": "sg-59aff721",
            "awsSubnetID": "subnet-7f8daa24",
            "awsTags": "KubernetesCluster=TentaKube,Creator=CapacityController",
            "awsVolSize": "100",
            "awsVolType": "gp2"
        },
        "sshPubKey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCweEek5LDo9eDE4YLwxiAyZ8YoUgettHepoiNT... ...3OuUThFRRvaL1vnydmrYtg0QYckGlu95yhwAi0Gz9seVD7lKsP8Cq78Og6KoQW467T0EUiQlmwmPhgg4xeypw== iluvmuffins@Porkbeards-MacBook-Pro.local",
        "workersCountMax": 3,
        "workersCountMin": 1,
        "paused": true
    }

The config informs the Kubescaler of all the things it needs to know in order to operate as desired. This includes everything from the Provider chosen for the Kube to the range of workers that can exist at any time.
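Since the config is plain JSON, it maps naturally onto a Go struct. The sketch below is an assumption for illustration: the struct covers only a subset of the keys shown above, and the real struct in SG Capacity may name its fields differently.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Config mirrors part of the kubescaler.config shape shown above.
// Field tags match the JSON keys; the full config has more fields.
type Config struct {
	ClusterName     string            `json:"clusterName"`
	ProviderName    string            `json:"providerName"`
	Provider        map[string]string `json:"provider"`
	WorkersCountMin int               `json:"workersCountMin"`
	WorkersCountMax int               `json:"workersCountMax"`
	Paused          bool              `json:"paused"`
}

// parseConfig decodes a raw kubescaler.config payload.
func parseConfig(raw []byte) (Config, error) {
	var cfg Config
	err := json.Unmarshal(raw, &cfg)
	return cfg, err
}

func main() {
	raw := []byte(`{
		"clusterName": "tentakube",
		"providerName": "aws",
		"provider": {"awsRegion": "us-west-1"},
		"workersCountMin": 1,
		"workersCountMax": 3,
		"paused": true
	}`)
	cfg, err := parseConfig(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s on %s: %d-%d workers\n",
		cfg.ClusterName, cfg.ProviderName, cfg.WorkersCountMin, cfg.WorkersCountMax)
}
```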

Workers

A worker is a Kubernetes node with some extra metadata that is useful to SG Capacity. For example, this is the Worker struct:

type Worker struct {
    // ClusterName is the Kubernetes cluster name.
    ClusterName string `json:"clusterName"`
    // MachineID is a unique ID of the provider's virtual machine.
    // required: true
    MachineID string `json:"machineID"`
    // MachineName is a human-readable name of the virtual machine.
    MachineName string `json:"machineName"`
    // MachineType is the type of the virtual machine (e.g. 't2.micro' for AWS).
    MachineType string `json:"machineType"`
    // MachineState represents the virtual machine's state.
    MachineState string `json:"machineState"`
    // CreationTimestamp is the time when this machine was created.
    CreationTimestamp time.Time `json:"creationTimestamp"`
    // Reserved is a parameter that is used to prevent downscaling of the worker.
    Reserved bool `json:"reserved"`
    // NodeName is the name of the Kubernetes node that runs on top of this machine.
    NodeName string `json:"nodeName"`
    // NodeLabels are the labels of the Kubernetes node that runs on top of this machine.
    NodeLabels map[string]string `json:"nodeLabels,omitempty"`
}

Some of the most important metadata above includes MachineState, which ensures that SG Capacity does not disrupt machines that are pending or running during its periodic worker check. Machines that have failed will, however, be deleted.

CreationTimestamp is also vital, as workers have a "grace period" wherein SG Capacity will not delete them so that the worker has time to initialize properly and accept pods.

Reserved denotes whether or not the worker is available for deletion at all during SG Capacity's scanning of workers. Reserved workers will not be deleted or otherwise touched by SG Capacity unless, for some reason, the MachineState specifies a status that permits deletion, such as terminated (for more information, see Configuration).

Continue Your Journey?

Ready to install SG Capacity and see it work in person?


