Team

A Team represents a multi-tenancy boundary in Butler, grouping users, clusters, and resource quotas under a single organizational unit.

API Version: butler.butlerlabs.dev/v1alpha1
Scope: Cluster
Short Name: tm

Description

Team is the primary multi-tenancy resource in Butler. Each Team owns a dedicated namespace (team-{name}) where its TenantClusters, ProviderConfigs, and other namespaced resources are created. Teams control access through user and group memberships with role-based permissions, enforce resource quotas across all clusters, and define default cluster configurations.

The Team controller reconciles the following:

  • Creates and manages the team namespace
  • Sets up RBAC (Roles and RoleBindings) for team members
  • Tracks resource usage across all TenantClusters in the team
  • Evaluates quota limits and sets conditions when limits are breached
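
As a sketch of the RBAC step, the controller might generate a RoleBinding like the following in the team namespace for an operator member. The object names, label scheme, and member email here are illustrative, not Butler's actual generated resources:

```yaml
# Hypothetical RoleBinding for an operator member of team "platform-team".
# Names are an assumed convention, not Butler's documented output.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: platform-team-operator       # illustrative naming scheme
  namespace: team-platform-team      # the team's namespace (team-{name})
subjects:
  - kind: User
    name: sre@company.com            # placeholder member email
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: butler-team-operator         # hypothetical per-role Role name
  apiGroup: rbac.authorization.k8s.io
```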

Spec

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| displayName | string | No | | Human-readable team name |
| description | string | No | | Team description |
| access | TeamAccess | Yes | | Team membership configuration |
| resourceLimits | TeamResourceLimits | No | | Limits on team resource consumption |
| providerConfigRef | LocalObjectReference | No | | Default provider for team clusters |
| clusterDefaults | ClusterDefaults | No | | Default values for new TenantClusters |

TeamAccess

| Field | Type | Required | Description |
|---|---|---|---|
| users | []TeamUser | No | Direct user memberships |
| groups | []TeamGroup | No | Group-based memberships (synced from IdP) |

TeamUser

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | Yes | | User email address |
| role | TeamRole | No | viewer | Role assigned to this user |

TeamGroup

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| name | string | Yes | | Group name (from identity provider) |
| role | TeamRole | No | viewer | Role assigned to all group members |
| identityProvider | string | No | | Name of the IdentityProvider CRD to match against. If omitted, matches groups from any configured provider. |

Roles

| Role | Description | Permissions |
|---|---|---|
| admin | Team administrator | Full access to team resources. Can manage members, create/delete clusters, configure addons, and modify team settings. |
| operator | Cluster operator | Create, update, and delete clusters and addons. Cannot manage team membership. |
| viewer | Read-only member | Read-only access to team resources. Can view clusters, addons, and team configuration. |

TeamResourceLimits

All fields are optional. When a limit is not set, no enforcement is applied for that dimension.

Cluster Limits

| Field | Type | Validation | Description |
|---|---|---|---|
| maxClusters | *int32 | min: 0 | Maximum number of TenantClusters the team can create |
| maxNodesPerCluster | *int32 | min: 0 | Maximum worker nodes per individual cluster |
| maxTotalNodes | *int32 | min: 0 | Maximum total worker nodes across all clusters |

Compute Limits

| Field | Type | Description |
|---|---|---|
| maxCPUCores | *resource.Quantity | Maximum total CPU cores across all clusters |
| maxMemory | *resource.Quantity | Maximum total memory across all clusters |
| maxStorage | *resource.Quantity | Maximum total storage across all clusters |

Per-Cluster Defaults

| Field | Type | Default | Description |
|---|---|---|---|
| defaultNodeCount | *int32 | 3 | Default number of worker nodes when not specified |
| defaultCPUPerNode | *resource.Quantity | | Default CPU per worker node |
| defaultMemoryPerNode | *resource.Quantity | | Default memory per worker node |

Feature Restrictions

| Field | Type | Description |
|---|---|---|
| allowedKubernetesVersions | []string | Kubernetes versions permitted for this team (e.g., ["1.29.x", "1.30.x"]). Empty means all versions allowed. |
| allowedProviders | []string | Infrastructure providers this team can use (e.g., ["harvester", "nutanix"]). Empty means all providers allowed. |
| allowedAddons | []string | Addons this team can install. Empty means all addons allowed. |
| deniedAddons | []string | Addons this team cannot install. Takes precedence over allowedAddons. |

ClusterDefaults

Default values applied to new TenantClusters created in this team when the corresponding field is not explicitly set.

| Field | Type | Description |
|---|---|---|
| kubernetesVersion | string | Default Kubernetes version |
| workerCount | int32 | Default number of worker nodes |
| workerCPU | resource.Quantity | Default CPU per worker |
| workerMemoryGi | int32 | Default memory per worker (GiB) |
| workerDiskGi | int32 | Default disk size per worker (GiB) |
| defaultAddons | []string | Addons installed by default on new clusters |

ProviderConfigRef

| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Name of the ProviderConfig in the team namespace |

Status

| Field | Type | Description |
|---|---|---|
| conditions | []Condition | Standard Kubernetes conditions (see below) |
| phase | string | Current team phase |
| namespace | string | Team's namespace (team-{name}) |
| observedGeneration | int64 | Last generation reconciled by the controller |
| clusterCount | int32 | Number of TenantClusters in the team |
| memberCount | int32 | Total number of team members (users + resolved group members) |
| resourceUsage | TeamResourceUsage | Current resource consumption |
| quotaStatus | string | Overall quota status |
| quotaMessage | string | Human-readable quota status message |

Conditions

| Type | Description |
|---|---|
| NamespaceReady | The team namespace has been created and is available |
| RBACReady | Roles and RoleBindings have been created for all team members |
| Ready | The team is fully reconciled and operational |
| QuotaExceeded | One or more resource limits have been breached |
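
For instance, a healthy, fully reconciled team might report conditions like the following. The reason values shown are illustrative, not Butler's documented reason strings:

```yaml
status:
  phase: Ready
  namespace: team-development
  conditions:
    - type: NamespaceReady
      status: "True"
      reason: NamespaceCreated   # hypothetical reason value
    - type: RBACReady
      status: "True"
      reason: BindingsCreated    # hypothetical reason value
    - type: Ready
      status: "True"
      reason: Reconciled         # hypothetical reason value
    - type: QuotaExceeded
      status: "False"
      reason: WithinLimits       # hypothetical reason value
```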

Phases

| Phase | Description |
|---|---|
| Pending | Team created, waiting for namespace and RBAC setup |
| Ready | Team is fully provisioned and operational |
| Terminating | Team is being deleted, cleanup in progress |
| Failed | An error occurred during reconciliation |

TeamResourceUsage

| Field | Type | Description |
|---|---|---|
| clusters | int32 | Number of active TenantClusters |
| totalNodes | int32 | Total worker nodes across all clusters |
| totalCPU | *resource.Quantity | Total CPU allocated across all clusters |
| totalMemory | *resource.Quantity | Total memory allocated across all clusters |
| totalStorage | *resource.Quantity | Total storage allocated across all clusters |
| clusterUtilization | int32 | Percentage of maxClusters used (0-100) |
| nodeUtilization | int32 | Percentage of maxTotalNodes used (0-100) |
| cpuUtilization | int32 | Percentage of maxCPUCores used (0-100) |
| memoryUtilization | int32 | Percentage of maxMemory used (0-100) |
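
As an illustrative snapshot: a team limited to maxClusters: 5, maxTotalNodes: 30, maxCPUCores: "120", and maxMemory: "480Gi" that runs 3 clusters of 4 nodes each, at 4 cores and 16Gi per node, might report:

```yaml
# Illustrative status fragment; values computed from the assumed limits above.
status:
  resourceUsage:
    clusters: 3
    totalNodes: 12            # 3 clusters x 4 nodes
    totalCPU: "48"            # 12 nodes x 4 cores
    totalMemory: "192Gi"      # 12 nodes x 16Gi
    clusterUtilization: 60    # 3 / 5 clusters
    nodeUtilization: 40       # 12 / 30 nodes
    cpuUtilization: 40        # 48 / 120 cores
    memoryUtilization: 40     # 192Gi / 480Gi
```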

Quota Status Values

| Value | Description |
|---|---|
| OK | All resource usage is within limits |
| Warning | Resource usage is approaching a limit (>80%) |
| Exceeded | One or more limits have been breached |

Printer Columns

| Column | Field | Description |
|---|---|---|
| Display Name | spec.displayName | Human-readable name |
| Phase | status.phase | Current lifecycle phase |
| Namespace | status.namespace | Team namespace |
| Clusters | status.clusterCount | Number of clusters |
| Quota | status.quotaStatus | Quota status (OK/Warning/Exceeded) |
| Age | metadata.creationTimestamp | Resource age |

Resource Usage Calculation

The Team controller computes resource usage by aggregating data from all TenantClusters in the team namespace:

  • CPU: Sum of machineTemplate.CPU * workers.replicas across all TenantClusters
  • Memory: Sum of machineTemplate.Memory * workers.replicas across all TenantClusters
  • Storage: Sum of machineTemplate.DiskSize * workers.replicas across all TenantClusters
  • Utilization percentages: Computed as (current usage / limit) * 100, only when the corresponding limit is set

When any utilization exceeds 100%, the controller sets the QuotaExceeded condition to True and updates quotaStatus to Exceeded.
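
Concretely, with two TenantClusters whose machine templates request 4 CPU / 16Gi and 8 CPU / 32Gi, running 3 and 2 workers respectively, the aggregation works out as follows (cluster names and field paths abbreviated for illustration):

```yaml
# cluster-a: machineTemplate {CPU: 4, Memory: 16Gi}, workers.replicas: 3
# cluster-b: machineTemplate {CPU: 8, Memory: 32Gi}, workers.replicas: 2
#
# totalCPU    = 4*3 + 8*2   = 28 cores
# totalMemory = 16*3 + 32*2 = 112Gi
#
# with resourceLimits.maxCPUCores: "40" set on the Team:
#   cpuUtilization = 28 / 40 * 100 = 70
```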

Quota Enforcement

Resource limits are enforced at two levels:

Webhook Enforcement (TenantCluster Admission)

A validating webhook intercepts TenantCluster create and update requests and checks them against the owning Team's resource limits:

  • CPU quota: Sums machineTemplate.CPU * replicas across all existing TenantClusters in the team (excluding self on update), adds the proposed cluster's CPU, and compares against maxCPUCores.
  • Memory quota: Same calculation using machineTemplate.Memory against maxMemory.
  • Storage quota: Same calculation using machineTemplate.DiskSize against maxStorage.
  • Cluster count: Checks current cluster count against maxClusters.
  • Nodes per cluster: Checks proposed worker replicas against maxNodesPerCluster.
  • Total nodes: Sums worker replicas across all clusters against maxTotalNodes.

The webhook also enforces ProviderConfig-level limits (maxClustersPerTeam, maxNodesPerTeam) when present.

If any limit would be exceeded, the webhook rejects the request with a descriptive error message.
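
As a worked example of the admission-time check: suppose a team with maxCPUCores: "24" already runs clusters totaling 16 cores, and a create request proposes 4 workers at 4 cores each:

```yaml
# Illustrative projection performed by the validating webhook:
# existing usage:   16 cores (summed across current TenantClusters)
# proposed cluster: 4 replicas * 4 CPU = 16 cores
# projected total:  16 + 16 = 32 cores  >  maxCPUCores (24)
# => the webhook rejects the create request
```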

Controller Enforcement (Continuous Monitoring)

The Team controller continuously monitors resource usage and updates status conditions. This catches drift caused by external changes and provides visibility into quota health via the QuotaExceeded condition and quotaStatus field.

Group Sync

Teams can reference identity provider groups to automatically grant access to group members. When a user authenticates via SSO, Butler resolves their group memberships and matches them against team group entries.

How Group Matching Works

  1. The user authenticates via an IdentityProvider (OIDC/SSO).
  2. Butler retrieves the user's group memberships from the identity provider:
    • Google Workspace: Groups fetched via Admin SDK (not included in OIDC token).
    • Microsoft Entra ID / Okta: Groups come from the groups claim in the OIDC token.
  3. Group names are normalized before matching:
    • Email-style groups: engineers@company.com is normalized to engineers.
    • LDAP DN format: CN=Engineers,OU=Groups,DC=company is normalized to engineers.
  4. The normalized group name is compared against spec.access.groups[].name.
  5. If the group entry specifies identityProvider, it only matches groups from that specific IdentityProvider CRD. Otherwise, it matches groups from any configured provider.
  6. Matched users receive the role specified on the group entry.
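
Putting the steps together, a group entry like the one below would match a user whose IdP reports Engineers@company.com (normalized to engineers), but only when the groups come from the named provider (values illustrative):

```yaml
spec:
  access:
    groups:
      - name: engineers                    # matches "engineers@company.com" after normalization
        role: operator
        identityProvider: google-workspace # only groups from this IdentityProvider match
```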

Group Priority

If a user is matched by both a direct users entry and one or more groups entries, the highest-privilege role wins. Role precedence: admin > operator > viewer.
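
For example, with the (illustrative) access block below, alice@company.com receives admin, since her direct entry outranks the viewer role granted via the group:

```yaml
spec:
  access:
    users:
      - name: alice@company.com   # placeholder email
        role: admin               # direct entry
    groups:
      - name: engineers           # suppose alice is also in this group
        role: viewer
# effective role for alice: admin (highest privilege wins)
```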

Finalizer

butler.butlerlabs.dev/team

The Team finalizer ensures proper cleanup when a Team is deleted:

  1. All TenantClusters in the team namespace are deleted (and their finalizers run).
  2. RBAC resources (Roles, RoleBindings) are removed.
  3. The team namespace (team-{name}) is deleted.
  4. The finalizer is removed and the Team resource is deleted.
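
While cleanup is in progress, the finalizer is visible on the resource and deletion blocks until the controller removes it:

```yaml
metadata:
  name: platform-team
  finalizers:
    - butler.butlerlabs.dev/team
  deletionTimestamp: "2024-01-01T00:00:00Z"   # illustrative timestamp
```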

Examples

Basic Team

apiVersion: butler.butlerlabs.dev/v1alpha1
kind: Team
metadata:
  name: platform-team
spec:
  displayName: Platform Engineering
  description: Core platform infrastructure team
  access:
    users:
      - name: alice@company.com
        role: admin
      - name: bob@company.com
        role: operator
    groups:
      - name: platform-engineers
        role: operator
      - name: platform-viewers
        role: viewer
  providerConfigRef:
    name: harvester-prod

Team with Full Resource Quotas

apiVersion: butler.butlerlabs.dev/v1alpha1
kind: Team
metadata:
  name: development
spec:
  displayName: Development Team
  description: Shared development environment with resource limits
  access:
    users:
      - name: dev-lead@company.com
        role: admin
    groups:
      - name: developers
        role: operator
        identityProvider: google-workspace
  resourceLimits:
    # Cluster limits
    maxClusters: 5
    maxNodesPerCluster: 10
    maxTotalNodes: 30
    # Compute limits
    maxCPUCores: "120"
    maxMemory: "480Gi"
    maxStorage: "2Ti"
    # Per-cluster defaults
    defaultNodeCount: 3
    defaultCPUPerNode: "4"
    defaultMemoryPerNode: "16Gi"
  providerConfigRef:
    name: harvester-dev
  clusterDefaults:
    kubernetesVersion: "1.30.4"
    workerCount: 3
    workerCPU: "4"
    workerMemoryGi: 16
    workerDiskGi: 100
    defaultAddons:
      - cilium
      - metallb
      - cert-manager

Team with Feature Restrictions

apiVersion: butler.butlerlabs.dev/v1alpha1
kind: Team
metadata:
  name: sandbox
spec:
  displayName: Sandbox Team
  description: Restricted sandbox for experimentation
  access:
    users:
      - name: sandbox-lead@company.com
        role: operator
    groups:
      - name: interns
        role: viewer
  resourceLimits:
    maxClusters: 2
    maxNodesPerCluster: 3
    maxTotalNodes: 6
    maxCPUCores: "24"
    maxMemory: "96Gi"
    maxStorage: "500Gi"
    # Feature restrictions
    allowedKubernetesVersions:
      - "1.30.4"
    allowedProviders:
      - harvester
    allowedAddons:
      - cilium
      - metallb
    deniedAddons:
      - longhorn
      - gpu-operator
  providerConfigRef:
    name: harvester-sandbox
