Architecture
The Konflux-Workspaces suite is composed of the following modules: the REST API Server and the Operator, both detailed below.
It also requires KubeSaw to be installed in the cluster.
REST API
The REST API Server exposes Kubernetes API-compatible endpoints for Workspaces.
Under the hood, the REST API Server works with InternalWorkspaces. Workspaces are never persisted on storage: any allowed change performed on a Workspace is reflected by the REST API Server on the InternalWorkspaces.
Hence, one of the REST API Server's main aims is to provide its users with a Kubernetes-like experience on Workspace virtual custom resources.
Another responsibility of the REST API Server is to authenticate the users performing requests and to provide them only the data they're allowed to see. The authorization logic is simple at the moment: users can only access the workspaces they own and the ones that have been shared with them.
Custom Resource Definitions (CRDs)
The Workspace Custom Resource Definition is simple and minimal. It just contains the information required by the Konflux UI.
Workspaces are never persisted on storage but are always computed from InternalWorkspaces. Any allowed change performed on a Workspace is reflected by the REST API Server on the InternalWorkspaces.
```yaml
apiVersion: workspaces.konflux-ci.dev/v1alpha1
kind: Workspace
metadata:
  namespace: owner-name
  name: my-workspace
spec:
  visibility: community | private
status:
  owner:
    email: string
  space:
    name: string
    targetCluster: string
  conditions:
  - type: string
    status: True | False | Unknown
    reason: string
    message: string
    lastTransitionTime: time
```
Auth
The REST API Server authenticates and authorizes requests before processing them.
Authentication
Authentication is performed by a Traefik sidecar configured to validate the request's JWT. The sidecar also extracts meaningful fields and injects them as headers before proxying the request to the REST API Server.
Hence, configuring authentication boils down to configuring the Traefik sidecar with the correct key for validating the JWTs.
Authorization
For authorizing requests, the REST API Server fetches information from KubeSaw's resources; namely, UserSignups and SpaceBindings are checked.
To fetch the correct resources, the REST API Server matches the JWT's `sub` claim against the UserSignup's `spec.sub` field.
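As an illustrative sketch only (the namespace and the way the `sub` value is obtained are assumptions), the matching UserSignup could be located with:

```sh
# SUB is the sub claim extracted from the request's JWT.
SUB="<sub-claim-from-jwt>"
# The namespace is an assumption; adjust to wherever KubeSaw stores UserSignups.
kubectl get usersignups -n toolchain-host-operator \
  -o jsonpath="{.items[?(@.spec.sub=='$SUB')].metadata.name}"
```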
Endpoints
Workspaces
This section details the endpoints for Workspaces exposed by the REST API Server.
`/apis/workspaces.konflux-ci.dev/v1alpha1/`
Requests to this endpoint will always be authorized; the result varies with the access the requesting user has.
GET
This endpoint returns the list of all the workspaces the user has access to. The returned workspaces may be owned by different users.
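For example, listing the workspaces the user can access might look like the following (the host name is a placeholder):

```sh
# TOKEN holds the user's JWT, which the Traefik sidecar validates.
curl -H "Authorization: Bearer $TOKEN" \
  "https://workspaces.example.com/apis/workspaces.konflux-ci.dev/v1alpha1/"
```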
`/apis/workspaces.konflux-ci.dev/v1alpha1/namespaces/{owner}/workspaces/{workspace}`
Requests to this endpoint will be authorized only if the user has access to the workspace `{workspace}` owned by the user `{owner}`.
GET
Returns the details of the workspace `{workspace}` owned by the user `{owner}`.
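For example, fetching the `my-workspace` workspace from the CRD example above (host name again a placeholder):

```sh
curl -H "Authorization: Bearer $TOKEN" \
  "https://workspaces.example.com/apis/workspaces.konflux-ci.dev/v1alpha1/namespaces/owner-name/workspaces/my-workspace"
```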
PUT
Only the owner is allowed to perform this operation.
Allows the user to update the `spec` of the workspace `{workspace}` owned by the user `{owner}`.
PATCH
Only the owner is allowed to perform this operation.
Only Merge and StrategicMerge strategies are supported.
Allows the user to update the `spec` of the workspace `{workspace}` owned by the user `{owner}`.
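As a sketch, switching a workspace's visibility to `community` with a Merge patch could look like this (placeholder host, names taken from the CRD example above):

```sh
curl -X PATCH \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/merge-patch+json" \
  -d '{"spec":{"visibility":"community"}}' \
  "https://workspaces.example.com/apis/workspaces.konflux-ci.dev/v1alpha1/namespaces/owner-name/workspaces/my-workspace"
```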
Audit
Incoming requests and response status codes are logged by the Traefik ingress.
More details are logged by the REST API Server.
Operator
The operator reconciles InternalWorkspaces and ensures that the KubeSaw resources required to provide the end user with a Konflux workspace are in a coherent state.
Custom Resource Definitions (CRDs)
An InternalWorkspace contains the information required to build Workspaces and to manage the related KubeSaw resources.
```yaml
apiVersion: workspaces.konflux-ci.dev/v1alpha1
kind: InternalWorkspace
metadata:
  namespace: workspaces-system
  name: my-workspace-7ghf2
spec:
  displayName: my-workspace
  visibility: community | private
  owner:
    jwtInfo:
      email: string
      sub: string
      userId: string
status:
  space:
    # whether it is the user's home KubeSaw Space or not
    isHome: true | false
    # the name of the related KubeSaw Space
    name: my-workspace-7ghf2
    # the URL of the cluster hosting the related KubeSaw Space
    targetCluster: string
  owner:
    # the name of the owner's KubeSaw UserSignup
    username: string
  conditions:
  - type: string
    status: True | False | Unknown
    reason: string
    message: string
    lastTransitionTime: time
```
Workflows
This section details the main workflows implemented by this operator.
Home Workspace
When a KubeSaw UserSignup is approved, a Space is created by default. The controller ensures an InternalWorkspace exists for the user's default Space.
This workflow is implemented by the UserSignup Reconciler.
Public Viewer
InternalWorkspaces have a property representing their visibility, which can be either `private` or `community`.
A `private` InternalWorkspace is visible only to its owner and to the users it is directly shared with.
A `community` InternalWorkspace is visible to every authenticated user.
If an InternalWorkspace's visibility is set to `community`, the operator makes sure that a SpaceBinding exists binding the special user `kubesaw-authenticated` to the Space related to the InternalWorkspace with the `viewer` role.
If the visibility is set to `private`, the SpaceBinding is removed.
This workflow is implemented in the InternalWorkspace Reconciler.
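For illustration only, a SpaceBinding equivalent to the one the operator maintains might look like the following. The API group and field names are assumptions based on KubeSaw's toolchain API, and the name and namespace are placeholders:

```sh
# Illustration: a SpaceBinding granting the viewer role on the Space
# to the special kubesaw-authenticated user.
cat <<'EOF' | kubectl apply -f -
apiVersion: toolchain.dev.openshift.com/v1alpha1
kind: SpaceBinding
metadata:
  name: my-workspace-7ghf2-community-viewer
  namespace: toolchain-host-operator
spec:
  masterUserRecord: kubesaw-authenticated
  space: my-workspace-7ghf2
  spaceRole: viewer
EOF
```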
Development
This section contains useful information for contributing to the project.
Repository structure
The Konflux-Workspaces repository is organized as a monorepo.
- the `operator` folder contains the code for the Workspaces Operator
- the `server` folder contains the code for the REST API Server
- the `e2e` folder contains the code for the e2e tests
- the `book` folder contains the code for this book
- the `ci` folder contains scripts for CI and automation
Operator
The operator is built with the Operator SDK.
- the `api` folder contains the Go code for the Operator's CRDs
- the `config` folder contains the YAML manifests
- the `internal` folder contains the code for the reconcilers
Run Tests
To run unit tests you can execute the `make test` command.
To run e2e tests, take a look at the Run End-to-End tests section.
REST API Server
The REST API Server follows the hexagonal architecture pattern and the Command Query Responsibility Segregation (CQRS) pattern.
- the `api` folder contains the Go code for the CRDs
- the `config` folder contains the YAML manifests
- the `rest` folder contains the REST-over-HTTP layer. This package's main responsibilities are to map HTTP requests to `core`'s Commands and Queries, trigger the `core` logic, and map `core`'s Responses back to HTTP Responses.
- the `core` folder contains the application's main logic. It validates requests, accesses the persistence layer to fetch the correct data, and produces a response.
- the `persistence` folder contains the persistence layer code. More in detail, it contains caches and the Kubernetes client implementation.
Following the flow of an HTTP request: the request is initially processed by the code in the `rest` package, which validates the HTTP request and builds a Command or Query for invoking the handlers in the `core` package.
The `core` package performs validation and authorization, and may apply some transformations, before invoking the logic in the `persistence` package.
In the case of a Command the persistence layer performs an update or a create; in the case of a Query it retrieves data.
Finally, the `core` builds a Response and provides it back to the `rest` package, which maps it to an HTTP Response.
Run Tests
To run unit tests you can execute the make test
command.
To run e2e tests, take a look at the Run End-to-End Test section.
End-To-End tests
e2e tests are implemented following the Behavior-Driven Development (BDD) approach.
Implementation relies on cucumber/godog.
- the `assets` folder contains the assets needed to run the e2e tests, like external CRDs
- the `feature` folder contains the BDD feature files describing the scenarios to test
- the `hook` folder contains the godog hooks that are executed before and after each suite/test/step
- the `step` folder contains the Go code implementing the steps. Steps are organized by domain and then by step type (given, when, then, or step).
Run End-to-End tests
To run the End-to-End tests, you need a Quay repository and admin access to an OpenShift cluster where KubeSaw and Konflux-Workspaces are running.
All in one script
To easily set up the cluster you can refer to the `./hack/demo.sh` script.
This script will install KubeSaw and Konflux-Workspaces, and then execute the e2e tests.
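A minimal invocation, assuming the script picks up the same `QUAY_NAMESPACE` variable used in the step-by-step guide below, could be:

```sh
export QUAY_NAMESPACE=my-quay-namespace
./hack/demo.sh
```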
Step by step
Install dependencies
You need to define some variables to use in the next steps.
```sh
# the tag to use for the KubeSaw images built in step 1
export IMAGE_TAG=e2e-test
# the quay.io namespace to use in the next steps
export QUAY_NAMESPACE=my-quay-namespace
```
Tip
By default the scripts will use `docker`. If you want to use a different tool for building and pushing containers, you can export the `IMAGE_BUILDER` variable. For example, to use podman: `export IMAGE_BUILDER=podman`.
1. Build KubeSaw fork
First, you'll need to build and push the Konflux-Workspaces fork of KubeSaw.
The `ci/toolchain_manager.sh` script provides help to complete this step.

```sh
./ci/toolchain_manager.sh publish "$IMAGE_TAG" -n "$QUAY_NAMESPACE"
```
2. Install KubeSaw
Once the images from our KubeSaw fork are built and published, you need to deploy them to the cluster.
The `ci/toolchain_manager.sh` script provides help to complete this step.

```sh
./ci/toolchain_manager.sh deploy "$IMAGE_TAG" -n "$QUAY_NAMESPACE"
```
3. Install Konflux-Workspaces
To build and install Konflux-Workspaces, you can use the `hack/workspaces_install.sh` script.

```sh
# remember to export QUAY_NAMESPACE=my-quay-namespace
./hack/workspaces_install.sh
```
Run the tests
Now that the dependencies are installed, you can run the End-to-End tests by executing the following command:
```sh
make -C e2e test
```
Making a release
As a part of PR #199, we now have infrastructure in place to make releases automatically.
This automation can be triggered by running a workflow dispatch on the `Build container images` workflow, which takes the version to be tagged and released as its only input.
The workflow produces a draft release with autogenerated release notes; it is recommended to highlight important features in the release notes. Kustomize manifests are uploaded as release artifacts, which can later be consumed by infra-deployments.
Deploying a release to infra-deployments
A release will largely be managed by infra-deployments. To update the version of workspaces running in infra-deployments, you'll need to take the following steps (working in `components/workspaces`):
- Update the image references in `staging/` to point at the new release (see the sketch after this list).
- Commit and file a PR against infra-deployments to roll the release out to staging.
- Test out the deployed version on the staging clusters to ensure it functions as expected.
- Update the image references in `production/` to use the same images as the staging manifests.
- Commit and file a PR against infra-deployments to roll the release out to production.
- Test out the deployed version on the production clusters to ensure it functions as expected.
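As a sketch, assuming the staging manifests are a standard kustomization and using a hypothetical image reference, the first step could be performed with:

```sh
# Hypothetical image reference; use the ones published with the release.
cd components/workspaces/staging
kustomize edit set image \
  quay.io/konflux-workspaces/workspaces-server=quay.io/konflux-workspaces/workspaces-server:v1.2.3
```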
There are a few tools that can help determine whether a deployment of workspaces is successful. The smoke tests (`hack/smoke.sh`) can help determine if workspaces is functioning correctly, and the `konflux_workspaces_available` metric keeps track of the deployed operator's view of itself and its dependencies.
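For example, assuming your kubeconfig points at the target cluster, the smoke tests can be run with:

```sh
./hack/smoke.sh
```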
Deploying a release with changes to the deployment manifests
If a release adds or removes resources when compared to a previous release, care must be taken to ensure that production does not break. In this case, staging and production will need to refer to different kustomize manifests, which will prevent production from pulling in resources that have not yet been tested and validated on staging. Instead of the procedure above, follow these steps:
- Extract the release's manifests into `staging/base/`. Remove the server's route (`server/config/server/route.yaml`) from these manifests and from the kustomize manifest.
- Update the staging manifests to point at this directory (`../base` instead of `../../base`) and to use the updated release.
- Commit and file a PR against infra-deployments to roll these changes out to staging.
- Test out the deployed version on the staging clusters to ensure it functions as expected.
- Once staging has stabilized, replace `base/` with `staging/base/` by deleting the former directory and moving the latter over the former.
- Update staging's manifests to point at `../../base` instead of `../base`.
- Update production's manifests to reference the new image version.
- Commit and file a PR against infra-deployments to roll these changes out to production.
- Test out the deployed version on the production clusters to ensure it functions as expected.