Collabora Online for Kubernetes
Deployment Methods
This guide covers two deployment approaches:
Standard COOL/CODE Deployment: COOL/CODE deployment using the standard Helm chart.
COOL with COOL Controller Deployment: COOL deployment with the COOL Controller using the umbrella Helm chart.
Prerequisites
Install helm
Setting up Kubernetes Ingress Controller
Install Nginx Ingress Controller, or
Install HAproxy Ingress Controller by HAproxy Technologies, or
Install HAProxy Ingress (haproxy-ingress.github.io)
Note
You can use any of these ingress controllers; the following steps use nginx as an example.
OpenShift ships a minimized version of HAProxy called Router that does not support all HAProxy functionality, but COOL needs advanced annotations. It is therefore recommended to deploy the HAProxy Kubernetes Ingress in the
collabora
namespace.
Deploying COOL/CODE with standard Helm chart
In order for collaborative editing to operate, it is vital to ensure that all users editing the same document, and all clipboard requests, end up being served by the same pod. With the WOPI protocol, the HTTPS URL includes a unique identifier (WOPISrc) for the document. Load balancing can therefore be done on WOPISrc, ensuring that all URLs containing the same WOPISrc are sent to the same pod.
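As a concrete illustration (with a made-up document URL), the value the ingress hashes on is simply the WOPISrc query parameter:

```shell
# Hypothetical WOPI URL as sent by a client; host and document id are examples.
url='https://test.collabora.online/cool/doc/ws?WOPISrc=https%3A%2F%2Fintegrator%2Fwopi%2Ffiles%2F42'

# Extract the WOPISrc query parameter -- the same value that nginx's
# $arg_WOPISrc (or HAProxy's "url_param WOPISrc") keys the hashing on.
wopisrc=$(printf '%s' "$url" | sed -n 's/.*[?&]WOPISrc=\([^&]*\).*/\1/p')
echo "$wopisrc"
```

All requests carrying the same WOPISrc value hash to the same backend pod, which is exactly what the upstream-hash-by annotation configures.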
Create a
cool_values.yaml
for the Helm chart (if your setup differs, take a look at collabora-online/values.yaml):

```yaml
replicaCount: 3

ingress:
  enabled: true
  className: "nginx"
  annotations:
    nginx.ingress.kubernetes.io/enable-access-log: "true"
    nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_WOPISrc"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
  hosts:
    - host: test.collabora.online
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: tls-secret-name
      hosts:
        - test.collabora.online

# Ingress settings for HAproxy Ingress Controller by HAproxy Technologies
# ingress:
#   enabled: true
#   className: "haproxy"
#   annotations:
#     haproxy.org/timeout-tunnel: "3600s"
#     haproxy.org/backend-config-snippet: |
#       balance url_param WOPISrc check_post
#   hosts:
#     - host: test.collabora.online
#       paths:
#         - path: /
#           pathType: Prefix
#   tls:
#     - secretName: tls-secret-name
#       hosts:
#         - test.collabora.online

# Ingress settings for HAProxy Ingress (haproxy-ingress.github.io)
# ingress:
#   enabled: true
#   className: "haproxy"
#   annotations:
#     haproxy-ingress.github.io/timeout-tunnel: 3600s
#     haproxy-ingress.github.io/balance-algorithm: url_param WOPISrc check_post
#   hosts:
#     - host: test.collabora.online
#       paths:
#         - path: /
#           pathType: Prefix
#   tls:
#     - secretName: tls-secret-name
#       hosts:
#         - test.collabora.online

autoscaling:
  enabled: false

collabora:
  aliasgroups:
    - host: "https://example.integrator.com:443"
  extra_params: >
    --o:ssl.enable=false
    --o:ssl.termination=true
    --o:num_prespawn_children=4

resources:
  limits:
    cpu: "4000m"
    memory: "8000Mi"
  requests:
    cpu: "4000m"
    memory: "6000Mi"
```
Note
Horizontal Pod Autoscaling (HPA) is disabled for now, because after scaling it breaks collaborative editing and copy/paste. Therefore please set
replicaCount
as per your needs.

If you have multiple hosts and aliases, set aliasgroups in
cool_values.yaml
:

```yaml
collabora:
  aliasgroups:
    - host: "<protocol>://<host-name>:<port>"
      # if there are no aliases you can ignore the line below
      aliases: ["<protocol>://<its-first-alias>:<port>", "<protocol>://<its-second-alias>:<port>"]
    # more hosts and alias lists are possible
```
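For example, a setup with two alias groups could look like this (host names here are hypothetical placeholders):

```yaml
collabora:
  aliasgroups:
    # first group: one host with one alias
    - host: "https://office.example.com:443"
      aliases: ["https://office-alias.example.com:443"]
    # second group: a host without aliases
    - host: "https://docs.example.org:443"
```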
Specify
server_name
when the hostname is not reachable directly, for example behind a reverse proxy:

```yaml
collabora:
  server_name: <hostname>:<port>
```
In OpenShift, it is recommended to use an Nginx deployment instead of the default Router. Add
className
in the ingress block so that OpenShift uses the Nginx Ingress Controller instead of Router:

```yaml
ingress:
  className: "nginx"
```
Install the Helm chart using the commands below; it should deploy Collabora Online:

```shell
helm repo add collabora https://collaboraonline.github.io/online/
helm install --create-namespace --namespace collabora collabora-online collabora/collabora-online -f cool_values.yaml
```
To check if everything is setup correctly you can run:
```shell
curl 'https://test.collabora.online/'
```

It should return output similar to:

```
HTTP/1.1 200 OK
last-modified: Tue, 18 May 2021 10:46:29
user-agent: COOLWSD WOPI Agent 6.4.8
content-length: 2
content-type: text/plain
```
COOL Controller
The COOL Controller is a solution designed to address specific challenges encountered when deploying Collabora Online (COOL) in a Kubernetes cluster. This controller tackles two main problems: ensuring requests for the same document are routed to the same pod, and managing the performance impact during scale up/down events. Additionally, it provides a cluster overview page and allows administrators to access individual pod admin consoles.
The first problem arises from the need to maintain session consistency in COOL deployments. To achieve this, the controller creates a mapping of Server IDs and RouteTokens. The RouteToken acts as a unique identifier for each pod, ensuring that requests containing the corresponding token are always directed to the correct pod. By maintaining this mapping, the controller guarantees that collaborative editing and copy-paste functionalities work correctly, even in scaled deployments.
During scale up/down events in a COOL deployment within a Kubernetes cluster, performance degradation occurs due to an uneven distribution of requests: existing sessions remain connected to older pods while new requests are directed to the newly scaled pods, causing potential bottlenecks and a suboptimal user experience. The COOL Kubernetes Controller addresses this problem with a document migrator. The controller continuously monitors the memory utilization of COOL pods against a target utilization percentage defined by administrators. If a pod’s memory utilization surpasses the target threshold, it is marked as overloaded, and the controller migrates documents from overloaded pods to less loaded ones. By actively migrating documents, the controller ensures an even distribution of requests and resources, mitigating performance degradation during scale up/down events and providing an enhanced user experience and smoother collaborative editing.
In addition to these core functionalities, the COOL Controller provides a cluster overview page that offers a centralized view of the deployment. Administrators can access individual pod admin consoles directly from this page, simplifying the management and monitoring of COOL deployments.
Overall, the COOL Controller enhances the deployment and management of Collabora Online in Kubernetes clusters. By addressing session consistency and performance optimization challenges, it enables a smooth and efficient collaborative editing experience for users.
Access credentials
The COOL Controller can be offered to Collabora partners and customers. Please contact your Collabora Account Manager to learn more, and get credentials.
Get latest version information
Get the latest tag names of the Helm charts, the Collabora Online Docker image, and the COOL Controller Docker image.

Create an env variable
CONTROLLER_PRIVATE_TOKEN
with your private access token:

```shell
export CONTROLLER_PRIVATE_TOKEN=<access_token>
```

Run the following script:

```shell
repositories=(1578 1579 1719 1583 1581 1582)
for repo_id in "${repositories[@]}"; do
  curl -s --header "PRIVATE-TOKEN: $CONTROLLER_PRIVATE_TOKEN" \
    --url "https://gitlab.collabora.com/api/v4/projects/5116/registry/repositories/$repo_id/tags?per_page=100" | \
    jq -r '.[] | select(.name | test("^(latest.*|snapshot.*|sha256.*)$") | not) | .location' | \
    sort -V | \
    tail -n 1
done
```
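The filtering the script performs can be sketched offline with plain shell tools, using a made-up tag list in place of the GitLab API response:

```shell
# Made-up tag names; the real script reads them from the registry API.
# Drop latest/snapshot/sha256 tags, then pick the highest version.
printf '%s\n' latest snapshot-2024 1.1.4 1.1.5 sha256-deadbeef |
  grep -Ev '^(latest|snapshot|sha256)' |
  sort -V |
  tail -n 1
```

`sort -V` performs a version-aware sort, so `tail -n 1` yields the newest release tag rather than the lexicographically last one.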
Deploying Collabora Online with COOL Controller
Prerequisites
You should have a username and an access token for Collabora’s GitLab private container registry in order to download the COOL Controller Docker images and the Collabora Online umbrella Helm chart.
Log in to the GitLab private registry for the Collabora Online umbrella Helm chart:

```shell
export CONTROLLER_PRIVATE_TOKEN=<access_token>
helm registry login -u <username> -p $CONTROLLER_PRIVATE_TOKEN registry.gitlab.collabora.com/productivity/cool-controller-registry
```

Use your access_token as the password.
Steps to create a Kubernetes secret for pulling from the Collabora private registry
Create namespace for Collabora Online and COOL Controller:
```shell
kubectl create namespace collabora
```
Setup Kubernetes Secret for COOL Controller in the Collabora Namespace:
Docker login:

```shell
docker login registry.gitlab.collabora.com/productivity/cool-controller-registry -u <username> -p <controller_access_token>
```

Create the Kubernetes secret:

```shell
kubectl create secret generic controller-regcred --namespace collabora --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson
```
Replace
<path/to/.docker/config.json>
with the path where
config.json
is located. Usually, the path is
/home/your_username/.docker/config.json
For more info, see the Kubernetes tutorial on pulling an image from a private registry.
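For reference, the config.json written by docker login has roughly this shape — the registry key matches the host you logged in to, and the auth value is the base64 encoding of `username:access_token` (contents here are placeholders):

```json
{
  "auths": {
    "registry.gitlab.collabora.com/productivity/cool-controller-registry": {
      "auth": "<base64 of username:access_token>"
    }
  }
}
```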
Deploy Collabora Online and COOL Controller
Note
Get the latest tag versions of COOL and the COOL Controller using the script above.
Create
umbrella_values.yaml
:

```yaml
collabora-online:
  replicaCount: 2
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_RouteToken"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
    hosts:
      - host: test.collabora.online
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: tls-secret-name
        hosts:
          - test.collabora.online

  # Ingress settings for HAproxy Ingress Controller by HAproxy Technologies.
  # For this ingress we have filed a bug report related to inconsistent balancing:
  # https://github.com/haproxytech/kubernetes-ingress/issues/604
  # ingress:
  #   enabled: true
  #   className: "haproxy"
  #   annotations:
  #     haproxy.org/timeout-tunnel: "3600s"
  #     haproxy.org/backend-config-snippet: |
  #       balance url_param RouteToken check_post
  #   hosts:
  #     - host: test.collabora.online
  #       paths:
  #         - path: /
  #           pathType: Prefix
  #   tls:
  #     - secretName: tls-secret-name
  #       hosts:
  #         - test.collabora.online

  # Ingress settings for HAProxy Ingress (haproxy-ingress.github.io)
  # ingress:
  #   enabled: true
  #   className: "haproxy"
  #   annotations:
  #     haproxy-ingress.github.io/timeout-tunnel: 3600s
  #     haproxy-ingress.github.io/balance-algorithm: url_param RouteToken check_post
  #   hosts:
  #     - host: test.collabora.online
  #       paths:
  #         - path: /
  #           pathType: Prefix
  #   tls:
  #     - secretName: tls-secret-name
  #       hosts:
  #         - test.collabora.online

  # To fetch the COOL Docker image
  imagePullSecrets:
    - name: controller-regcred

  collabora:
    aliasgroups:
      - host: "https://test.integrator.com:443"
    # indirection_endpoint is used on the client side to ask the controller for a RouteToken.
    # monitor maintains a socket connection with the controller to exchange information
    # about load, documents, etc.
    extra_params: >
      --o:ssl.enable=false
      --o:indirection_endpoint.url=https://test.collabora.online/controller/routeToken
      --o:monitors.monitor[0]=ws://test.collabora.online/controller/ws
      --o:monitors.monitor[0][@retryInterval]=5
    username: <your-admin-console-username>
    password: <your-admin-console-password>
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name

  autoscaling:
    enabled: true
    targetMemoryUtilizationPercentage: 80
    targetCPUUtilizationPercentage: 60

  resources:
    limits:
      cpu: "4000m"
      memory: "8000Mi"
    requests:
      cpu: "4000m"
      memory: "6000Mi"

cool-controller:
  replicaCount: 2
  ingress:
    enabled: true
    # For the HAProxy ingresses you need "haproxy" as className
    # className: "haproxy"
    className: "nginx"
    hosts:
      - host: test.collabora.online
        paths:
          - path: "/controller"
            pathType: Prefix
  imagePullSecrets:
    - name: controller-regcred  # name of the pull secret created in step 2
  controller:
    watchNamespace: "collabora"       # namespace where Collabora Online is installed
    resourceName: "collabora-online"  # resource to watch in that namespace
    # URL from which the controller can access the COOL pods, i.e. the domain
    # on which the k8s cluster is exposed via the load balancer
    ingressUrl: "https://test.collabora.online"
    # Whether to build the hashmap in parallel, which is much faster.
    # Note: don't enable hashmap parallelization if you are using the HAProxy Ingress Controller.
    enableHashmapParallelization: true
    namespacedRole: true  # restrict all Roles and RoleBindings to the collabora namespace
    statsInterval: 2000   # interval in milliseconds at which the controller fetches new memory and CPU stats over the socket connection
    documentMigrator:
      enabled: true
      # Percentage utilization above which the document migrator considers migrating
      # documents; use the same value as targetMemoryUtilizationPercentage above.
      coolMemoryUtilization: 80
      coolMemoryLimit: "8000Mi"  # memory limit of each COOL pod; should match the resource limit above
    leaderElection:
      enabled: true  # enables leader election for high availability
```
Install using helm
Note
Get latest tag version of COOL umbrella helm chart using script
```shell
helm install --create-namespace --namespace collabora collabora oci://registry.gitlab.collabora.com/productivity/cool-controller-registry/helm-charts/collabora-online-umbrella --version <version> -f /path/to/umbrella_values.yaml

# With Nextcloud theming:
# helm install --create-namespace --namespace collabora collabora oci://registry.gitlab.collabora.com/productivity/cool-controller-registry/helm-charts/collabora-online-nc-umbrella --version <version> -f /path/to/umbrella_values.yaml
```
That’s it! If everything is set up correctly, Collabora Online should return an OK response. The admin cluster overview can be accessed at
https://test.collabora.online/browser/dist/admin/adminClusterOverview.html
Enable direct communication between COOL and COOL Controller within a Kubernetes cluster
This guide configures internal cluster communication between Collabora Online (COOL) and COOL Controller using Kubernetes DNS-based service discovery. By routing traffic through internal cluster DNS rather than external domains, you achieve:
Improved Performance: Eliminates unnecessary network hops through external load balancers
Enhanced Security: Keeps cluster-internal traffic isolated from the public internet
The key principle is using Kubernetes service DNS names
(<service>.<namespace>.svc.cluster.local
) for internal communication
while maintaining external domains for client-facing endpoints.
Prerequisites
This feature requires cool-controller:1.1.5 or later and Helm chart version 1.1.8 or higher.
Locate Your Ingress Controller Service
First, identify your Ingress controller’s internal service name. Using NGINX Ingress as an example:
```shell
kubectl get svc -n ingress-nginx
```

Example output:

```
NAME                                 TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort   10.97.66.73      <none>        80:30080/TCP,443:30443/TCP   6d1h
ingress-nginx-controller-admission   ClusterIP  10.111.125.121   <none>        443/TCP                      6d1h
```
The internal DNS name follows this format:
<service-name>.<namespace>.svc.cluster.local
For this example:
ingress-nginx-controller.ingress-nginx.svc.cluster.local
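The composition of that name can be sketched as follows (service and namespace values taken from the example output above):

```shell
# Compose the cluster-internal DNS name of a Service:
# <service-name>.<namespace>.svc.cluster.local
service="ingress-nginx-controller"
namespace="ingress-nginx"
echo "${service}.${namespace}.svc.cluster.local"
```

Substitute your own service name and namespace if they differ.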
Note: Your service name and namespace may differ depending on your Ingress controller type. Adjust accordingly in subsequent steps.
Locate COOL Controller Service
Identify the COOL Controller service in your deployment namespace:
```shell
kubectl get svc -n collabora
```

Example output:

```
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
collabora-cool-controller   ClusterIP   10.106.76.189   <none>        9000/TCP   4h3m
```
The internal DNS name:
collabora-cool-controller.collabora.svc.cluster.local
Note

Replace
collabora
with your actual namespace if different.

Configure Collabora Online
Update your Collabora Online configuration to use the internal COOL Controller DNS name for monitoring. The
monitors.monitor
parameter establishes WebSocket communication between COOL instances and the controller:

```yaml
collabora:
  aliasgroups:
    - host: "https://integrator.local"
  extra_params: >
    --o:ssl.enable=false
    --o:ssl.termination=true
    --o:ssl.verification=false
    --o:indirection_endpoint.url=https://test.collabora.online/controller/routeToken
    --o:monitors.monitor[0]=ws://collabora-cool-controller.collabora.svc.cluster.local:9000/controller/ws
    --o:monitors.monitor[0][@retryInterval]=5
    --o:logging.level=debug
  username: admin
  password: admin123
  env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
```
Key configuration points:

indirection_endpoint.url
: must remain the external domain, since clients use this endpoint
monitors.monitor[0]
: uses internal DNS for cluster-only communication
retryInterval
: controls reconnection attempts if the controller becomes unavailable
Configure COOL Controller
Update the controller to use the internal Ingress service while preserving the external hostname for proper routing:
```yaml
controller:
  watchNamespace: "collabora"
  resourceName: "collabora-online"
  ingressUrl: "https://ingress-nginx-controller.ingress-nginx.svc.cluster.local:443"
  ingressHostname: "test.collabora.online"
  enableHashmapParallelization: true
  skipTLSVerification: true
  documentMigrator:
    enabled: true
    coolMemoryUtilization: "50"
    coolMemoryLimit: "2000Mi"
  leaderElection:
    enabled: true
```
Configuration explanation:

ingressUrl
: internal cluster DNS address, bypassing external load balancers
ingressHostname
: the original external domain, added to the
Host
header for proper Ingress routing
skipTLSVerification
: often required for internal communication with self-signed certificates
Additional configuration
Kubernetes cluster monitoring
Install kube-prometheus-stack, a collection of Grafana dashboards and Prometheus rules combined with documentation and scripts, providing easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
Enable the Prometheus service monitor, rules, and Grafana dashboards in your
my_values.yaml
:

```yaml
prometheus:
  servicemonitor:
    enabled: true
    labels:
      release: "kube-prometheus-stack"
  rules:
    enabled: true  # will deploy alert rules
    additionalLabels:
      release: "kube-prometheus-stack"

grafana:
  dashboards:
    enabled: true  # will deploy default dashboards
```
Note
Use
kube-prometheus-stack
as the release name when installing the kube-prometheus-stack Helm chart, because we have passed the
release=kube-prometheus-stack
label in our
my_values.yaml
. For the Grafana dashboards you may need to enable scanning of the correct namespaces (or ALL), controlled by
sidecar.dashboards.searchNamespace
in the Helm chart of Grafana (which is part of the Prometheus Operator, so
grafana.sidecar.dashboards.searchNamespace
)
Dynamic/Remote configuration in kubernetes
For big setups, you may not want to restart every pod in order to modify WOPI hosts. It is therefore possible to set up an additional webserver serving a ConfigMap for Remote/Dynamic Configuration:
```yaml
collabora:
  env:
    - name: remoteconfigurl
      value: https://dynconfig.public.example.com/config/config.json

dynamicConfig:
  enabled: true
  ingress:
    enabled: true
    annotations:
      "cert-manager.io/issuer": letsencrypt-zprod
    hosts:
      - host: "dynconfig.public.example.com"
    tls:
      - secretName: "collabora-online-dynconfig-tls"
        hosts:
          - "dynconfig.public.example.com"
  configuration:
    kind: "configuration"
    storage:
      wopi:
        alias_groups:
          groups:
            - host: "https://domain1\\.xyz\\.abc\\.com/"
              allow: true
            - host: "https://domain2\\.pqr\\.def\\.com/"
              allow: true
              aliases:
                - "https://domain2\\.ghi\\.leno\\.de/"
```

Note
In the current state of COOL, the
remoteconfigurl
for Remote/Dynamic Configuration must be served over HTTPS.
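For orientation, the JSON document that the dynamic-config webserver ends up serving at that HTTPS URL mirrors the configuration block in the values file; a sketch using the same example host patterns (check the Remote configuration chapter for the authoritative schema):

```json
{
  "kind": "configuration",
  "storage": {
    "wopi": {
      "alias_groups": {
        "groups": [
          { "host": "https://domain1\\.xyz\\.abc\\.com/", "allow": true },
          {
            "host": "https://domain2\\.pqr\\.def\\.com/",
            "allow": true,
            "aliases": ["https://domain2\\.ghi\\.leno\\.de/"]
          }
        ]
      }
    }
  }
}
```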
Installing Custom Fonts
There are two primary methods for adding custom fonts to your Collabora Online deployment without building a custom Docker image.
Remote Font Configuration: This method involves pointing Collabora Online to a remote server that hosts your font files. See the details in Remote configuration chapter.
PersistentVolumeClaim (PVC): This method uses a Kubernetes PersistentVolume to store the fonts, which are then mounted directly into the Collabora Online pods. More details in PersistentVolume(PVC)
This section focuses on the second method: using a PersistentVolumeClaim.
Method: Using a PersistentVolumeClaim (PVC)
This approach is ideal for managing fonts directly within your Kubernetes cluster. The process involves enabling a feature flag in your deployment configuration, which orchestrates the creation of a PVC and a temporary “font-loader” pod. You will copy your font files to this temporary pod, which saves them to the persistent volume. Finally, a restart of the main Collabora Online deployment will make the new fonts available.
Prerequisites
Before you begin, ensure you have the following:
kubectl
command-line tool configured to access your cluster.
Your custom font files (e.g.,
.ttf
,
.otf
) available on your local machine.
Step-by-Step Guide
Step 1: Enable Custom Fonts in Your Deployment
You need to enable the custom fonts feature in your Collabora Online configuration. If you are using a Helm chart, this is typically done in your values.yaml
file.
Set deployment.customFonts.enabled
to true
within your deployment block.
Example my_values.yaml
snippet:
```yaml
deployment:
  customFonts:
    enabled: true
    # pvc:
    #   size: 1Gi # Optional: Adjust storage size if needed
    #   storageClassName: "" # Optional: Specify a storage class
```
Apply the configuration change to your cluster. For Helm, you would run:
```shell
helm upgrade --install <release-name> <chart-name> -f my_values.yaml -n <namespace>
```
Applying this change will trigger the creation of a new PVC and a temporary pod named <deployment-name>-custom-fonts
. This pod’s purpose is to provide a temporary mount point for you to upload your fonts to the PVC.
Step 2: Copy Font Files to the PVC
Once the temporary pod is in the Running
state, use the kubectl cp
command to copy your local font files into it. The pod mounts the PVC at the /mnt/fonts
directory.
```shell
kubectl cp <path-to-local-fonts-directory> <namespace>/<deployment-name>-custom-fonts:/mnt/fonts
```
Replace the placeholders:

<path-to-local-fonts-directory>
: The path to the folder on your local machine containing your font files (e.g. path/to/custom_fonts/directory).
<namespace>
: The Kubernetes namespace where Collabora Online is deployed (e.g. collabora).
<deployment-name>
: The name of your Collabora Online deployment (e.g. collabora-online).
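With the example values above filled in, the copy step would look like this (sketched here by printing the fully substituted command, since actually running it requires cluster access; the local directory name is an assumption):

```shell
namespace="collabora"
deployment="collabora-online"

# Print the fully substituted kubectl cp command; run it against your cluster.
echo "kubectl cp ./custom_fonts ${namespace}/${deployment}-custom-fonts:/mnt/fonts"
```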
Step 3: Restart the Collabora Online Deployment
To make the main Collabora Online pods recognize the new fonts, you must perform a rolling restart of the deployment. This forces the pods to re-mount the PVC and rebuild their font cache.
```shell
kubectl -n <namespace> rollout restart deployment/<deployment-name>
```
Step 4: Cleanup and Verification
The temporary <deployment-name>-custom-fonts
pod is designed for single use and will automatically terminate itself one hour after creation. You can also delete it manually if you wish.
To verify that the fonts were installed correctly:
Open a document in your Collabora Online instance.
Click the font selection dropdown menu in the toolbar.
Your newly added fonts should now appear in the list.
Note
The font cache is built when the Collabora Online pods start. If you add more fonts later, you will need to repeat step 3.