Collabora Online for Kubernetes

In order for collaborative editing and copy/paste to function correctly on Kubernetes, it is vital to ensure that all users editing the same document, and all clipboard requests for it, end up being served by the same pod. With the WOPI protocol, the HTTPS URL includes a unique identifier (WOPISrc) for the document in use. Load balancing can therefore be done on WOPISrc, ensuring that all URLs containing the same WOPISrc are sent to the same pod.
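
For illustration, a browser session URL carries the WOPISrc as a query parameter, so the load balancer only needs to hash on it; the hostname and file path below are hypothetical:

https://collabora.example.com/browser/<hash>/cool.html?WOPISrc=https%3A%2F%2Fwopi-host.example.com%2Fwopi%2Ffiles%2F42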

Deploying Collabora Online in Kubernetes

  1. Install Helm

  2. Set up a Kubernetes Ingress Controller

    1. Nginx:

      Install the Nginx Ingress Controller (example Helm commands after the note below)

    2. HAProxy:

      Install the HAProxy Ingress Controller (example Helm commands after the note below)

    Note

    OpenShift uses a minimized version of HAProxy called Router that doesn't support all HAProxy functionality, but COOL needs its advanced annotations. It is therefore recommended to deploy the HAProxy Kubernetes Ingress in the collabora namespace.
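
    As a reference sketch, both controllers can be installed with Helm. The repository URLs and chart names below are the upstream defaults at the time of writing, and the release names and namespace are assumptions; check each controller's own documentation:

      helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
      helm install ingress-nginx ingress-nginx/ingress-nginx

      helm repo add haproxytech https://haproxytech.github.io/helm-charts
      helm install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
         --create-namespace --namespace collabora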

  3. Create a my_values.yaml for the helm chart (if your setup differs, take a look at the chart's default values in ./collabora-online/values.yaml)

    1. HAProxy:

    replicaCount: 3
    
    ingress:
       enabled: true
       className: "haproxy"
       annotations:
          haproxy.org/timeout-tunnel: "3600s"
          haproxy.org/backend-config-snippet: |
             balance url_param WOPISrc check_post
             hash-type consistent
       hosts:
          - host: chart-example.local
            paths:
            - path: /
              pathType: ImplementationSpecific
    
    image:
       tag: "latest"
    
    autoscaling:
       enabled: false
    
    collabora:
       aliasgroups:
          - host: "https://example.integrator.com:443"
       extra_params: --o:ssl.enable=false --o:ssl.termination=true
       # for a production environment we recommend appending `extra_params`
       # with `--o:num_prespawn_children=4`. It defines the number of child
       # processes to keep started in advance, waiting for new clients
    
    resources:
       limits:
          cpu: "1800m"
          memory: "2000Mi"
       requests:
          cpu: "1800m"
          memory: "2000Mi"
    
    # for a production environment we recommend the following resource values
    # resources:
       # limits:
          # cpu: "8000m"
          # memory: "8000Mi"
       # requests:
          # cpu: "4000m"
          # memory: "6000Mi"
    
    2. Nginx:

    replicaCount: 3
    
    ingress:
       enabled: true
       className: "nginx"
       annotations:
          nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_WOPISrc"
          nginx.ingress.kubernetes.io/proxy-body-size: "0"
          nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
          nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
       hosts:
          - host: chart-example.local
            paths:
            - path: /
              pathType: ImplementationSpecific
    
    image:
       tag: "latest"
    
    autoscaling:
       enabled: false
    
    collabora:
       aliasgroups:
          - host: "https://example.integrator.com:443"
       extra_params: --o:ssl.enable=false --o:ssl.termination=true
       # for a production environment we recommend appending `extra_params`
       # with `--o:num_prespawn_children=4`. It defines the number of child
       # processes to keep started in advance, waiting for new clients
    
    resources:
       limits:
          cpu: "1800m"
          memory: "2000Mi"
       requests:
          cpu: "1800m"
          memory: "2000Mi"
    
    # for a production environment we recommend the following resource values
    # resources:
       # limits:
          # cpu: "8000m"
          # memory: "8000Mi"
       # requests:
          # cpu: "4000m"
          # memory: "6000Mi"
    

    Note

    • Horizontal Pod Autoscaling (HPA) is disabled for now, because scaling breaks collaborative editing and copy/paste. Therefore, please set replicaCount as per your needs.

    • If you have multiple hosts and aliases set up, configure aliasgroups in my_values.yaml:

    collabora:
       aliasgroups:
          - host: "<protocol>://<host-name>:<port>"
            # if there are no aliases you can ignore the below line
            aliases: ["<protocol>://<its-first-alias>:<port>", "<protocol>://<its-second-alias>:<port>"]
          # more hosts with alias lists are possible
    
    • Specify server_name when the hostname is not directly reachable, for example behind a reverse proxy:

    collabora:
       server_name: <hostname>:<port>
    
    • For a production environment we recommend the following resource values. We also recommend appending extra_params with --o:num_prespawn_children=4; it defines the number of child processes to keep started in advance, waiting for new clients:

    resources:
       limits:
          cpu: "8000m"
          memory: "8000Mi"
       requests:
          cpu: "4000m"
          memory: "6000Mi"
    
    • In OpenShift, it is recommended to use the HAProxy deployment instead of the default Router. Add className in the ingress block so that OpenShift uses the HAProxy Ingress Controller instead of the Router:

    ingress:
       className: "haproxy"
    
  4. Install the helm chart using the commands below; it should deploy Collabora Online:

    helm repo add collabora https://collaboraonline.github.io/online/
    helm install --create-namespace --namespace collabora collabora-online collabora/collabora-online -f my_values.yaml
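
    After the install completes, a quick status check (release and namespace names from the commands above) confirms that the deployment came up:

    helm status collabora-online --namespace collabora
    kubectl get pods --namespace collabora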
    
  5. Follow this step only if you are using the NodePort service type with HAProxy and/or using minikube for your setup; otherwise skip it

    1. Each container port is mapped to a NodePort port via the Service object. To find those ports:

      kubectl get svc --namespace=haproxy-controller
      

      Example output:

      |----------------|---------|--------------|------------|------------------------------------------|
      |NAME            |TYPE     |CLUSTER-IP    |EXTERNAL-IP |PORT(S)                                   |
      |----------------|---------|--------------|------------|------------------------------------------|
      |haproxy-ingress |NodePort |10.108.214.98 |<none>      |80:30536/TCP,443:31821/TCP,1024:30480/TCP |
      |----------------|---------|--------------|------------|------------------------------------------|
      

      In this instance, the following ports were mapped:

      • Container port 80 to NodePort 30536

      • Container port 443 to NodePort 31821

      • Container port 1024 to NodePort 30480

  6. Additional step if deploying on minikube for testing:

    1. Get minikube ip:

      minikube ip
      

      Example output:

      192.168.0.106
      
    2. Add hostname to /etc/hosts:

      192.168.0.106   chart-example.local
      
    3. To check if everything is set up correctly, you can run:

      curl -I -H 'Host: chart-example.local' 'http://192.168.0.106:30536/'
      

      It should return output similar to the following:

      HTTP/1.1 200 OK
      last-modified: Tue, 18 May 2021 10:46:29
      user-agent: COOLWSD WOPI Agent 6.4.8
      content-length: 2
      content-type: text/plain
      

Kubernetes cluster monitoring

  1. Install kube-prometheus-stack, a collection of Grafana dashboards and Prometheus rules, combined with documentation and scripts to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.

  2. Enable the Prometheus ServiceMonitor, rules, and Grafana dashboards in your my_values.yaml:

    prometheus:
       servicemonitor:
          enabled: true
          labels:
             release: "kube-prometheus-stack"
       rules:
          enabled: true # will deploy alert rules
          additionalLabels:
             release: "kube-prometheus-stack"
    grafana:
       dashboards:
          enabled: true # will deploy default dashboards
    

    Note

    Use kube-prometheus-stack as the release name when installing the kube-prometheus-stack helm chart, because we have passed the release=kube-prometheus-stack label in our my_values.yaml. For the Grafana dashboards you may need to enable scanning of the correct namespaces (or ALL); this is controlled by sidecar.dashboards.searchNamespace in the Grafana helm chart (which is a subchart of kube-prometheus-stack, hence grafana.sidecar.dashboards.searchNamespace).
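
    A minimal install sketch that satisfies that release-name requirement (the repository URL is the upstream prometheus-community default; the monitoring namespace is an assumption):

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
       --create-namespace --namespace monitoring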

Dynamic/Remote configuration in Kubernetes

For big setups, you may not want to restart every pod in order to modify WOPI hosts. It is therefore possible to set up an additional webserver that serves a ConfigMap for Remote/Dynamic Configuration:

collabora:
   env:
      - name: remoteconfigurl
        value: https://dynconfig.public.example.com/config/config.json

dynamicConfig:
   enabled: true

   ingress:
      enabled: true
      annotations:
         "cert-manager.io/issuer": letsencrypt-zprod
      hosts:
         - host: "dynconfig.public.example.com"
      tls:
         - secretName: "collabora-online-dynconfig-tls"
           hosts:
              - "dynconfig.public.example.com"

   configuration:
      kind: "configuration"
      storage:
         wopi:
            alias_groups:
               groups:
                  - host: "https://domain1\\.xyz\\.abc\\.com/"
                    allow: true
                  - host: "https://domain2\\.pqr\\.def\\.com/"
                    allow: true
                    aliases:
                       - "https://domain2\\.ghi\\.leno\\.de/"

Note

In the current state of COOL, the remoteconfigurl for Remote/Dynamic Configuration must use HTTPS.
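
To sanity-check the endpoint before pointing COOL at it, you can fetch the configuration the same way coolwsd will (hostname from the example above):

curl -sSf https://dynconfig.public.example.com/config/config.json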

Installing Custom Fonts

There are two primary methods for adding custom fonts to your Collabora Online deployment without building a custom Docker image.

  1. Remote Font Configuration: This method involves pointing Collabora Online to a remote server that hosts your font files. See the details in the Remote configuration chapter.

  2. PersistentVolumeClaim (PVC): This method uses a Kubernetes PersistentVolume to store the fonts, which are then mounted directly into the Collabora Online pods. More details in the PersistentVolumeClaim (PVC) section below.

This section focuses on the second method: using a PersistentVolumeClaim.

Method: Using a PersistentVolumeClaim (PVC)

This approach is ideal for managing fonts directly within your Kubernetes cluster. The process involves enabling a feature flag in your deployment configuration, which orchestrates the creation of a PVC and a temporary “font-loader” pod. You will copy your font files to this temporary pod, which saves them to the persistent volume. Finally, a restart of the main Collabora Online deployment will make the new fonts available.

Prerequisites

Before you begin, ensure you have the following:

  • kubectl command-line tool configured to access your cluster.

  • Your custom font files (e.g., .ttf, .otf) available on your local machine.

Step-by-Step Guide

Step 1: Enable Custom Fonts in Your Deployment

You need to enable the custom fonts feature in your Collabora Online configuration. If you are using a Helm chart, this is typically done in your values.yaml file.

Set deployment.customFonts.enabled to true within your deployment block.

Example my_values.yaml snippet:

my_values.yaml
deployment:
  customFonts:
    enabled: true
    # pvc:
    #   size: 1Gi # Optional: Adjust storage size if needed
    #   storageClassName: "" # Optional: Specify a storage class

Apply the configuration change to your cluster. For Helm, you would run:

helm upgrade --install <release-name> <chart-name> -f my_values.yaml -n <namespace>

Applying this change will trigger the creation of a new PVC and a temporary pod named <deployment-name>-custom-fonts. This pod’s purpose is to provide a temporary mount point for you to upload your fonts to the PVC.
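
You can wait for the font-loader pod to reach the Running state before copying; the placeholders match those used in the next step:

kubectl -n <namespace> get pod <deployment-name>-custom-fonts --watch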

Step 2: Copy Font Files to the PVC

Once the temporary pod is in the Running state, use the kubectl cp command to copy your local font files into it. The pod mounts the PVC at the /mnt/fonts directory.

kubectl cp <path-to-local-fonts-directory> <namespace>/<deployment-name>-custom-fonts:/mnt/fonts

Replace the placeholders:

  • <path-to-local-fonts-directory>: The path to the folder on your local machine containing your font files (e.g. path/to/custom_fonts/directory).

  • <namespace>: The Kubernetes namespace where Collabora Online is deployed (e.g. collabora).

  • <deployment-name>: The name of your Collabora Online deployment (e.g. collabora-online).
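
To confirm the upload, you can list the mounted PVC contents from inside the temporary pod (same placeholders as above):

kubectl -n <namespace> exec <deployment-name>-custom-fonts -- ls -l /mnt/fonts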

Step 3: Restart the Collabora Online Deployment

To make the main Collabora Online pods recognize the new fonts, you must perform a rolling restart of the deployment. This forces the pods to re-mount the PVC and rebuild their font cache.

kubectl -n <namespace> rollout restart deployment/<deployment-name>
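
You can follow the restart until all pods have been replaced:

kubectl -n <namespace> rollout status deployment/<deployment-name>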

Step 4: Cleanup and Verification

The temporary <deployment-name>-custom-fonts pod is designed for single use and will automatically terminate itself one hour after creation. You can also delete it manually if you wish.

To verify that the fonts were installed correctly:

  1. Open a document in your Collabora Online instance.

  2. Click the font selection dropdown menu in the toolbar.

  3. Your newly added fonts should now appear in the list.

Note

The font cache is built when the Collabora Online pods start. If you add more fonts later, you will need to repeat step 3.

Kubernetes Security Context for Restricted Environments

In Kubernetes environments with strict Pod Security Standards, CODE/COOL requires specific security configurations to maintain proper jail creation and isolation while adhering to security policies.

Running with Minimal Capabilities

CODE/COOL needs specific Linux capabilities for proper functionality. Configure the securityContext with only essential capabilities:

securityContext:
  allowPrivilegeEscalation: true
  privileged: false
  readOnlyRootFilesystem: false
  runAsNonRoot: true
  seccompProfile:
    type: "RuntimeDefault"
  capabilities:
    add:
      - "SYS_CHROOT"
      - "SYS_ADMIN"

Note

allowPrivilegeEscalation can’t be false with SYS_ADMIN capability.

Custom Seccomp Profile for Maximum Restriction

For environments requiring zero capabilities, use a custom seccomp profile to allow only necessary syscalls.

Enable installCOOLSeccompProfile; this creates a DaemonSet that downloads cool-seccomp-profile.json and installs the seccomp profile into each node's /var/lib/kubelet/seccomp directory:

installCOOLSeccompProfile: true
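
After applying this, you can confirm the profile-installer DaemonSet is running; its exact name is defined by the chart, so listing the whole namespace avoids guessing it:

kubectl -n collabora get daemonsets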

Update the securityContext to use cool-seccomp-profile.json:

securityContext:
  privileged: false
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  seccompProfile:
    type: "Localhost"
    localhostProfile: "cool-seccomp-profile.json"
  capabilities:
    drop: ["ALL"]

OpenShift Security Context for Restricted Environments

OpenShift enforces strict security policies through Security Context Constraints (SCCs). Collabora Online requires specific configurations based on your environment’s security requirements and access level.

Security Configuration Options

The following table helps you choose the appropriate security configuration:

Security Configuration Comparison

|------------------------|--------------|----------------|------------------------------------------------------------|
| Approach               | Admin Access | Security Level | Use Case                                                   |
|------------------------|--------------|----------------|------------------------------------------------------------|
| Custom Seccomp Profile | Required     | Maximum        | Production with all the syscalls Collabora Online requires |
| Restricted SCC         | Not Required | Medium         | Limited admin access environments                          |
| Privileged SCC         | Required     | Minimal        | Less secure                                                |
|------------------------|--------------|----------------|------------------------------------------------------------|

Option 1: Custom Seccomp Profile

Best for: Production environments requiring maximum security, with fine-grained syscall control that permits exactly the syscalls Collabora Online requires.

The custom SCC removes seccomp restrictions while maintaining all other security controls. Document isolation is handled through a custom seccomp profile that allows only the necessary syscalls. Each document is isolated with Linux user namespaces and chroot.

Step 1: Create Custom SCC

collabora-restricted-v2.yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  annotations:
    kubernetes.io/description: |
      collabora-restricted-v2 maintains restricted-v2 security posture
      while enabling custom seccomp profiles for Collabora Online.
      No capabilities are granted. Custom seccomp profile allows necessary syscalls.
  name: collabora-restricted-v2
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: []
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
groups: []
priority: 10
readOnlyRootFilesystem: false
requiredDropCapabilities:
  - ALL
runAsUser:
  type: MustRunAs
  uid: 1001
seLinuxContext:
  type: MustRunAs
seccompProfiles:
  - runtime/default
  - localhost/cool-seccomp-profile.json
supplementalGroups:
  type: RunAsAny
userNamespaceLevel: AllowHostLevel
users: []
volumes:
  - configMap
  - csi
  - downwardAPI
  - emptyDir
  - ephemeral
  - persistentVolumeClaim
  - projected
  - secret

Apply the SCC:

oc apply -f collabora-restricted-v2.yaml

Step 2: Configure Service Account Bindings

# Bind custom SCC to application service account
oc adm policy add-scc-to-user collabora-restricted-v2 -z collabora-online -n collabora

# Bind privileged SCC to DaemonSet service account for seccomp profile installation
oc adm policy add-scc-to-user privileged -z collabora-online-daemonset -n collabora
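
Once the pods restart, you can check which SCC OpenShift actually admitted them under by reading the openshift.io/scc annotation (the pod name is a placeholder):

oc -n collabora get pod <pod-name> -o jsonpath='{.metadata.annotations.openshift\.io/scc}'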

Step 3: Configure Helm Values

values.yaml
 serviceAccount:
   create: true

 daemonSetServiceAccount:
   create: true

 installCOOLSeccompProfile: true

 securityContext:
   allowPrivilegeEscalation: false
   privileged: false
   readOnlyRootFilesystem: false
   runAsNonRoot: true
   seccompProfile:
     type: "Localhost"
     localhostProfile: "cool-seccomp-profile.json"
   capabilities:
     drop: ["ALL"]

Option 2: Restricted SCC Approach

Best for: Environments without admin access or where SCC modification is not permitted.

Uses the restricted SCC with emptyDir volumes and disabled capabilities.

values.yaml
 securityContext:
   allowPrivilegeEscalation: false
   privileged: false
   readOnlyRootFilesystem: false
   runAsNonRoot: true
   seccompProfile:
     type: "RuntimeDefault"
   capabilities:
     drop: ["ALL"]

 collabora:
   extra_params: >
     --o:ssl.enable=false
     --o:security.capabilities=false
     --o:child_root_path=/tmp/coolwsd-child-roots
     --o:cache_files.path=/tmp/coolwsd-cache

 extraVolumeMounts:
   - name: coolwsd-child-roots
     mountPath: /tmp/coolwsd-child-roots
   - name: coolwsd-cache
     mountPath: /tmp/coolwsd-cache

 extraVolumes:
   - name: coolwsd-child-roots
     emptyDir: {}
   - name: coolwsd-cache
     emptyDir: {}

Warning

This approach disables Collabora’s document isolation security mechanisms. Without Linux user namespaces or chroot jails, documents from different users or sessions are not isolated at the process level.

Option 3: Privileged SCC

Best for: Environments where immediate functionality is required without security restrictions.

oc adm policy add-scc-to-user privileged -z default -n collabora

values.yaml
 serviceAccount:
   create: true

Warning

This approach provides full host access and should be carefully evaluated against your security requirements before implementation.

Select the option that best aligns with your security requirements, administrative access level, and operational constraints.

Useful commands to check what is happening

Where are the pods? Are they ready?

kubectl -n collabora get pod

Example output:

NAME                                READY   STATUS    RESTARTS   AGE
collabora-online-5fb4869564-dnzmk   1/1     Running   0          28h
collabora-online-5fb4869564-fb4cf   1/1     Running   0          28h
collabora-online-5fb4869564-wbrv2   1/1     Running   0          28h

On which outside host are the multiple coolwsd servers actually answering?

kubectl get ingress -n collabora

Example output:

|-----------|------------------|--------------------------|------------------------|-------|
| NAMESPACE |       NAME       |           HOSTS          |         ADDRESS        | PORTS |
|-----------|------------------|--------------------------|------------------------|-------|
| collabora | collabora-online |chart-example.local       |                        |  80   |
|-----------|------------------|--------------------------|------------------------|-------|

To uninstall the helm chart:

helm uninstall collabora-online -n collabora