Collabora Online for Kubernetes
In order for collaborative editing and copy/paste to function correctly on Kubernetes, it is vital to ensure that all users editing the same document, and all clipboard requests, end up being served by the same pod. With the WOPI protocol, the HTTPS URL includes a unique identifier (WOPISrc) for the document. Load balancing can therefore be done on WOPISrc, ensuring that all URLs containing the same WOPISrc are sent to the same pod.
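To illustrate what the ingress controllers below hash on, this sketch extracts the WOPISrc parameter from two hypothetical request URLs for the same document (the URL paths and the `Tag` parameter are illustrative, not taken from this guide):

```shell
# Two requests for the same document: an edit session and a clipboard
# request. The WOPISrc query parameter identifies the document, so both
# must hash to the same backend pod.
URL_EDIT="https://collabora.example.com/cool/?WOPISrc=https%3A%2F%2Fwopi.example.com%2Fwopi%2Ffiles%2F42"
URL_CLIP="https://collabora.example.com/cool/clipboard?WOPISrc=https%3A%2F%2Fwopi.example.com%2Fwopi%2Ffiles%2F42&Tag=abc"

# Extract the WOPISrc value: this is the load-balancing key.
KEY_EDIT=$(printf '%s' "$URL_EDIT" | sed -n 's/.*[?&]WOPISrc=\([^&]*\).*/\1/p')
KEY_CLIP=$(printf '%s' "$URL_CLIP" | sed -n 's/.*[?&]WOPISrc=\([^&]*\).*/\1/p')

# Both requests yield the same key, so a consistent hash on WOPISrc
# routes them to the same pod.
echo "$KEY_EDIT"
```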
Deploying Collabora Online in Kubernetes
Install helm
Setting up Kubernetes Ingress Controller
Nginx:
Install Nginx Ingress Controller
HAProxy:
Install HAProxy Ingress Controller
Note

OpenShift uses a minimized version of HAProxy called Router that does not support all HAProxy functionality, but COOL needs advanced annotations. It is therefore recommended to deploy the HAProxy Kubernetes Ingress in the `collabora` namespace.

Create a `my_values.yaml` for the helm chart (if your setup differs, take a look at the default `values.yaml` in `./collabora-online/values.yaml`).

HAProxy:
```yaml
replicaCount: 3

ingress:
  enabled: true
  className: "haproxy"
  annotations:
    haproxy.org/timeout-tunnel: "3600s"
    haproxy.org/backend-config-snippet: |
      balance url_param WOPISrc check_post
      hash-type consistent
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific

image:
  tag: "latest"

autoscaling:
  enabled: false

collabora:
  aliasgroups:
    - host: "https://example.integrator.com:443"
  extra_params: --o:ssl.enable=false --o:ssl.termination=true
  # For a production environment we recommend appending `extra_params`
  # with `--o:num_prespawn_children=4`. It defines the number of child
  # processes to keep started in advance, waiting for new clients.

resources:
  limits:
    cpu: "1800m"
    memory: "2000Mi"
  requests:
    cpu: "1800m"
    memory: "2000Mi"

# For a production environment we recommend the following resource values:
# resources:
#   limits:
#     cpu: "8000m"
#     memory: "8000Mi"
#   requests:
#     cpu: "4000m"
#     memory: "6000Mi"
```
Nginx:
```yaml
replicaCount: 3

ingress:
  enabled: true
  className: "nginx"
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_WOPISrc"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific

image:
  tag: "latest"

autoscaling:
  enabled: false

collabora:
  aliasgroups:
    - host: "https://example.integrator.com:443"
  extra_params: --o:ssl.enable=false --o:ssl.termination=true
  # For a production environment we recommend appending `extra_params`
  # with `--o:num_prespawn_children=4`. It defines the number of child
  # processes to keep started in advance, waiting for new clients.

resources:
  limits:
    cpu: "1800m"
    memory: "2000Mi"
  requests:
    cpu: "1800m"
    memory: "2000Mi"

# For a production environment we recommend the following resource values:
# resources:
#   limits:
#     cpu: "8000m"
#     memory: "8000Mi"
#   requests:
#     cpu: "4000m"
#     memory: "6000Mi"
```
Note

Horizontal Pod Autoscaling (HPA) is disabled for now, because scaling breaks collaborative editing and copy/paste. Therefore please set `replicaCount` as per your needs.

If you have multiple hosts and aliases, set up aliasgroups in `my_values.yaml`:
```yaml
collabora:
  aliasgroups:
    - host: "<protocol>://<host-name>:<port>"
      # if there are no aliases you can ignore the below line
      aliases: ["<protocol>://<its-first-alias>:<port>", "<protocol>://<its-second-alias>:<port>"]
    # more hosts and alias lists are possible
```
Specify `server_name` when the hostname is not directly reachable, for example behind a reverse proxy:
```yaml
collabora:
  server_name: <hostname>:<port>
```
For a production environment we recommend the following resource values. We also recommend appending `extra_params` with `--o:num_prespawn_children=4`; it defines the number of child processes to keep started in advance, waiting for new clients.
```yaml
resources:
  limits:
    cpu: "8000m"
    memory: "8000Mi"
  requests:
    cpu: "4000m"
    memory: "6000Mi"
```
On OpenShift, it is recommended to use the HAProxy deployment instead of the default Router. Add `className` in the ingress block so that OpenShift uses the HAProxy Ingress Controller instead of Router:
```yaml
ingress:
  className: "haproxy"
```
Install the helm chart using the commands below; it should deploy Collabora Online:
```shell
helm repo add collabora https://collaboraonline.github.io/online/
helm install --create-namespace --namespace collabora collabora-online collabora/collabora-online -f my_values.yaml
```
Follow the next steps only if you are using the `NodePort` service type in HAProxy and/or using minikube for your setup; otherwise skip them.

Each container port is mapped to a `NodePort` port via the `Service` object. To find those ports:

```shell
kubectl get svc --namespace=haproxy-controller
```
Example output:
```
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
haproxy-ingress   NodePort   10.108.214.98   <none>        80:30536/TCP,443:31821/TCP,1024:30480/TCP
```
In this instance, the following ports were mapped:

- Container port `80` to NodePort `30536`
- Container port `443` to NodePort `31821`
- Container port `1024` to NodePort `30480`
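These mappings can also be recovered programmatically. The sketch below parses the example `kubectl get svc` output shown above; in a real cluster you would pipe the live `kubectl get svc --namespace=haproxy-controller` output in instead:

```shell
# Example output captured from `kubectl get svc --namespace=haproxy-controller`
# (substitute a live kubectl call in a real cluster).
SVC_OUTPUT='NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
haproxy-ingress   NodePort   10.108.214.98   <none>        80:30536/TCP,443:31821/TCP,1024:30480/TCP'

# Pull the PORT(S) column of the haproxy-ingress row.
PORTS=$(printf '%s\n' "$SVC_OUTPUT" | awk '/^haproxy-ingress/ {print $5}')

# Look up the NodePort mapped to container port 443
# (each entry has the form <port>:<nodePort>/<protocol>).
NODEPORT_443=$(printf '%s' "$PORTS" | tr ',' '\n' | awk -F'[:/]' '$1 == 443 {print $2}')

echo "$NODEPORT_443"
```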
Additional steps if deploying on minikube for testing:

Get the minikube IP:

```shell
minikube ip
```

Example output:

```
192.168.0.106
```

Add the hostname to `/etc/hosts`:

```
192.168.0.106 chart-example.local
```
To check if everything is set up correctly you can run:
```shell
curl -I -H 'Host: chart-example.local' 'http://192.168.0.106:30536/'
```
It should return output similar to the following:
```
HTTP/1.1 200 OK
last-modified: Tue, 18 May 2021 10:46:29
user-agent: COOLWSD WOPI Agent 6.4.8
content-length: 2
content-type: text/plain
```
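If you want to script this check, the status code can be asserted directly. This sketch parses the example response headers shown above rather than calling the live endpoint; in practice you would capture them with `curl -sI` against your own host:

```shell
# Example response headers from the curl check above.
RESPONSE='HTTP/1.1 200 OK
last-modified: Tue, 18 May 2021 10:46:29
user-agent: COOLWSD WOPI Agent 6.4.8
content-length: 2
content-type: text/plain'

# The status code is the second field of the first header line.
STATUS=$(printf '%s\n' "$RESPONSE" | head -n 1 | awk '{print $2}')

if [ "$STATUS" = "200" ]; then
  echo "coolwsd is reachable"
fi
```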
Kubernetes cluster monitoring
Install kube-prometheus-stack, a collection of Grafana dashboards and Prometheus rules, combined with documentation and scripts, to provide easy-to-operate end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
Enable the Prometheus service monitor, rules and Grafana dashboards in your `my_values.yaml`:

```yaml
prometheus:
  servicemonitor:
    enabled: true
    labels:
      release: "kube-prometheus-stack"
  rules:
    enabled: true # will deploy alert rules
    additionalLabels:
      release: "kube-prometheus-stack"
grafana:
  dashboards:
    enabled: true # will deploy default dashboards
```
Note
Use `kube-prometheus-stack` as the release name when installing the kube-prometheus-stack helm chart, because we have passed the `release=kube-prometheus-stack` label in our `my_values.yaml`. For Grafana dashboards you may need to enable scanning of the correct namespaces (or ALL), controlled by `sidecar.dashboards.searchNamespace` in the Grafana helm chart (which is part of the Prometheus Operator, so `grafana.sidecar.dashboards.searchNamespace`).
Dynamic/Remote configuration in kubernetes
For big setups, you may not want to restart every pod to modify WOPI hosts. It is therefore possible to set up an additional webserver that serves a ConfigMap for Remote/Dynamic Configuration:
```yaml
collabora:
  env:
    - name: remoteconfigurl
      value: https://dynconfig.public.example.com/config/config.json
  dynamicConfig:
    enabled: true
    ingress:
      enabled: true
      annotations:
        "cert-manager.io/issuer": letsencrypt-zprod
      hosts:
        - host: "dynconfig.public.example.com"
      tls:
        - secretName: "collabora-online-dynconfig-tls"
          hosts:
            - "dynconfig.public.example.com"
    configuration:
      kind: "configuration"
      storage:
        wopi:
          alias_groups:
            groups:
              - host: "https://domain1\\.xyz\\.abc\\.com/"
                allow: true
              - host: "https://domain2\\.pqr\\.def\\.com/"
                allow: true
                aliases:
                  - "https://domain2\\.ghi\\.leno\\.de/"
```

Note
In the current state of COOL, the `remoteconfigurl` for Remote/Dynamic Configuration must use HTTPS.
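For reference, the webserver behind `remoteconfigurl` is expected to serve a JSON document mirroring the `configuration:` block above. This sketch writes such a file and sanity-checks it; the hostnames are the same placeholder examples used in this guide, and the exact file layout on your webserver is up to you:

```shell
# Write the JSON equivalent of the `configuration:` block (the chart's
# dynamicConfig renders something of this shape for the webserver to serve).
cat > /tmp/config.json <<'EOF'
{
  "kind": "configuration",
  "storage": {
    "wopi": {
      "alias_groups": {
        "groups": [
          { "host": "https://domain1\\.xyz\\.abc\\.com/", "allow": true },
          {
            "host": "https://domain2\\.pqr\\.def\\.com/",
            "allow": true,
            "aliases": [ "https://domain2\\.ghi\\.leno\\.de/" ]
          }
        ]
      }
    }
  }
}
EOF

# Validate that the file parses as JSON before publishing it.
python3 -m json.tool /tmp/config.json > /dev/null && echo "valid JSON"
```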
Installing Custom Fonts
There are two primary methods for adding custom fonts to your Collabora Online deployment without building a custom Docker image.
1. Remote Font Configuration: This method involves pointing Collabora Online to a remote server that hosts your font files. See the details in the Remote configuration chapter.
2. PersistentVolumeClaim (PVC): This method uses a Kubernetes PersistentVolume to store the fonts, which are then mounted directly into the Collabora Online pods. More details in PersistentVolume (PVC).
This section focuses on the second method: using a PersistentVolumeClaim.
Method: Using a PersistentVolumeClaim (PVC)
This approach is ideal for managing fonts directly within your Kubernetes cluster. The process involves enabling a feature flag in your deployment configuration, which orchestrates the creation of a PVC and a temporary “font-loader” pod. You will copy your font files to this temporary pod, which saves them to the persistent volume. Finally, a restart of the main Collabora Online deployment will make the new fonts available.
Prerequisites
Before you begin, ensure you have the following:
- `kubectl` command-line tool configured to access your cluster.
- Your custom font files (e.g., `.ttf`, `.otf`) available on your local machine.
Step-by-Step Guide
Step 1: Enable Custom Fonts in Your Deployment
You need to enable the custom fonts feature in your Collabora Online configuration. If you are using a Helm chart, this is typically done in your `values.yaml` file.

Set `deployment.customFonts.enabled` to `true` within your deployment block.

Example `my_values.yaml` snippet:
```yaml
deployment:
  customFonts:
    enabled: true
    # pvc:
    #   size: 1Gi # Optional: adjust storage size if needed
    #   storageClassName: "" # Optional: specify a storage class
```
Apply the configuration change to your cluster. For Helm, you would run:
```shell
helm upgrade --install <release-name> <chart-name> -f my_values.yaml -n <namespace>
```
Applying this change will trigger the creation of a new PVC and a temporary pod named `<deployment-name>-custom-fonts`. This pod's purpose is to provide a temporary mount point for you to upload your fonts to the PVC.
Step 2: Copy Font Files to the PVC
Once the temporary pod is in the `Running` state, use the `kubectl cp` command to copy your local font files into it. The pod mounts the PVC at the `/mnt/fonts` directory.
```shell
kubectl cp <path-to-local-fonts-directory> <namespace>/<deployment-name>-custom-fonts:/mnt/fonts
```
Replace the placeholders:

- `<path-to-local-fonts-directory>`: The path to the folder on your local machine containing your font files (e.g. path/to/custom_fonts/directory).
- `<namespace>`: The Kubernetes namespace where Collabora Online is deployed (e.g. collabora).
- `<deployment-name>`: The name of your Collabora Online deployment (e.g. collabora-online).
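Filled in with the example values from this guide, the copy command would look as follows. This sketch only builds and prints the command string (it does not contact a cluster); adjust the variables to your own setup before running it:

```shell
# Example values used elsewhere in this guide (adjust for your setup):
NAMESPACE="collabora"            # namespace the chart is installed in
DEPLOYMENT="collabora-online"    # Collabora Online deployment name
FONT_DIR="path/to/custom_fonts/directory"

# Assemble the copy command; run it once the font-loader pod is Running.
CP_CMD="kubectl cp ${FONT_DIR} ${NAMESPACE}/${DEPLOYMENT}-custom-fonts:/mnt/fonts"
echo "$CP_CMD"
```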
Step 3: Restart the Collabora Online Deployment
To make the main Collabora Online pods recognize the new fonts, you must perform a rolling restart of the deployment. This forces the pods to re-mount the PVC and rebuild their font cache.
```shell
kubectl -n <namespace> rollout restart deployment/<deployment-name>
```
Step 4: Cleanup and Verification
The temporary `<deployment-name>-custom-fonts` pod is designed for single use and will automatically terminate one hour after creation. You can also delete it manually if you wish.
To verify that the fonts were installed correctly:
1. Open a document in your Collabora Online instance.
2. Click the font selection dropdown menu in the toolbar.
3. Your newly added fonts should now appear in the list.
Note
The font cache is built when the Collabora Online pods start. If you add more fonts later, you will need to repeat step 3.
Useful commands to check what is happening
Where are these pods? Are they ready?
```shell
kubectl -n collabora get pod
```
Example output:
```
NAME                                READY   STATUS    RESTARTS   AGE
collabora-online-5fb4869564-dnzmk   1/1     Running   0          28h
collabora-online-5fb4869564-fb4cf   1/1     Running   0          28h
collabora-online-5fb4869564-wbrv2   1/1     Running   0          28h
```
Which outside host are the multiple coolwsd servers actually answering?
```shell
kubectl get ingress -n collabora
```
Example output:
```
NAMESPACE   NAME               HOSTS                 ADDRESS   PORTS
collabora   collabora-online   chart-example.local             80
```
To uninstall the helm chart:
```shell
helm uninstall collabora-online -n collabora
```