We're working on setting up ODK Central on Azure/Kubernetes. It's possible, and the setup is specific to Kubernetes rather than to Azure.
The best approach would be a Helm chart that sets up ODK Central in Kubernetes the same way the docker-compose file orchestrates Central in a Docker swarm/stack. We simply reproduced the docker-compose orchestration on Kubernetes through the fantastic tool Rancher.
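No such chart exists yet as far as we know; purely as a sketch, a chart's values.yaml could expose the handful of settings we override below (everything in this snippet is hypothetical, not a real chart):

```yaml
# Hypothetical values.yaml for a future ODK Central chart -- not an existing chart.
domain: odkcentral.yourdomain.com
sysadminEmail: admin@yourdomain.com   # placeholder
images:
  nginx: dbcawa/odk_nginx
  service: dbcawa/odk_service
database:
  user: POSTGRES_USER
  password: POSTGRES_PASSWORD
  name: POSTGRES_DATABASE
  storage: 10Gi
smtp:
  host: OUR.SMTP.SERVER
  port: 25                            # placeholder
transferStorage: 100Gi
```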
In the cloud (Azure or otherwise):
- Install Rancher in two steps as shown under "get started" here. Our IT crew still get sparkly eyes when they talk about how easy it was to set up Rancher. They used OpenShift before (with less sparkle and more horror in their eyes).
On a local machine:
- Clone ODK Central and its submodules (we did this on Ubuntu 19.10).
- Build the images for nginx and service and publish them to Docker Hub; feel free to use our images linked here.
- Note that we did not update the settings (SSH, email, domain) before building; we override the defaults in the runtime config for the Docker images.
```
git clone https://github.com/opendatakit/central
cd central
git submodule update -i
docker build . -f nginx.dockerfile -t dbcawa/odk_nginx
docker build . -f service.dockerfile -t dbcawa/odk_service
docker push dbcawa/odk_nginx
docker push dbcawa/odk_service
```
In Rancher (Rancher terminology italicized, values bold):
- Create a namespace, e.g. odk.
- Create a config map (we named it odk-service) with:
  - Key default.json, value: (paste this).
  - Key local.json, value:
```
{
  "default": {
    "database": {
      "host": "postgres",
      "user": "POSTGRES_USER",
      "password": "POSTGRES_PASSWORD",
      "database": "POSTGRES_DATABASE"
    },
    "email": {
      "serviceAccount": "no-reply-odk@yourdomain.com",
      "transport": "smtp",
      "transportOpts": {
        "host": "OUR.SMTP.SERVER",
        "port": OUR.SMTP.PORT
      }
    },
    "env": {"domain": "odkcentral.yourdomain.com"},
    "external": {}
  }
}
```
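If you drive this with kubectl instead of the Rancher UI, the namespace and config map can be created from manifests along these lines (a sketch; the JSON bodies are the ones linked/shown above, not reproduced here):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: odk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: odk-service
  namespace: odk
data:
  default.json: |
    (paste the linked default.json here)
  local.json: |
    (paste the local.json shown above)
```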
- Create three workloads: postgres, nginx, and service.
- Workload postgres:
  - Docker image postgres:9.6
  - Environment variables (used in local.json):
    - POSTGRES_DATABASE
    - POSTGRES_PASSWORD
    - POSTGRES_USER
  - Volumes:
    - name postgres, persistent volume claim postgres, mount point /var/lib/postgresql/data, size 10 GB
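For reference, here is roughly what that workload looks like as raw Kubernetes manifests (a sketch of what Rancher generates for you; the Service makes the host name postgres from local.json resolvable in-cluster, and note the upstream postgres image expects POSTGRES_DB rather than POSTGRES_DATABASE, so adjust to the image you actually use):

```yaml
# Sketch only: the equivalent of the Rancher-created postgres workload.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres
  namespace: odk
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: odk
spec:
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          # In production, source these from a Secret rather than literals.
          env:
            - {name: POSTGRES_USER, value: "POSTGRES_USER"}
            - {name: POSTGRES_PASSWORD, value: "POSTGRES_PASSWORD"}
            - {name: POSTGRES_DB, value: "POSTGRES_DATABASE"}
          volumeMounts:
            - name: postgres
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres
          persistentVolumeClaim:
            claimName: postgres
---
# A Service so that "host": "postgres" in local.json resolves in-cluster.
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: odk
spec:
  selector: {app: postgres}
  ports:
    - port: 5432
      targetPort: 5432
```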
- Workload nginx:
  - Docker image dbcawa/odk_nginx (or use your own)
  - Environment variables (see .env):
    - DOMAIN (your custom domain, e.g. odkcentral.myserver.com)
    - SSL_TYPE (we used selfsign, as we sit behind a reverse proxy and have SSL certs there)
    - SYSADMIN_EMAIL
  - Port mapping: 80 and 443 > NodePorts chosen by Rancher from the currently available range (e.g. 31234). As far as we recall, we let Rancher choose free ports on the first deploy, then copied them in as the persistent ports to run on.
  - Scaling policy: rolling (start new, then stop old)
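The same workload as a manifest sketch (NodePort numbers and the email address are examples only; the rolling policy maps to a RollingUpdate strategy that starts the new pod before stopping the old one):

```yaml
# Sketch only: the nginx workload with pinned NodePorts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: odk
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate: {maxSurge: 1, maxUnavailable: 0}  # start new, then stop old
  selector:
    matchLabels: {app: odk-nginx}
  template:
    metadata:
      labels: {app: odk-nginx}
    spec:
      containers:
        - name: nginx
          image: dbcawa/odk_nginx
          env:
            - {name: DOMAIN, value: "odkcentral.yourdomain.com"}
            - {name: SSL_TYPE, value: "selfsign"}
            - {name: SYSADMIN_EMAIL, value: "admin@yourdomain.com"}
          ports:
            - containerPort: 80
            - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: odk-nginx
  namespace: odk
spec:
  type: NodePort
  selector: {app: odk-nginx}
  ports:
    - {name: http, port: 80, nodePort: 31234}    # example port
    - {name: https, port: 443, nodePort: 31235}  # example port
```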
- Workload service:
  - Docker image dbcawa/odk_service
  - Environment variables:
    - DOMAIN (same as DOMAIN in nginx: your custom domain). Is this a duplication of the domain in local.json?
  - Volumes:
    - volume type persistent volume claim, name service-transfer, persistent volume claim service-transfer, mount point /data/transfer, persistent, 100 GB
    - volume type config map, name odk-service, default mode 644, config map name odk-service, optional no, items all keys, mount point /usr/odk/config
  - Command ./wait-for-it.sh,postgres:5432,--,./start-odk.sh (Rancher's comma-separated entry form of ./wait-for-it.sh postgres:5432 -- ./start-odk.sh)
  - Scaling policy: rolling (start new, then stop old)
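And the manifest sketch for the service workload, with the config map mounted over /usr/odk/config and the wait-for-it command holding the backend until postgres is reachable:

```yaml
# Sketch only: the service workload.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: service-transfer
  namespace: odk
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 100Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service
  namespace: odk
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate: {maxSurge: 1, maxUnavailable: 0}  # start new, then stop old
  selector:
    matchLabels: {app: odk-service}
  template:
    metadata:
      labels: {app: odk-service}
    spec:
      containers:
        - name: service
          image: dbcawa/odk_service
          # Wait for postgres before starting Central's backend.
          command: ["./wait-for-it.sh", "postgres:5432", "--", "./start-odk.sh"]
          env:
            - {name: DOMAIN, value: "odkcentral.yourdomain.com"}
          volumeMounts:
            - name: service-transfer
              mountPath: /data/transfer
            - name: odk-service
              mountPath: /usr/odk/config
      volumes:
        - name: service-transfer
          persistentVolumeClaim:
            claimName: service-transfer
        - name: odk-service
          configMap:
            name: odk-service
            defaultMode: 420  # decimal for octal 644, as set in the Rancher UI
```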
The result is ODK Central's nginx running on an IP:PORT as shown in Rancher. You now have to reverse proxy that IP:PORT to a friendly host name (e.g. odkcentral.yourdomain.com).
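Alternatively, if your cluster runs an ingress controller, a Kubernetes Ingress can take the place of the external reverse proxy. A sketch, assuming the odk-nginx Service from the nginx sketch above and plain HTTP to the backend (TLS termination stays wherever your certs live):

```yaml
# Hypothetical Ingress replacing the external reverse proxy; requires an
# ingress controller and the odk-nginx Service sketched earlier.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: odk
  namespace: odk
spec:
  rules:
    - host: odkcentral.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: odk-nginx
                port:
                  number: 80
```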
So overall the setup seems to work, although we have a few gremlins between the SSL certs and our email server. Reported here.
However, we can create web users on the command line (in Rancher: workload service > running pod > shell):
```
odk-cmd --email example@opendatakit.org user-create
odk-cmd --email example@opendatakit.org user-promote
```
Hope that helps!