Installing ODK Central in the Microsoft Azure cloud

What is the problem? Please be detailed.
I'm researching ODK central for my company.
Are there instructions on how to install ODK Central in the Microsoft Azure cloud service?
Is it compatible with Docker for Azure (https://docs.docker.com/docker-for-azure/)?

What ODK tool and version are you using? And on what device and operating system version?
ODK Central, latest version

Hi @Yohan_Prasetia_Siswa,

We're working on setting up ODK Central on Azure/Kubernetes. It's possible, and the setup is specific to Kubernetes rather than to Azure.

The best approach would be a Helm chart that sets up ODK Central in Kubernetes the same way the docker-compose file orchestrates Central in a Docker swarm/stack. We simply reproduced the docker-compose orchestration on Kubernetes through the fantastic tool Rancher.

In the cloud (Azure or otherwise):

  • Install Rancher in two steps as shown under "get started" here. Our IT crew still have sparkly eyes when they talk about how easy it was to set up Rancher. They used OpenShift before (with less sparkle and more horror in their eyes).

On a local machine:

  • On a local machine (we did it on Ubuntu 19.10), clone ODK Central and its submodules.
  • Build the images for nginx and service and publish them to Docker Hub (feel free to use our images linked here).
  • Note that the settings (SSH, email, domain) are not updated in the images; we override the defaults in the runtime config for the Docker containers.
git clone https://github.com/opendatakit/central
cd central
git submodule update -i
docker build . -f nginx.dockerfile -t dbcawa/odk_nginx
docker build . -f service.dockerfile -t dbcawa/odk_service
docker push dbcawa/odk_nginx
docker push dbcawa/odk_service

In Rancher (Rancher terminology italicised, values bold):

  • Create a namespace, e.g. odk
  • Create a config map (we named it odk-service) with
    • Key default.json, value: paste this.

    • Key local.json, value:

{
  "default": {
    "database": {
      "host": "postgres",
      "user": "POSTGRES_USER",
      "password": "POSTGRES_PASSWORD",
      "database": "POSTGRES_DATABASE"
    },
    "email": {
      "serviceAccount": "no-reply-odk@yourdomain.com",
      "transport": "smtp",
      "transportOpts": {
        "host": "OUR.SMTP.SERVER",
        "port": OUR.SMTP.PORT
      }
    },
    "env": {"domain": "odkcentral.yourdomain.com"},
    "external": {}
  }
}
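The placeholder values above have to be replaced at deploy time. As a sketch (the variable names and example values here are our own, not part of ODK Central), local.json can be rendered from environment variables like so:

```shell
#!/bin/sh
# Sketch only: example values, substitute your own before use.
POSTGRES_USER=odk
POSTGRES_PASSWORD=secret
POSTGRES_DATABASE=odk
SMTP_HOST=smtp.example.com
SMTP_PORT=25
ODK_DOMAIN=odkcentral.example.com

# Render local.json with the placeholders filled in from the variables above.
cat > local.json <<EOF
{
  "default": {
    "database": {
      "host": "postgres",
      "user": "${POSTGRES_USER}",
      "password": "${POSTGRES_PASSWORD}",
      "database": "${POSTGRES_DATABASE}"
    },
    "email": {
      "serviceAccount": "no-reply-odk@${ODK_DOMAIN}",
      "transport": "smtp",
      "transportOpts": {
        "host": "${SMTP_HOST}",
        "port": ${SMTP_PORT}
      }
    },
    "env": {"domain": "${ODK_DOMAIN}"},
    "external": {}
  }
}
EOF
```

The rendered file is what goes into the config map under the local.json key.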
  • Create three workloads: postgres, nginx, and service.

  • Workload postgres

    • Docker image postgres:9.6
    • Environment variables (used in local.json):
      • POSTGRES_DATABASE
      • POSTGRES_PASSWORD
      • POSTGRES_USER
    • Volumes:
      • name postgres, persistent volume claim postgres,
        mount point /var/lib/postgresql/data
      • size 10 GB
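For readers not using Rancher: the postgres workload corresponds roughly to the following plain Kubernetes manifests. This is a hand-written sketch, not what Rancher generates verbatim; names, labels, and example credentials are assumptions.

```yaml
# Sketch of the postgres workload as plain Kubernetes manifests.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres
  namespace: odk
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: odk
spec:
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          env:  # variable names as used in local.json above
            - {name: POSTGRES_USER, value: odk}
            - {name: POSTGRES_PASSWORD, value: changeme}
            - {name: POSTGRES_DATABASE, value: odk}
          volumeMounts:
            - name: postgres
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres
          persistentVolumeClaim:
            claimName: postgres
---
# A ClusterIP Service so the service workload can reach the database
# by the host name "postgres" used in local.json.
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: odk
spec:
  selector: {app: postgres}
  ports: [{port: 5432}]
```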
  • Workload nginx

    • Environment variables (see .env):
      • DOMAIN (your custom domain, e.g. odkcentral.myserver.com)
      • SSL_TYPE (we used selfsign, as we reverse proxy and already have SSL certs)
      • SYSADMIN_EMAIL
    • Docker image dbcawa/odk_nginx (or use your own)
    • Port mapping: 80 and 443 are mapped to ports chosen by Rancher from the currently available range (e.g. 31234). We let Rancher choose the ports initially, then pinned them as the persistent ports to run on.
    • Scaling policy: rolling (start new, then stop old)
  • Workload service

    • Docker image dbcawa/odk_service
    • Environment variables:
      • DOMAIN (same as DOMAIN in nginx: your custom domain). Is this a duplication of the domain in local.json?
    • Volumes:
      • volume type persistent volume claim, name service-transfer, persistent volume claim service-transfer,
        mount point /data/transfer, persistent, 100GB
      • volume type config map, name odk-service, default mode 644,
        config map name odk-service, optional no, items all keys,
        mount point /usr/odk/config
    • Command ./wait-for-it.sh,postgres:5432,--,./start-odk.sh
    • Scaling policy: rolling (start new, then stop old)
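As with postgres, the service workload above can be sketched as a plain Kubernetes container spec. This is an assumption-laden fragment (Deployment boilerplate omitted, domain is a placeholder); note how the comma-separated Rancher command field becomes an argv list:

```yaml
# Sketch of the service container spec (Deployment boilerplate omitted).
containers:
  - name: service
    image: dbcawa/odk_service
    command: ["./wait-for-it.sh", "postgres:5432", "--", "./start-odk.sh"]
    env:
      - name: DOMAIN
        value: odkcentral.yourdomain.com
    volumeMounts:
      - name: service-transfer
        mountPath: /data/transfer
      - name: odk-service
        mountPath: /usr/odk/config
volumes:
  - name: service-transfer
    persistentVolumeClaim:
      claimName: service-transfer
  - name: odk-service
    configMap:
      name: odk-service
      defaultMode: 0644
```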

The result is ODK Central's nginx running on an IP:PORT as shown in Rancher. You now have to reverse proxy that IP:PORT to a friendly host name (e.g. odkcentral.yourdomain.com).
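The reverse proxy step might look like this in nginx (a sketch only: the upstream address, host name, and cert paths are placeholders for whatever Rancher and your infrastructure give you):

```nginx
# Example nginx reverse-proxy config (all values are placeholders).
server {
    listen 443 ssl;
    server_name odkcentral.yourdomain.com;

    ssl_certificate     /etc/ssl/certs/odkcentral.crt;
    ssl_certificate_key /etc/ssl/private/odkcentral.key;

    location / {
        # NODE_IP:NODEPORT as shown in Rancher, e.g. 10.0.0.5:31234.
        # Upstream speaks https with a self-signed cert (SSL_TYPE selfsign),
        # so verification is disabled for the proxied connection.
        proxy_pass https://10.0.0.5:31234;
        proxy_ssl_verify off;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```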

So overall the setup seems to work, although we have a few gremlins between the SSL certs and our email server. Reported here.
However, we can create web users on the command line (workload service > running pod > shell):

odk-cmd --email example@opendatakit.org user-create
odk-cmd --email example@opendatakit.org user-promote

Hope that helps!


To use an existing SMTP email relay, see https://github.com/opendatakit/central/issues/105

To use an external database requiring SSL, add "ssl": {"rejectUnauthorized": false} to the db config, see https://github.com/opendatakit/central/issues/96
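Concretely, the database block in local.json would then read something like this (host name is a placeholder, other placeholders as above):

```json
{
  "default": {
    "database": {
      "host": "your-external-db.example.com",
      "user": "POSTGRES_USER",
      "password": "POSTGRES_PASSWORD",
      "database": "POSTGRES_DATABASE",
      "ssl": {"rejectUnauthorized": false}
    }
  }
}
```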

The command should be ./wait-for-it.sh postgres:5432 -- ./start-odk.sh without commas.

One can also collate the two configs into a single default.json, which simplifies the k8s config map.


Current build process:

# First time: git clone git@github.com:opendatakit/central.git && cd central
git pull
git submodule update -i
docker build . -f service.dockerfile -t dbcawa/odk_service:VERSION -t dbcawa/odk_service:latest
docker build . -f nginx.dockerfile -t dbcawa/odk_nginx:VERSION -t dbcawa/odk_nginx:latest
docker push dbcawa/odk_service
docker push dbcawa/odk_nginx

VERSION: the ODK Central version (e.g. 0.7.0) plus our build number (e.g. .1)
