Thanks Matt! I'll try this in ca. 6 h and will report back here. The super quick turnaround on this is much appreciated!
Edit: tried it, and SSL is looking good, but there are a few bugs.
Compressed steps to upgrade
- Central to latest origin/master
- central-frontend to latest origin/master (was at tag 1.2.2 but there are newer commits)
- central-backend to branch origin/db-connect
```
~/central$ git fetch --all && git pull
~/central/client$ git pull origin master
~/central/server$ git fetch --all && git checkout origin/db-connect
~/central$ docker-compose build && docker-compose stop && docker-compose up -d
~/central$ docker-compose logs -f --tail=200 service nginx
```

Resulting submodule versions:

```
+9d97de6397317dc4ce046d60cbb2ccaf4e967f40 client (v1.2.2-14-g9d97de63) (latest master)
+93732a496bc9daa40cdbb36247c908649a96f10a server (v1.2.1-4-g93732a4) (latest db-connect)
```
Config changes (verified through `git diff`, which shows only my changes) in files/service/config.json.template, with credentials replaced by all-caps placeholders:
```diff
- "host": "postgres",
- "user": "odk",
- "password": "odk",
- "database": "odk"
+ "host": "DBHOST",
+ "user": "DBUSER",
+ "password": "DBPASS",
+ "database": "DBNAME",
+ "ssl": true

- "host": "mail",
- "port": 25
+ "host": "mail-relay.lan.fyi",
+ "port": 587
```
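For reference, the two touched sections of my config.json.template end up looking roughly like this. This is a sketch with the structure abridged: the all-caps values are placeholders for our real credentials, and any nesting or key names beyond the diffed lines are assumptions from memory, not verified against the template.

```json
"database": {
  "host": "DBHOST",
  "user": "DBUSER",
  "password": "DBPASS",
  "database": "DBNAME",
  "ssl": true
},
"email": {
  "host": "mail-relay.lan.fyi",
  "port": 587
}
```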
Starting up looks good!
```
~/central$ docker-compose logs -f --tail=200 service nginx
Attaching to central_nginx_1, central_service_1
nginx_1 | writing a new nginx configuration file..
nginx_1 | starting nginx without certbot..
service_1 | wait-for-it.sh: waiting 15 seconds for postgres:5432
service_1 | wait-for-it.sh: postgres:5432 is available after 0 seconds
service_1 | generating local service configuration..
service_1 | running migrations..
service_1 | starting cron..
service_1 | using 4 worker(s) based on available memory (8153436)..
service_1 | starting server.
```
I can see that migrations (using knex) have run or at least didn't fail, and there's no trace of SSL errors in the logs.
(Note that postgres:5432 is the local DB I'm not using; I use the hosted DB with TLS/SSL.)
I was able to demote an admin via GUI and promote them again via the command line - no SSL issues in the logs. I believe this process used slonik.
```
docker-compose exec service odk-cmd --email EMAIL user-promote
```
Bugs and issues
I'm running the latest master for central and the frontend, and the latest origin/db-connect for the backend; I'm not sure whether these versions are supposed to work together. The following bugs are likely the same ones I'd get with the latest tags (1.2.1/1.2.2) through the standard install steps.
Authentication bug "login not persistent"
The site loads, but authentication is flaky: reloading the site requires me to log in again. This also happens in a private browser window, so it's not a stale cookie.
After some back and forth of logging in repeatedly, a site reload eventually seems to remember my login, for both a normal and a forced reload. This is puzzling, but not a blocker, as not many users access the server directly.
Authentication bug "export not permitted to Administrator"
Exporting to ZIP fails with a 403.1: "The authentication you provided does not have rights to perform that action.", although my account is an admin account.
This error also occurs on forms with no submissions (export without media).
However, download of smaller datasets to ZIP (or larger ones but without media files) seems to work fine in a private browser tab.
Bug "export stalls for more than 10 seconds causing timeout"
Exporting larger forms with attachments times out after 10 seconds: Central takes too long to start streaming, and our cache cuts the request off. This is likely a config issue on our end.
The server has 13 GB of free disk space, 8 GB of RAM (2 GB used), and 4 GB of swap; CPU usage is low. A hardware bottleneck is unlikely, but could there be a lag in the software somewhere?
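If the 10-second cut-off comes from our own reverse proxy in front of Central rather than from Central itself, raising the proxy timeouts there might help. A sketch, assuming a stock nginx reverse proxy; the upstream name is hypothetical and the 300 s value is an arbitrary guess, not a documented Central recommendation:

```nginx
location / {
    proxy_pass https://central.internal;  # hypothetical upstream, replace with the real one

    # Give slow-starting export streams more time before the proxy gives up.
    proxy_connect_timeout 60s;
    proxy_send_timeout    300s;
    proxy_read_timeout    300s;
}
```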
Bug "new form inaccessible for a few minutes"
Creating a new form works and lets me upload a form definition, but the form doesn't immediately show up in the form list.
Bug "form draft pages inaccessible for a few minutes"
Creating a new form draft immediately redirects me to the index page; the backend processes the event a few seconds later, and only much later does the draft become visible.
```
# I hit "create new draft"
172.30.1.9 - - [12/Aug/2021:02:45:04 +0000] "POST /v1/projects/2/forms/build_Field-observation-form1-4_1571300489/draft HTTP/1.1" 200 16 "https://odkc-uat.dbca.wa.gov.au/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"
nginx_1 | 172.30.1.10 - - [12/Aug/2021:02:45:05 +0000] "GET /v1/projects/2/forms/build_Field-observation-form1-4_1571300489/versions HTTP/1.1" 200 624 "https://odkc-uat.dbca.wa.gov.au/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"
service_1 | ::ffff:172.19.0.9 - - [12/Aug/2021:02:45:05 +0000] "GET /v1/projects/2/forms/build_Field-observation-form1-4_1571300489/versions HTTP/1.0" 200 624
# I get bumped to the index page
# a few seconds later this happens
service_1 | [2021-08-12T02:45:11.852Z] start processing event form.update.draft.set::2021-08-12T02:45:04.884Z::95e60ac8-991c-4f97-aecd-4650953602b2 (1 jobs)
service_1 | [2021-08-12T02:45:11.915Z] finish processing event form.update.draft.set::2021-08-12T02:45:04.884Z::95e60ac8-991c-4f97-aecd-4650953602b2
# Minutes later, I can visit the form overview and see the new draft. "Upload new form def" works.
```
Seems fixed - performance issues
Exporting submissions and media via ruODK used to be super slow, but now seems to work at normal speeds.
@Matthew_White am I running the right versions together?
What would you like me to do with the bugs I found?