ODK Central with custom database with SSL

1. What is the problem? Be very detailed.
In short: I need the slonik version of adding "ssl": {"rejectUnauthorized": false} to the db config.

I'm installing ODK Central 1.2 for the first time. My department requires me to use a hosted database, which I access through the intranet. For reasons outside of my control, that access happens over SSL.

As per node-postgres docs on SSL, adding "ssl": {"rejectUnauthorized": false} to the database connection dict in files/service/config.json.template worked with knex, but doesn't seem to work with the new db driver slonik.

2. What app or server are you using and on what device and operating system? Include version numbers.
ODK Central master as of https://github.com/getodk/central/commit/e16b79530756bbd825f68275b3b6e53951409474

  • self-signed SSL certificate
  • hosted PG database

3. What have you tried to fix the problem?
What I haven't yet tried is using my own SSL certificates for the db; I'm waiting on our IT dept to provide those.

My build steps:

# modify config
vim ~/central/files/service/config.json.template
vim docker-compose.yml

# build and restart docker-compose
docker-compose build && docker-compose restart
# docker-compose build && docker-compose stop && docker-compose up -d

# inspect logs 
docker-compose logs -f service nginx

Attempt 1

No dice with the latest ODK Central master (23 July 2021) after appending ?ssl=true to the database name:

service_1             | (node:24) UnhandledPromiseRejectionWarning: error: database "DBNAME?ssl=true" does not exist

Attempt 2

Attempting ?ssl=1 as per https://github.com/gajus/slonik/issues/55#issuecomment-489398983

service_1             | (node:23) UnhandledPromiseRejectionWarning: error: database "DBNAME?ssl=1" does not exist

Attempt 3

Attempting what worked with knex before as per https://node-postgres.com/features/ssl

"database": "DBNAME",
"ssl": {"rejectUnauthorized": false}

Logs show no errors, however shell commands do:

docker-compose exec service odk-cmd --email EMAIL user-promote
ConnectionError: SSL connection is required. Please specify SSL options and retry.
    at Object.createConnection (/usr/odk/node_modules/slonik/dist/src/factories/createConnection.js:54:23)
    at processTicksAndRejections (internal/process/task_queues.js:85:5)

This indicates that "ssl": {"rejectUnauthorized": false} no longer works with slonik - that's a regression for my use case.

Attempt 4

https://github.com/gajus/slonik/issues/159 refers to https://github.com/brianc/node-postgres/blob/f0bf3cda7b05be77e84c067a231bbb9db7c96c39/CHANGELOG.md#pg810

Add &ssl=no-verify option to connection string and PGSSLMODE=no-verify environment variable support for the pure JS driver. This is equivalent of passing { ssl: { rejectUnauthorized: false } } to the client/pool constructor.

With ssl=no-verify appended to the dbname, no errors in logs, but errors on shell commands:

docker-compose exec service odk-cmd --email EMAIL user-promote
ConnectionError: SSL connection is required. Please specify SSL options and retry.
    at Object.createConnection (/usr/odk/node_modules/slonik/dist/src/factories/createConnection.js:54:23)
    at processTicksAndRejections (internal/process/task_queues.js:85:5)

Attempt 5

Using environment variables. I put PGSSLMODE=no-verify into docker-compose.yml as an environment variable for both service (I think that's the container that needs it) and nginx (for good measure).
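For reference, a sketch of how that looks in docker-compose.yml (the service names follow the stock Central file; the nginx entry is almost certainly unnecessary and included only for good measure):

```yaml
# Hypothetical excerpt of docker-compose.yml: PGSSLMODE passed to the
# containers as an environment variable.
services:
  service:
    environment:
      - PGSSLMODE=no-verify
  nginx:
    environment:
      - PGSSLMODE=no-verify
```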

Again, no errors in logs, but errors on shell commands.

docker-compose exec service odk-cmd --email EMAIL user-promote
ConnectionError: SSL connection is required. Please specify SSL options and retry.
    at Object.createConnection (/usr/odk/node_modules/slonik/dist/src/factories/createConnection.js:54:23)
    at processTicksAndRejections (internal/process/task_queues.js:85:5)

With the above, I'm fresh out of ideas. @issa @LN @Matthew_White any pointers would be highly welcome.

4. What steps can we take to reproduce the problem?
See above build steps.

5. Anything else we should know or have? If you have a test form or screenshots or logs, attach below.

Attempts 4/5 look like the likely path to take. Did you try ALSO specifying ssl=1? It looks to me like you're successfully passing on an SSL modifier but not actually enabling SSL. I was going quickly and thought the attribute name for enabling SSL was different from the one for modifying it, but they're the same.

Ah, shoot. Combining 4 and 5 also results in

docker-compose exec service odk-cmd --email EMAIL user-promote
ConnectionError: SSL connection is required. Please specify SSL options and retry.
    at Object.createConnection (/usr/odk/node_modules/slonik/dist/src/factories/createConnection.js:54:23)
    at processTicksAndRejections (internal/process/task_queues.js:85:5)

However, the frontend is coming up now - a win. That means the other plumbing is set up correctly (DNS et al).

But this error message on docker-compose stop / up -d indicates that appending parameters to the db name is definitely not working for me:

service_1             | wait-for-it.sh: waiting 15 seconds for postgres:5432
service_1             | wait-for-it.sh: postgres:5432 is available after 0 seconds
service_1             | generating local service configuration..
service_1             | running migrations..
service_1             | (node:24) UnhandledPromiseRejectionWarning: error: database "DBNAME?ssl=no-verify" does not exist
service_1             |     at Parser.parseErrorMessage (/usr/odk/node_modules/pg-protocol/dist/parser.js:278:15)
service_1             |     at Parser.handlePacket (/usr/odk/node_modules/pg-protocol/dist/parser.js:126:29)
service_1             |     at Parser.parse (/usr/odk/node_modules/pg-protocol/dist/parser.js:39:38)
service_1             |     at TLSSocket.<anonymous> (/usr/odk/node_modules/pg-protocol/dist/index.js:10:42)
service_1             |     at TLSSocket.emit (events.js:203:13)
service_1             |     at addChunk (_stream_readable.js:294:12)
service_1             |     at readableAddChunk (_stream_readable.js:275:11)
service_1             |     at TLSSocket.Readable.push (_stream_readable.js:210:10)
service_1             |     at TLSWrap.onStreamRead (internal/stream_base_commons.js:166:17)

My next avenue is to see whether I can procure a server or client certificate so that Central's connection to our hosted db is actually SSL verified.

Thanks for sharing your attempts, @Florian_May!

Just to keep things linked, I'll note that this relates to this GitHub issue and this forum topic.

And as background, one issue here is that Knex and Slonik connect to the database in very different ways. (We used Knex before v1.2 and now use Slonik.) Knex uses a configuration object that supports many options. On the other hand, Slonik uses a connection string. The Central database config continues to be an object, but internally, it is converted to a connection string before being passed to Slonik. That conversion will use only a subset of the options that Knex supported and will ignore the ssl option.
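A toy sketch of that kind of conversion (illustrative only, not Central's actual code): only a whitelist of keys survives, so the ssl option is silently dropped.

```javascript
// Hypothetical config-object-to-connection-string conversion. Note how
// only a subset of keys is used, so "ssl" never reaches the string.
const config = {
  host: 'DBHOST',
  user: 'DBUSER',
  password: 'DBPASS',
  database: 'DBNAME',
  ssl: { rejectUnauthorized: false } // ignored below
};

function toConnectionString({ host, port = 5432, user, password, database }) {
  // Only these five options survive the conversion.
  return `postgres://${user}:${password}@${host}:${port}/${database}`;
}

const url = toConnectionString(config);
console.log(url); // postgres://DBUSER:DBPASS@DBHOST:5432/DBNAME - no ssl anywhere
```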

That's probably part of the reason why attempt 3 doesn't work. (Though I'm not sure why the logs don't show errors.)

As for attempt 4, the pg package seems to support ssl=no-verify. However, Slonik doesn't seem to pass the connection string directly to pg. Instead, it seems to use the pg-connection-string package to parse the connection string, and pg-connection-string doesn't seem to support ssl=no-verify. One intriguing option that I haven't looked much into is that pg-connection-string seems to have a similar option to ssl=no-verify: sslmode=no-verify. And interestingly, in most cases where SSL options are specified, Slonik itself seems to specify false for rejectUnauthorized. Given that, I'm slightly surprised that ssl=true and ssl=1 don't work.
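As a quick illustration of the difference (using only Node's built-in URL parser, not pg-connection-string itself): a query parameter only does its job if the connection string is actually parsed as a URL. The earlier "database ... does not exist" errors suggest the whole suffix was instead treated as a literal database name.

```javascript
// When the string is parsed as a URL, the database name and the query
// parameter come apart cleanly.
const url = new URL('postgres://DBUSER:DBPASS@DBHOST:5432/DBNAME?sslmode=no-verify');

console.log(url.pathname);                     // "/DBNAME"
console.log(url.searchParams.get('sslmode'));  // "no-verify"
```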

I'm realizing that while we now mostly use Slonik, we actually do still use Knex for database migrations. I think we pass the database config directly to Knex without first converting it to a connection string, which is probably why appending parameters doesn't work in some cases. Actually, I'm now a little surprised that that approach did work in the other forum topic.

Out of curiosity, for attempt 5, did you specify the environment variable in addition to ssl=no-verify or instead of ssl=no-verify? What happens if you specify the environment variable without specifying a value for ssl?

Thanks for the thorough reply, Matt!

In attempt 4 and 5, I used only one option. In my follow-up, I used both.

I read my errors as meaning that appending parameters to the db name doesn't work at all.

My IT crew pointed me to a pem certificate at https://docs.microsoft.com/en-us/azure/postgresql/concepts-ssl-connection-security which I'll try next. Here's hoping that Slonik supports the config setting!

Yeah, I'm now unsure whether that approach is possible. In particular, I think the UnhandledPromiseRejectionWarning comes from this. That approach is definitely a workaround, so it'd be better anyway for us to find an alternative.

I'm guessing that Slonik does, but right now Central doesn't have a way to pass those options to Slonik.

The Central team will discuss further over the coming days. I'll post an update here once we know more. In the meantime, let us know what else you learn/try!

Solution: If using an external, hosted database, and that database requires TLS/SSL, you must provide a root certificate.
Reference for Postgres hosted on Azure: https://docs.microsoft.com/en-us/azure/postgresql/concepts-ssl-connection-security
ODK Central uses slonik, which in turn uses node-postgres. SSL options: https://node-postgres.com/features/ssl

It appears that ca is the correct option. The node-postgres and Azure docs are unclear about which option goes where; they are not as beginner-friendly as the ODK docs.
Migrations, which still use knex, meanwhile use the option rejectUnauthorized: false.

  1. Download the Microsoft PEM certificate linked from https://docs.microsoft.com/en-us/azure/postgresql/concepts-ssl-connection-security into my home directory on the server running Central (e.g. using wget).
cd ~
wget https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem
  2. vim ~/central/files/service/config.json.template (comments inserted for clarity, not in production config)
{
  "default": {
    "database": {
      "host": "DBHOST",
      "user": "DBUSER",
      "password": "DBPASS",
      "database": "DBNAME",
      "ssl": {
        "rejectUnauthorized": false, # this is for migrations using knex
        ca: fs.readFileSync('/home/USERNAME/DigiCertGlobalRootG2.crt.pem').toString() # this is for ops using slonik
      },
    },

... # other config, eg. custom email
Glad you have a solution! It sounds like everything is working? Where did you place this configuration?

@Matthew_White sorry, meant to reply sooner. I've edited the post marked as Solution with further details for clarity.

Thanks, @Florian_May!

It's interesting, I actually don't think the ca option is being passed to Slonik. We construct a connection string to pass to Slonik, and we don't use the ca option when constructing that connection string. The ca option would be passed to Knex though.

I'm also surprised that you're able to call fs.readFileSync(), since that would involve executing JavaScript rather than parsing static JSON values. We use the config package to parse the config file, and I believe that config parses .json files as static JSON. (config supports loading dynamic .js files, but we don't use that functionality.)
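To illustrate the point (a standalone check, not Central's code): static JSON parsing accepts literal values only, so a fs.readFileSync() call inside a .json file should fail to parse.

```javascript
// Static JSON accepts the literal form but rejects a JS expression.
const valid = '{"ssl": {"rejectUnauthorized": false}}';
const invalid = '{"ssl": {"ca": fs.readFileSync("/path/cert.pem").toString()}}';

console.log(JSON.parse(valid).ssl.rejectUnauthorized); // false

let failed = false;
try {
  JSON.parse(invalid);
} catch (e) {
  failed = true; // SyntaxError: a function call is not valid JSON
}
console.log(failed); // true
```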

In any case, this has been a useful discussion, and we have a clearer sense of the changes we'll need to make (probably for v1.4). Hopefully all this will be simpler before long!

I'm upgrading my prod server from 1.1 to latest (1.2) and am running into trouble.
Specifying the SSL cert as ca: fs.readFileSync('/home/USERNAME/DigiCertGlobalRootG2.crt.pem').toString() throws parser errors in the Central service. I've tried every permutation of trailing commas / quotes / comments / indents but haven't found a way to provide the SSL root certificate via the config. This is blocking my upgrade path to ODK Central versions using slonik.

I'm also surprised that you're able to call fs.readFileSync()

On my UAT server, this somehow worked.
On my production server, I'm getting parsing errors indicating that this is not valid JSON. The node-postgres docs on SSL do show the SSL parameters being set from JS. I agree this shouldn't have worked on my UAT server, but since I can still access my old DB there, I will absolutely not touch that server until I've rescued all the data from that old DB.

Would anyone be able to take a look at this? This post boils down to the question: "How to provide a root certificate for an external database which requires TLS/SSL via the ODK Central config".

Unfortunately, as I understand it so far, this isn't possible at the moment. I think we could find a way to pass the ca option through to Knex. But unlike Knex, Slonik doesn't directly accept node-postgres options and will only connect to the database via a connection string. We construct that connection string ourselves, but when we do so, we ignore all SSL options. That makes me think that a code change will be required in order to pass SSL options through to Slonik. I'll discuss what that would involve with the rest of the Central team and report back here.

However, one point of confusion I have is why you didn't have to specify ca when you were running v1.1. If your database required that option, wouldn't you have needed to specify that for Knex in v1.1?

When you were running v1.1, what did your database config look like? Did you specify anything under "ssl" other than "rejectUnauthorized": false? I think not, but just wanted to check.

Cheers! I'll report back here with versions and Configs in a few hrs. External db with TLS/SSL is a hard requirement for our record keeping, so it would be great if it were possible past v1.1.

It's not ready for production yet, but I'm working on a way to pass a subset of SSL options through to Slonik. I'm hoping that if we can just pass "rejectUnauthorized": false to both Slonik and Knex, that'll match what worked for you in v1.1 and will allow you to connect in v1.2.

Oh that's fantastic, thanks! That would help me greatly.

FYI here are my configs:

Server 1

Test server, works, but slow performance, and timeouts when exporting submissions and media to ZIP

versions:
e16b79530756bbd825f68275b3b6e53951409474
 70ec99f02885a06e37709db6319bfdf96fac84eb client (v1.2.2)
 59556ecea91f0a25678cd6da0084adb7e66ca099 server (v1.2.1)

Database: TLS/SSL enabled, Azure hosted Postgres, upgraded from ODK Central 0.6
Config (sensitive credentials replaced by allcaps):

{
  "default": {
    "database": {
      "host": "DBHOST",
      "user": "DBUSER",
      "password": "DBPASS",
      "database": "DBNAME",
      ssl: {
        rejectUnauthorized: false,
        ca: fs.readFileSync('/home/USER/DigiCertGlobalRootG2.crt.pem').toString(),
        # key: fs.readFileSync('/home/USER/DigiCertGlobalRootG2.crt.pem').toString(),
        # cert: fs.readFileSync('/home/USER/DigiCertGlobalRootG2.crt.pem').toString(),
      },
    },

Note the missing quotes, trailing commas, and use of JS functions inside a JSON config.
However, the connection to the db works.

Server 2

Prod server, drafts are broken from db migrations to v1.2 and back to 1.1, but db connects.
Versions:

versions:
86d3ebc27ca8e3feb1b6a107da7f8f4fbb21f3f1
 cddb691e40e84aabff87b9d427e22a50282d6f99 client (v1.1.2)
 a33bc6fb3c34fe38894b0e9d0bb404f81da325e6 server (v1.1.1)

Database: TLS/SSL enabled, Azure hosted Postgres, upgraded from ODK Central 0.6. Lives on same host as test db.
Config:

{
  "default": {
    "database": {
      "host": "DBHOST",
      "user": "DBUSER",
      "password": "DBPASS",
      "database": "DBNAME",
      "ssl": {
        "rejectUnauthorized": false
      }
    },

Note the absence of trailing commas, full quotation of keys, and absence of comments as well as JS functions. Any of these threw parser errors, crashing the service container.

Use cases

Quick write up for core team discussion. Feature request: support external DB with TLS/SSL enabled.

Our IT and records keeping policies mandate the use of our hosted DBs, which come with failsafes and maintained backups. Production data in docker containers is deemed too flimsy, and storing the submissions with media attachments would blow out our server size within a field season. The DBs have TLS enabled for additional security. It's great to have a stateless server on one side, and all state and growth on the other side in an external, secure DB!

We could accept skipping certificate verification via "rejectUnauthorized": false, but would prefer to use the readily available certificate. I've noticed a certificates folder in the build process - could it be an option to specify an internal path to a DB SSL certificate and copy the certificate file into the source prior to docker-compose build?

Thanks, @Florian_May! I have additional thoughts that I'll write up over the next couple of days, but I wanted to give you an update on the PR. We're double-checking some things related to the PR, but I think it's ready to try on your test server.

With the PR in place, you'll be able to specify true for ssl:

{
  "default": {
    "database": {
      "host": "DBHOST",
      "user": "DBUSER",
      "password": "DBPASS",
      "database": "DBNAME",
      "ssl": true
    },

Doing so will automatically specify false for rejectUnauthorized. The above configuration should work the same as how this configuration would have worked in v1.1:

{
  "default": {
    "database": {
      "host": "DBHOST",
      "user": "DBUSER",
      "password": "DBPASS",
      "database": "DBNAME",
      "ssl": {
        "rejectUnauthorized": false
      }
    },

We're taking this approach because it matches how Slonik works. Whenever SSL is specified, Slonik seems to specify false for rejectUnauthorized. I've made a note of that behavior on the Slonik repo (here) because it'd be nice to have the option to specify true. However, given that that's how configuring Slonik works today, I think we should match it.
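A sketch of the mapping described above (illustrative only, not the PR's actual code): "ssl": true in the Central config becomes { rejectUnauthorized: false } in the node-postgres SSL options.

```javascript
// Hypothetical mapping from the Central config's "ssl" value to
// node-postgres SSL options, matching Slonik's behavior.
function sslOptions(configSsl) {
  if (configSsl === true) return { rejectUnauthorized: false };
  return undefined; // no SSL requested
}

console.log(sslOptions(true));  // { rejectUnauthorized: false }
console.log(sslOptions(false)); // undefined
```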

To deploy the PR on your test server:

  • Log into the server and navigate to the project folder (cd central).
  • cd server
  • git fetch
  • git checkout origin/db-connect
  • cd ..
  • Follow the regular upgrade instructions, starting at docker-compose build.

Let us know if this works!

Thanks Matt! I'll try this in ca. 6h and will report back here. The super quick turnaround on this is much appreciated!

Edit: tried and looking good (SSL) but with a few bugs

Steps

Compressed steps to upgrade

  • Central to latest origin/master
  • central-frontend to latest origin/master (was at tag 1.2.2 but there are newer commits)
  • central-backend to branch origin/db-connect
~/central/$ git fetch --all && git pull
~/central/client$ git pull origin master
~/central/server$ git fetch --all && git checkout origin/db-connect
~/central/$ docker-compose build && docker-compose stop && docker-compose up -d
~/central/$ docker-compose logs -f --tail=200 service nginx

Versions:

versions:
e16b79530756bbd825f68275b3b6e53951409474(latest master)
+9d97de6397317dc4ce046d60cbb2ccaf4e967f40 client (v1.2.2-14-g9d97de63) (latest master)
+93732a496bc9daa40cdbb36247c908649a96f10a server (v1.2.1-4-g93732a4) (latest db-connect)

Config (verified through git diff showing only my changes):
files/service/config.json.template - credentials replaced with allcaps.

{
   "default": {
     "database": {
-      "host": "postgres",
-      "user": "odk",
-      "password": "odk",
-      "database": "odk"
+      "host": "DBHOST",
+      "user": "DBUSER",
+      "password": "DBPASS",
+      "database": "DBNAME",
+      "ssl": true
     },
     "email": {
       "serviceAccount": "no-reply@${DOMAIN}",
       "transport": "smtp",
       "transportOpts": {
-        "host": "mail",
-        "port": 25
+        "host": "mail-relay.lan.fyi",
+        "port": 587
       }
     },
     "xlsform": {

Logs

Starting up looks good!
~/central$ docker-compose logs -f --tail=200 service nginx

Attaching to central_nginx_1, central_service_1
nginx_1               | writing a new nginx configuration file..
nginx_1               | starting nginx without certbot..
service_1             | wait-for-it.sh: waiting 15 seconds for postgres:5432
service_1             | wait-for-it.sh: postgres:5432 is available after 0 seconds
service_1             | generating local service configuration..
service_1             | running migrations..
service_1             | starting cron..
service_1             | using 4 worker(s) based on available memory (8153436)..
service_1             | starting server.

I can see that migrations (using knex) have run or at least didn't fail, and there's no trace of SSL errors in the logs.
(Note that postgres:5432 is the local db, which I'm not using; I use the hosted DB with TLS/SSL.)

I was able to demote an admin via GUI and promote them again via the command line - no SSL issues in the logs. I believe this process used slonik.

docker-compose exec service odk-cmd --email EMAIL user-promote
'{"success":true}'

Bugs and issues

I'm running the latest master for central and the front-end, and the latest origin/db-connect for the backend. I'm not sure whether these versions are supposed to mesh together. The following bugs are likely the same as with the latest tag (1.2.1/1.2.2) I'd get through the standard install steps.

Authentication bug "login not persistent"

The site loads, but authentication is flaky - reloading the site requires me to log in again. This happens in a private browser window too, so it's not a stale cookie.
After some back and forth of logging in again and again, a site reload seems to remember my login, for both a reload and a forced reload. This is puzzling, but not a blocker, as not many users access the server directly.

Authentication bug "export not permitted to Administrator"

Exporting to ZIP fails on 403.1 "The authentication you provided does not have rights to perform that action." although my account is an admin account.
This error also occurs on forms with no submissions (export without media).
However, download of smaller datasets to ZIP (or larger ones but without media files) seems to work fine in a private browser tab.
Puzzling.

Bug "export stalls for more than 10 seconds causing timeout"

Exporting larger forms with attachments times out after 10 seconds - Central takes too long to start streaming, and our cache cuts the connection off. Likely a config issue on our end.
Server has 13G of free disk space, 8G of RAM (2G used), 4G of swap, CPU usage is low. Unlikely to be a hardware bottleneck, but could there be a lag in the software somewhere?

Bug "new form inaccessible for a few minutes"

Creating a new form works and lets me upload a form def, but the form doesn't immediately show up in the form list.

Bug "form draft pages inaccessible for a few minutes"

Creating a new form draft immediately redirects me to the index page, later processes the event, and much later shows the draft.

# I hit "create new draft"
 172.30.1.9 - - [12/Aug/2021:02:45:04 +0000] "POST /v1/projects/2/forms/build_Field-observation-form1-4_1571300489/draft HTTP/1.1" 200 16 "https://odkc-uat.dbca.wa.gov.au/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"
nginx_1               | 172.30.1.10 - - [12/Aug/2021:02:45:05 +0000] "GET /v1/projects/2/forms/build_Field-observation-form1-4_1571300489/versions HTTP/1.1" 200 624 "https://odkc-uat.dbca.wa.gov.au/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0"
service_1             | ::ffff:172.19.0.9 - - [12/Aug/2021:02:45:05 +0000] "GET /v1/projects/2/forms/build_Field-observation-form1-4_1571300489/versions HTTP/1.0" 200 624
# I get bumped to the index page

# a few seconds later this happens
service_1             | [2021-08-12T02:45:11.852Z] start processing event form.update.draft.set::2021-08-12T02:45:04.884Z::95e60ac8-991c-4f97-aecd-4650953602b2 (1 jobs)
service_1             | [2021-08-12T02:45:11.915Z] finish processing event form.update.draft.set::2021-08-12T02:45:04.884Z::95e60ac8-991c-4f97-aecd-4650953602b2

# Minutes later, I can visit the form overview and see the new draft. The "upload new form def" works.

Seems fixed - performance issues

Exporting submissions and media via ruODK was super slow, but seems to work at normals speeds now.

@Matthew_White am I running the right versions together?
What would you like me to do with the bugs I found?

Awesome, thanks for trying out the PR!

I actually recommend staying at v1.2.2. In general, commits on the master branch of central-frontend that are after a release tag probably haven't gone through QA and may or may not even match the master branch of central-backend. The db-connect branch of central-backend branches off the v1.2.1 tag, so the only way it differs from the latest release is the changes related to the database connection.

Excellent! It sounds like the server is able to connect to the database via both Slonik and Knex. :tada:

As part of v1.2, we changed how some things work around login and logout. One change is that it shouldn't be possible to log in twice in two different tabs. If you log in in one tab, then try to log in in a second tab, the second tab should show the message, "A user is already logged in. Please refresh the page to continue." It sounds like you didn't see that message, right? Could you have been using an older version of the frontend, from before v1.2? This shouldn't be needed, but you could also try clearing your browser cache.

Also, what happens if you use your browser to clear the cookies and site data for the frontend? You could also check that the browser isn't set to block cookies or site data for the frontend.

This issue doesn't feel related to the db-connect PR. It could be related to another part of v1.2. If it ends up not being resolved, I recommend creating a separate forum topic.

That does sound puzzling! This sounds like an issue with your cookie, since that's what's used to authenticate most file downloads. The cookie is removed whenever you log out, including when you log out in a different tab. Maybe something related to the first issue above led to the cookie being removed? If you end up creating a forum topic for the issue above, and this issue doesn't end up being resolved, that topic might be a good place to discuss this issue as well.

We've tried to make that part of the code high-performance, but there's also a fair amount of complexity there. 10 seconds seems pretty short for a timeout!

This one seems quite unusual to me! Some tasks are run after a form is created, and those tasks usually take a few seconds. However, even if those tasks take a while or even fail, the form should be immediately visible in the form list. Could something be caching the API response for the form list?

This feels related to the previous issue. The fact that you don't see the new draft for minutes and the fact that you're immediately redirected are probably related. It sounds like the API is telling the frontend that a draft doesn't exist. When you create a new draft, you're redirected to the Draft Status page. If the frontend tries to render that page, then is told that a draft doesn't actually exist (a contradiction), it will redirect to the index page.

The service log indicates that a task was started and finished for the draft soon after it was created. That should mean that the draft was successfully committed to the database. So I'm again thinking that there's some issue communicating that change to the frontend.

It's not clear that this is related to the db-connect PR, so I'd recommend creating a separate forum topic to discuss this issue and the previous issue together.

Awesome, this is starting to make sense.

I've deleted my cookies, and site reload now keeps me logged in, and allows me to export data.

I've seen the words "varnish cache" in the timeout message - this indicates that our IT have some kind of cache, which I will kindly ask them to disable, and to increase the timeout. I expect this could also fix the auth issues. I'll create new topics for any leftover issues.

Edit: our IT disabled the newly installed cache server for my ODK Central installs.
They advise setting the header cache-control: private on the software side (ODK Central) to enforce no caching. That seems like a sensible default for Central, seeing that caching produces unexpected behaviour. (Crusty Django SSR dev speaking here...)
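A hedged sketch of what that might look like at the nginx layer instead (this is not Central's stock config; the location and upstream are assumptions for illustration):

```nginx
# Hypothetical snippet: mark Central API responses as private so an
# intermediate cache (e.g. Varnish) will not store them.
location /v1/ {
    proxy_pass http://service:8383;
    add_header Cache-Control "private" always;
}
```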

Since the SSL issue seems fixed, could it be merged?

Hello all, just wanted to know: can we connect a single Central backend instance to two Central frontends? And where do we configure the API endpoints in the frontend?
Thanks in advance.