I previously posted some queries about an ODK installation with a huge amount of data that is making it impossible for the server to handle @LN @yanokwa
I recently launched a different project with a proper form design (image resolution under 1024 px, optimized question logic, at most 5 images per form, a server with ample memory and CPU, etc.). However, it again looks like it will crash due to the data load. I have roughly 140 data points (field variables) in the form, excluding the overhead of ODK's internal variables. So far I have around 45,000 form submissions, meaning more than 6 million data points already, and each submission carries between 3 and 5 images. I am running Aggregate 2.0.5 (latest), Tomcat 8, and MySQL 5.7 on an AWS server with 8 GB RAM and a 4-core CPU (which I can upgrade, since it is cloud hosted). So far, the deployment has consumed around 100 GB of disk space. The server is not being overloaded by simultaneous front-end users (that is what happened last time). Instead, Tomcat has started to crash while loading Aggregate, randomly for now, but I fear it will become frequent as the data keeps growing.
I need to know what my options are moving forward:
- Are there any limits or benchmarks for when Aggregate starts to face issues due to data volume? I have expressed my load in data points (6 million) in case that helps with any calculation.
- How far can Tomcat stretch when loading Aggregate (perhaps this question belongs on a Tomcat forum, but I am asking anyway)?
- Could this become a MySQL issue too? I read somewhere that it starts to slow down near the 10-million-row mark.
- What should I do going forward? Should I migrate to Central? Will it handle this much load?
Thanks for sharing your expert opinions.
Are you doing exports from Aggregate? If so, I expect that is what causes the problems, and I would encourage you to stop doing them. You can instead pull with Briefcase, making sure to pull only new submissions.
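For reference, a nightly Briefcase CLI pull might look like the sketch below. The jar name, server URL, credentials, form ID, and storage path are all placeholders, and you should check the flags against the Briefcase CLI documentation for your version before relying on this.

```shell
# Pull only new submissions from Aggregate into a local storage directory.
# All values below are placeholders for your own deployment.
java -jar ODK-Briefcase-v1.18.0.jar \
  --pull_aggregate \
  --aggregate_url https://your-aggregate-server.example.com \
  --odk_username your_user \
  --odk_password your_password \
  --form_id your_form_id \
  --storage_directory /data/briefcase \
  --start_from_last   # resume from the last pulled submission instead of re-pulling everything
```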
We have stress tested Central with millions of submissions and confirmed that it remains performant. We have stress tested with media at a smaller scale and confirmed performance, but we have not done tests with tens of thousands of images. I would expect exports to take a while, but they should not affect the rest of Central's functions. At that scale, I would encourage using the OData feed for analysis instead of relying on exports. Accessing images on PowerBI - #2 by LN has more on pulling images into PowerBI/Excel using the Central API.
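To illustrate, here is a minimal sketch of querying Central's OData Submissions feed from Python. The endpoint shape (`/v1/projects/{projectId}/forms/{xmlFormId}.svc/Submissions`) follows the Central API documentation; the server URL, project ID, form ID, and token are placeholders you would substitute with your own.

```python
import json
import urllib.request
from urllib.parse import quote, urlencode

def odata_submissions_url(base_url, project_id, form_id, top=None, skip=None):
    """Build the OData Submissions feed URL for an ODK Central form."""
    url = "{}/v1/projects/{}/forms/{}.svc/Submissions".format(
        base_url.rstrip("/"), project_id, quote(form_id))
    params = {}
    if top is not None:
        params["$top"] = top    # page size
    if skip is not None:
        params["$skip"] = skip  # offset, for paging through large datasets
    if params:
        url += "?" + urlencode(params)
    return url

def fetch_submissions(base_url, project_id, form_id, token, top=500, skip=0):
    """Fetch one page of submissions as parsed JSON (needs a valid session token)."""
    req = urllib.request.Request(
        odata_submissions_url(base_url, project_id, form_id, top=top, skip=skip),
        headers={"Authorization": "Bearer " + token})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Placeholder deployment details; no network call is made here.
    print(odata_submissions_url("https://central.example.com", 1, "my_form", top=500))
```

Paging with `$top`/`$skip` keeps each request small, which matters far more at 45,000+ submissions than it would for a pilot form.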
Well, I am not doing the exports during daytime hours, so that is not the reason. I run exports at night, which takes around 20-30 minutes (today's dump was an 85 MB CSV file covering 47,000 submissions), and I make sure nothing is left pending into the next day (it finishes within the nighttime window).
Unfortunately, Briefcase cannot keep up with the rate of incoming submissions. I am getting around 6,000 new form submissions every day, and Briefcase takes about 5 to 6 hours to complete the pull (of newer data only, of course).
I also pull the data into a custom dashboard at night, during completely off-peak hours, which takes around 40 minutes or so.
Good to know that Central has been battle-tested.
My current concern is how long I have before Aggregate, Tomcat, or MySQL gives up, if someone can make a calculated estimate. Moving forward, of course, I will move to Central, which seems to be the only way out.
@Saad These are architectural questions that go beyond the help I'm able to provide on the forum. At a high level, if you are on AWS and using Aggregate, I'd run multiple Tomcat servers on EC2 that connect to a large RDS database, with a read-only replica for reporting via SQL queries.
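As one possible sketch of that setup: each Tomcat instance keeps its normal JDBC datasource pointed at the primary RDS endpoint, while reporting jobs use a second datasource pointed at the replica endpoint. The resource names, endpoints, and credentials below are placeholder examples for a standard Tomcat JNDI datasource, not Aggregate's actual configuration.

```xml
<!-- In each Tomcat's conf/context.xml; names and endpoints are placeholders. -->
<!-- Primary endpoint: all application reads/writes go here. -->
<Resource name="jdbc/odk_primary" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://odk-primary.xxxxx.rds.amazonaws.com:3306/odk_prod"
          username="odk_user" password="..." maxTotal="50"/>

<!-- Read-only replica endpoint: reporting/SQL queries only, so heavy
     analytical reads never compete with submission writes. -->
<Resource name="jdbc/odk_replica" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://odk-replica.xxxxx.rds.amazonaws.com:3306/odk_prod"
          username="report_user" password="..." maxTotal="20"/>
```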