ODK Aggregate: Failure - will retry later while trying to export CSV file

Congrats!

I'm glad you worked through the (very-confusing) sequence of steps and were able to get your system up and running again.

(It's been years since I worked on the PAUSED mechanism -- I was clearly mistaken about how it functions. I'm glad that it automatically resumed.)

I'll complete my answer for future reference:

  • Publishing is restored to both the Fusion Table and the Google Spreadsheet (significantly slower in the spreadsheet), and both statuses now show "ACTIVE" (there is no button, Mitch).

  • Briefcase gives me a "SUCCESS" message after pulling the whole thing.

  • CSV generation works without any problem.

Really this is the best!

IT WORKED!!!!

thank you very much!!!!!

The publishing started automatically; I guess the 5-7 errors per second I was having were due to the attempts to publish.

I am extremely happy, not only for resolving this issue but for understanding it. Now I think I could do it again :)

Mitch you are the best!

You are very close. Note that in step (5), the instructions say:

  5. Now, for each data table name and XXXXX value in step (2), execute a query of the form: SELECT * FROM `opendatakit.YOUR_TABLE_NAME` WHERE _PARENT_AURI = "XXXXXX"

Take special note of the backquotes around the opendatakit.-qualified table name, and the double-quotes around the XXXXXX value. If the original query specified _DOM_AURI instead of _PARENT_AURI, use that. If this query does not return a value, you may also need to add the AND _TOP_LEVEL_AURI = "YYYYY" qualifier if it was present in the original error message (but that should not be necessary).

In the queries you tried, you missed this part of the instructions (replace MAPFRE_JUNIO2016_G2_FOTO1_BN in your select statements with `opendatakit.MAPFRE_JUNIO2016_G2_FOTO1_BN`).

These instructions are saying to select and show to you the entries in the data tables that correspond to a specific submission (the one with instanceID uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 ).

========================

If the publisher is paused, I believe the "PAUSED" is actually a button, and you can click on that to get it to resume.

That, of course, will only be successful if you fix the data corruption.

One last thing, hoping I can fix the main problem, since this is secondary: I tried publishing another spreadsheet (it didn't work) and another fusion table (just existing submissions, 1420, when I had already published 1476 in the original), but in the end I decided to delete them. The two original spreadsheet and fusion table publishers are both set to "BOTH Upload Existing & Stream New Submission Data", but now the status shows "PAUSED". How can I make them active again?

Ok, I think I'm making some progress toward figuring out what's happening. I see where you took the query you gave me from: Logs, App Engine, Background, All versions:

If I enlarge the first error I can see what was happening:

And I followed your instructions. But there is a second error, and it is different:

where it says:

org.opendatakit.aggregate.task.CsvWorkerImpl failureRecovery: Exception caught: org.opendatakit.common.datamodel.ODKEnumeratedElementException: Attachment errors:SELECT * FROM MAPFRE_JUNIO2016_G2_FOTO1_BN WHERE _TOP_LEVEL_AURI = uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 AND _PARENT_AURI = uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 is missing an attachment instance OR has extra copies. for MapfreJunio2016

(BTW, in MAPFRE_JUNIO2016_G2_FOTO1_BN, G2 stands for group1, since in this form there are several fields displayed on the same screen, and FOTO1 means "picture1", which is a required field... I don't know if that is relevant.)

Then I decided to query:

SELECT * FROM MAPFRE_JUNIO2016_G2_FOTO1_BN WHERE _TOP_LEVEL_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139" AND _PARENT_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139"

But in this case, instead of finding 2 entities, there is no match. Also, searching for this folder in the instances folder created by ODK Briefcase, I can see that instead of containing an XML file and a picture (or pictures), it is empty.

So I guess now I should find a way of deleting uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 in the database, since it is a void submission, but I don't know how to do this. How can I do this?

What about all the rest of the errors that I am finding in App Engine, Default Service that I posted before? Should I worry about them? (Because I do.)

Why is this happening? How can I prevent it from happening again? Could large pictures be a problem? (Right now the data pulled by Briefcase is 4.24 GB, which includes XML files and pictures. This is 65% of what I am expecting.)

Thank you very much in advance.

Ok, so I see there are two errors constantly happening:

one of them:

09:55:58.212  GET  500  960 B  5.9 s  AppEngine-Google; (+http://code.google.com/appengine)  /gae/uploadSubmissionsTask?fscUri=uuid%3A9672732c-1bc6-4dde-a463-d066b5f3edf3

0.1.0.2 - - [07/Jun/2016:09:55:58 +0200] "GET /gae/uploadSubmissionsTask?fscUri=uuid%3A9672732c-1bc6-4dde-a463-d066b5f3edf3 HTTP/1.1" 500 960 http://olmedo-servicios-integrales.appspot.com/gae/watchdog "AppEngine-Google; (+http://code.google.com/appengine)" "olmedo-servicios-integrales.appspot.com" ms=5901 cpu_ms=460 cpm_usd=3.1940599999999997e-7 loading_request=0 instance=00c61b117c0e9e3d18748ca60cc00676c90a86e4344dfcefb7184ce68bcd8de904 app_engine_release=1.9.38 trace_id=-

{

metadata: {…}

protoPayload: {…}

insertId: "2016-06-07|00:56:08.408633-07|10.94.34.134|-1592936713"

log: "appengine.googleapis.com/request_log"

httpRequest: {…}

operation: {…} }

09:56:01.221org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: Beginning servlet processing

09:56:01.227org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: Request is running on frontend

09:56:01.228org.opendatakit.aggregate.task.UploadSubmissionsWorkerImpl uploadAllSubmissions: Beginning UPLOAD service: uuid:8979e040-b496-49c1-b8de-f355d69d4c46 form MapfreJunio2016

09:56:01.228org.opendatakit.aggregate.form.FormFactory internalGetForms: FormCache: using cached list of Forms

09:56:02.449org.opendatakit.aggregate.form.FormFactory internalGetForms: FormCache: using cached list of Forms

09:56:02.478[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -73233127

09:56:02.478[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -13406599

09:56:02.478[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -52076838

09:56:02.478[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -57872843

09:56:02.478[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -78707824

And the other one:

09:56:03.646  GET  202  0 B  1.4 s  AppEngine-Google; (+http://code.google.com/appengine)  /gae/uploadSubmissionsTask?fscUri=uuid%3A7ad28143-2a7a-4624-8807-215f05fd0102

0.1.0.2 - - [07/Jun/2016:09:56:03 +0200] "GET /gae/uploadSubmissionsTask?fscUri=uuid%3A7ad28143-2a7a-4624-8807-215f05fd0102 HTTP/1.1" 202 - http://olmedo-servicios-integrales.appspot.com/gae/watchdog "AppEngine-Google; (+http://code.google.com/appengine)" "olmedo-servicios-integrales.appspot.com" ms=1400 cpu_ms=120 cpm_usd=1.39586e-7 loading_request=0 instance=00c61b117c0e9e3d18748ca60cc00676c90a86e4344dfcefb7184ce68bcd8de904 app_engine_release=1.9.38 trace_id=-

{

metadata: {…}

protoPayload: {…}

insertId: "2016-06-07|00:56:08.408751-07|10.94.34.134|-1592936713"

log: "appengine.googleapis.com/request_log"

httpRequest: {…}

operation: {…} }

09:56:03.652org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: Beginning servlet processing

09:56:03.658org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: Request is running on frontend

09:56:03.658org.opendatakit.aggregate.task.UploadSubmissionsWorkerImpl uploadAllSubmissions: Beginning UPLOAD service: uuid:28dbf50b-2fcf-4424-800f-9c90eb823a09 form MapfreJunio2016

09:56:03.658org.opendatakit.aggregate.form.FormFactory internalGetForms: FormCache: using cached list of Forms

09:56:03.856org.opendatakit.aggregate.form.FormFactory internalGetForms: FormCache: using cached list of Forms

09:56:03.881[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -14278749

09:56:03.881[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -17503047

09:56:03.881[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -34874322

09:56:03.881[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -50316469

09:56:03.881[s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -18963331

So I tried to run your query, but changing the uuid, to see if I could find two entries like the first time, but it says no entities matched. I checked and the publishing is still not working, but when I pull with Briefcase I can see it is downloading more instances, so I guess the system keeps receiving forms...

I've done exactly what you told me; now I'm pulling with Briefcase to see if it works, but it ends with FAILED.

The problem is that I'm struggling to understand what I am doing, so the steps for Repairing a Filled-in Form Submission are not straightforward for me. I'll keep trying...

I'm going to try, Mitch; for me this is dancing on thin ice... let's see...!

Thanks!

Have you worked through the database repair steps described here:

https://github.com/opendatakit/opendatakit/wiki/Aggregate-AppEngine-Troubleshooting#repairing-a-filled-in-form-submission

I.e.,

org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: org.opendatakit.aggregate.exception.ODKExternalServiceException: org.opendatakit.common.datamodel.ODKEnumeratedElementException: SELECT * FROM MAPFRE_JUNIO2016_G3_REPORTE WHERE _TOP_LEVEL_AURI = uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 AND _PARENT_AURI = uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 multiple-choice valueSet is missing an entry OR has an extra copy of one

and then, on the datastore tab,

entering in the query box:

SELECT * FROM `opendatakit.MAPFRE_JUNIO2016_G3_REPORTE` WHERE _TOP_LEVEL_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139" AND _PARENT_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139"

you'll find 2 entries with matching _ORDINAL_NUMBER values (1).

Per the instructions (steps 6 & 7), delete the older one of these.

Well, now in another project I'm having the very same issue. Worst timing ever. In the middle of a big running project :S

There are no incomplete submissions according to ODK.

I was publishing to a Google Spreadsheet and a Fusion Table (running on Google App Engine with billing enabled).

I'm looking at the log and I'm having an average of 4 to 7 errors per minute, basically like this:

18:14:55.688  GET  500  956 B  550 ms  AppEngine-Google; (+http://code.google.com/appengine)  /gae/uploadSubmissionsTask?fscUri=uuid%3A9672732c-1bc6-4dde-a463-d066b5f3edf3

0.1.0.2 - - [06/Jun/2016:18:14:55 +0200] "GET /gae/uploadSubmissionsTask?fscUri=uuid%3A9672732c-1bc6-4dde-a463-d066b5f3edf3 HTTP/1.1" 500 956 http://olmedo-servicios-integrales.appspot.com/gae/watchdog "AppEngine-Google; (+http://code.google.com/appengine)" "olmedo-servicios-integrales.appspot.com" ms=550 cpu_ms=231 cpm_usd=2.5548e-7 loading_request=0 instance=00c61b117c327d751511c82e8ba427e6081789412cae8a6b2a33f690566e8b31 app_engine_release=1.9.38 trace_id=-

{

metadata: {…}

protoPayload: {…}

insertId: "2016-06-06|09:15:00.884913-07|10.106.162.86|1409825575"

log: "appengine.googleapis.com/request_log"

httpRequest: {…}

operation: {…} }

Searching for OutOfMemory in the log, I don't find anything.

Right now I just pulled all the information with Briefcase, and as far as I can see I have more than what I had in my published tables.

My biggest concern is that I could lose submissions :S.

Any help is very appreciated.

Regards,

This would indicate a data corruption issue.

See these instructions: https://github.com/opendatakit/opendatakit/wiki/Aggregate-AppEngine-Troubleshooting#repairing-a-filled-in-form-submission

W.r.t. ODK Briefcase starting to pull data from the beginning every time: if, after you pull data, you issue a push back up to the same server, this will set tracking flags so that those records are ignored and skipped over the next time you pull.

ODK Briefcase stops at the first corrupted record because it is fetching data in ascending marked-as-complete timestamp order. Until the corrupted record is repaired or removed, it will not proceed to other records.

Data corruption is more likely to occur if your server memory is becoming exhausted or if you are submitting data over very-low-bandwidth communications channels (e.g., satellite).

Hi guys,

I'm kind of new here, but I already have this problem:

On ODK Aggregate, when I want to export my data to a .csv file, it shows me an error (Failure - will retry later). It was working fine, but I guess the database reached a certain volume that made it fail.

I have read that it is related to the global max_allowed_packet setting, which has to be set to a higher value with the following statement:

set global max_allowed_packet = 1073741824;

I have my ODK Aggregate deployed on Google App Engine, so I tried to type that in the cloud shell but nothing happened; I still have the same problem, and as you can imagine it's the worst moment for this to happen...

I have an Excel macro that reads the CSV file, downloads the media files, renames them and saves everything in a folder, so I can then open each picture from a link in an Excel spreadsheet.

Now I can only download the CSV using ODK Briefcase by:

1- Pulling all the data.

2- Exporting the CSV (I can only do it if I first pull the data).

The problem is that by doing this I have to download the media files twice (in Briefcase and in my macro), and the CSV file exported by Briefcase contains the local link on my computer to the media files, but not the cloud link, which in my case is super convenient. I figured out that the link to the media is a concatenation of the ID number and some other things, so I managed to bypass this problem, but I had to change my whole procedure in the middle of the work, making it much more manual and not allowing me to use the tools I developed to automate all this work.

Does anyone know how I can execute this statement in an ODK Aggregate deployed on Google App Engine?

set global max_allowed_packet = 1073741824;

IF that really is the problem, which I hope it is.
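
For reference: max_allowed_packet is a MySQL server variable, so it only applies to Aggregate deployments backed by Tomcat and MySQL; App Engine's Datastore has no equivalent setting, which is why typing it in the cloud shell had no effect. A minimal sketch for a Tomcat+MySQL deployment, assuming a MySQL admin account:

# Raise the limit on the running server (the change is lost on restart):
mysql -u root -p -e 'SET GLOBAL max_allowed_packet = 1073741824;'
# Verify the new value:
mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet';"
# To make it permanent, add this under the [mysqld] section of my.cnf:
#   max_allowed_packet = 1073741824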

Thank you very much in advance.

Regards,

Pablo

One more thing: every time I pull information with Briefcase it downloads the whole project instead of just the new entries. That could be something easy to improve in future versions; my Excel macro that downloads everything from the CSV does that, and it is much more convenient this way.

Regards,

Pablo

I am trying to pull all the information with Briefcase, and looking at the log I have this:

...

fetching instance 3048 ...

fetching instance 3049 ...

fetching instance 3050 ...

fetching instance 3051 ...

fetching instance 3052 ...

fetching instance 3053 ...

retrieving next chunk of instances from server...

FAILED.

I don't know if it failed because there is nothing more to pull or because something else is actually happening... Right now I don't know how many entries I have, so is it 3053? I don't know... The filter in Aggregate does not work, so I don't know how I could check this...
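
A sketch of one way to count how many submissions Briefcase has pulled locally, assuming the default "ODK Briefcase Storage" layout (YOUR_FORM_NAME is a placeholder for the form's folder name on your machine):

# Each pulled submission gets its own directory under the form's instances folder:
ls "ODK Briefcase Storage/forms/YOUR_FORM_NAME/instances" | wc -l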

--
Mitch Sundt
Software Engineer
University of Washington
mitche...@gmail.com

Hi Guys,

I am trying to export in JSON format. The last downloaded file is 43.7 MB. Now when I try to export, it shows 'Generation in progress' and nothing happens. When I check the log (catalina.out) I get the following:

/var/log/tomcat7# tail -f catalina.out
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
INFO: wrapping up JSON generation for NCD_DMS_8Nov2016_v26
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
WARNING: stale JSON activity - do not save file in PersistentResults table for NCD_DMS_8Nov2016_v26
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
INFO: iteration 46 before issuing query for NCD_DMS_8Nov2016_v26
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
INFO: wrapping up JSON generation for NCD_DMS_8Nov2016_v26
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
WARNING: stale JSON activity - do not save file in PersistentResults table for NCD_DMS_8Nov2016_v26

Please help me. This is a live project and everyone is waiting for this output to proceed.

Thanks in advance
Krishna

I am getting a heap memory issue:

Jan 09, 2017 11:43:23 AM org.springframework.scheduling.support.TaskUtils$LoggingErrorHandler handleError
SEVERE: Unexpected error occurred in scheduled task.
java.lang.OutOfMemoryError: Java heap space

These are the memory allocations:

java -XX:+PrintFlagsFinal -version | grep -iE 'HeapSize|PermSize|ThreadStackSize'

uintx AdaptivePermSizeWeight                    = 20              {product}           
 intx CompilerThreadStackSize                   = 0               {pd product}        
uintx ErgoHeapSizeLimit                         = 0               {product}           
uintx HeapSizePerGCThread                       = 87241520        {product}           
uintx InitialHeapSize                          := 130784000       {product}           
uintx LargePageHeapSizeThreshold                = 134217728       {product}           
uintx MaxHeapSize                              := 2092957696      {product}           
uintx MaxPermSize                               = 174063616       {pd product}        
uintx PermSize                                  = 21757952        {pd product}        
 intx ThreadStackSize                           = 1024            {pd product}        
 intx VMThreadStackSize                         = 1024            {pd product}        

java version "1.7.0_79"
OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-1~deb8u1)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)

Please help me solve this issue.


Hi there,

You are out of RAM. Allocate more RAM to Tomcat.

Alternatively, use Briefcase to download the data and export that to CSV (or JSON if you can write a script to convert XML or CSV to JSON).

Yaw
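
A minimal sketch of the first suggestion, assuming a Debian-packaged Tomcat 7 (consistent with the /var/log/tomcat7 path above); the 2 GB figure is only an example, size it to your machine:

# Append a larger max heap to Tomcat's JVM options and restart:
echo 'JAVA_OPTS="$JAVA_OPTS -Xmx2048m"' | sudo tee -a /etc/default/tomcat7
sudo service tomcat7 restart
# Confirm the running JVM picked up the new limit:
ps aux | grep -o -- '-Xmx[0-9]*[mg]'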

>> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > See these instructions: https://github.com/opendatakit/opendatakit/wiki/Aggregate-AppEngine-Troubleshooting#reparing-a-filled-in-form-submission >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > w.r.t. ODK Briefcase starting to pull data from the beginning every time, if, after you pull data, you issue a push back up to the same server, this will set tracking flags such that those records will be ignored and skipped over the next time you pull. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > ODK Briefcase stops at the first corrupted record because it is fetching data in ascending marked-as-complete timestamp order. Until the corrupted record is repaired or removed, it will not proceed to other records. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Data corruption is more likely to occur if your server memory is becoming exhausted or if you are submitting data over very-low-bandwidth communications channels (e.g., satellite). >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > > On Fri, May 6, 2016 at 5:52 AM, wrote: >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > El viernes, 6 de mayo de 2016, 12:30:39 (UTC+2), nuncaestardes...@gmail.com escribió: >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > El martes, 3 de mayo de 2016, 14:10:28 (UTC+2), nuncaestardes...@gmail.com escribió: >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > Hi guys, >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > I´m kind of new here, but I already have this problem: >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > On ODK Aggregate, when I want to export my data to .csv file, It´s showing me an error (Failure - will retry later). it was working fine but I guess the DDBB reach a certain volume that made it fail. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > I have read that it is related to global max_allowed_packet which has to be set to a higher value, with the following expression >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > set global max_allowed_packet = 1073741824; >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > I have my ODK aggregate deployed in a google app engine, so i tied to type that in the cloud shell but nothing happened, I still have the same problem, and as you can imaging it´s the worst moment to happend... 
>> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > I have a excel macro that reads the csv file and download the media files, renames and save everything in a file, so then i can open each picture from a link in an excel spreadsheet. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > Now I can only download the the CSV using ODK briefcase by: >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > 1- Pulling all the data. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > 2- Exporting the CSV (I can only do it if I firstly pull the data) >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > The problem is that by doing this I have to download the media files twice (in briefcase and in my macro), and the CSV file exported by briefcase contains the local link in my computer to the media files, but not the cloud link to the media files, which in my case it is super convenient. I figure out that the link to the media is a concatenation of the ID number and some other things, so I manage to bypass this problem, but I had to change my whole procedure in the middle of the work, making it much more manual and not allowing me to use the tools I developped to automate all this work. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > Anyone knows how can I in ODK aggregate deployed in google app engine execute this procedure? >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > set global max_allowed_packet = 1073741824; >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > IF that really is the problem, which i hope it is. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > Thank you very much in advance. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > Regards, >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > > Pablo >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > One more thing, everytime I pull information with briefcase it downloads the whole project instead of just downloading the new entries,that could be something easy to improve in future versions, my excel macro that downloads everything from the csv does that it is muc more convenient this way. 
>> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > Regards, >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > > Pablo >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > I am trying to pull all the information with briefcase, and looking at the log I have this: >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > ... >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > fetching instance 3048 ... >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > fetching instance 3049 ... >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > fetching instance 3050 ... >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > fetching instance 3051 ... >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > fetching instance 3052 ... >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > fetching instance 3053 ... >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > retrieving next chunk of instances from server... >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > FAILED. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > I don´t know if it failed because there is nothing more to pull or because actually there is something elses happening... right now I don´t know how many entries I have so 3053? I don´t know... the filter in aggregate does not work so i don´t know how could I do this... >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > -- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > -- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Post: opend...@googlegroups.com >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Unsubscribe: opendatakit...@googlegroups.com >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Options: http://groups.google.com/group/opendatakit?hl=en >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > --- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > You received this message because you are subscribed to the Google Groups "ODK Community" group. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > To unsubscribe from this group and stop receiving emails from it, send an email to opendatakit...@googlegroups.com. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > For more options, visit https://groups.google.com/d/optout. 
>> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > -- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Mitch Sundt >> > > >> > > > > >> > > >> > > > > > Software Engineer >> > > >> > > > > >> > > >> > > > > > University of Washington >> > > >> > > > > >> > > >> > > > > > mitche...@gmail.com >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > -- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > -- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Post: opend...@googlegroups.com >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Unsubscribe: opendatakit...@googlegroups.com >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Options: http://groups.google.com/group/opendatakit?hl=en >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > --- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > You received this message because you are subscribed to a topic in the Google Groups "ODK Community" group. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > To unsubscribe from this topic, visit https://groups.google.com/d/topic/opendatakit/hjMKcHkk1tc/unsubscribe. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > To unsubscribe from this group and all its topics, send an email to opendatakit...@googlegroups.com. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > For more options, visit https://groups.google.com/d/optout. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > -- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > -- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Post: opend...@googlegroups.com >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Unsubscribe: opendatakit...@googlegroups.com >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Options: http://groups.google.com/group/opendatakit?hl=en >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > --- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > You received this message because you are subscribed to the Google Groups "ODK Community" group. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > To unsubscribe from this group and stop receiving emails from it, send an email to opendatakit...@googlegroups.com. 
>> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > For more options, visit https://groups.google.com/d/optout. >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > -- >> > > >> > > > > >> > > >> > > > > > >> > > >> > > > > >> > > >> > > > > > Mitch Sundt >> > > >> > > > > >> > > >> > > > > > Software Engineer >> > > >> > > > > >> > > >> > > > > > University of Washington >> > > >> > > > > >> > > >> > > > > > mitche...@gmail.com >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > -- >> > > >> > > > > >> > > >> > > > > -- >> > > >> > > > > >> > > >> > > > > Post: opend...@googlegroups.com >> > > >> > > > > >> > > >> > > > > Unsubscribe: opendatakit...@googlegroups.com >> > > >> > > > > >> > > >> > > > > Options: http://groups.google.com/group/opendatakit?hl=en >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > --- >> > > >> > > > > >> > > >> > > > > You received this message because you are subscribed to a topic in the Google Groups "ODK Community" group. >> > > >> > > > > >> > > >> > > > > To unsubscribe from this topic, visit https://groups.google.com/d/topic/opendatakit/hjMKcHkk1tc/unsubscribe. >> > > >> > > > > >> > > >> > > > > To unsubscribe from this group and all its topics, send an email to opendatakit...@googlegroups.com. >> > > >> > > > > >> > > >> > > > > For more options, visit https://groups.google.com/d/optout. >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > -- >> > > >> > > > > >> > > >> > > > > -- >> > > >> > > > > >> > > >> > > > > Post: opend...@googlegroups.com >> > > >> > > > > >> > > >> > > > > Unsubscribe: opendatakit...@googlegroups.com >> > > >> > > > > >> > > >> > > > > Options: http://groups.google.com/group/opendatakit?hl=en >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > --- >> > > >> > > > > >> > > >> > > > > You received this message because you are subscribed to the Google Groups "ODK Community" group. >> > > >> > > > > >> > > >> > > > > To unsubscribe from this group and stop receiving emails from it, send an email to opendatakit...@googlegroups.com. >> > > >> > > > > >> > > >> > > > > For more options, visit https://groups.google.com/d/optout. >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > >> > > >> > > > > -- >> > > >> > > > > >> > > >> > > > > Mitch Sundt >> > > >> > > > > Software Engineer >> > > >> > > > > University of Washington >> > > >> > > > > mitche...@gmail.com >> > > >> > > >> > > >> > > -- >> > > >> > > -- >> > > >> > > Post: opend...@googlegroups.com >> > > >> > > Unsubscribe: opendatakit...@googlegroups.com >> > > >> > > Options: http://groups.google.com/group/opendatakit?hl=en >> > > >> > > >> > > >> > > --- >> > > >> > > You received this message because you are subscribed to the Google Groups "ODK Community" group. >> > > >> > > To unsubscribe from this group and stop receiving emails from it, send an email to opendatakit...@googlegroups.com. >> > > >> > > For more options, visit https://groups.google.com/d/optout. 
>> > > >> > > >> > > >> > > >> > > >> > > -- >> > > >> > > Mitch Sundt >> > > Software Engineer >> > > University of Washington >> > > mitche...@gmail.com >> > >> > Hi Guys, >> > >> > I am trying to export JSON format. The last downloaded file size is 43.7 MB. Now when I try to export, it is showing 'Generation in progress'. Nothing happens. When I checked the log (catalina.out) I am getting the following: >> > >> > /var/log/tomcat7# tail -f catalina.out >> > Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile >> > INFO: wrapping up JSON generation for NCD_DMS_8Nov2016_v26 >> > Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile >> > WARNING: stale JSON activity - do not save file in PersistentResults table for NCD_DMS_8Nov2016_v26 >> > Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile >> > INFO: iteration 46 before issuing query for NCD_DMS_8Nov2016_v26 >> > Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile >> > INFO: wrapping up JSON generation for NCD_DMS_8Nov2016_v26 >> > Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile >> > WARNING: stale JSON activity - do not save file in PersistentResults table for NCD_DMS_8Nov2016_v26 >> > >> > >> > Please help me. This is a live project and all are waiting for this output to proceed. >> > >> > Thanks in advance >> > Krishna >> >> I am getting heap memory issue >> >> Jan 09, 2017 11:43:23 AM org.springframework.scheduling.support.TaskUtils$LoggingErrorHandler handleError >> SEVERE: Unexpected error occurred in scheduled task. >> java.lang.OutOfMemoryError: Java heap space > > This is the memory allocations > # java -XX:+PrintFlagsFinal -version | grep -iE 'HeapSize|PermSize|ThreadStackSize' > uintx AdaptivePermSizeWeight = 20 {product} > intx CompilerThreadStackSize = 0 {pd product} > uintx ErgoHeapSizeLimit = 0 {product} > uintx HeapSizePerGCThread = 87241520 {product} > uintx InitialHeapSize := 130784000 {product} > uintx LargePageHeapSizeThreshold = 134217728 {product} > uintx MaxHeapSize := 2092957696 {product} > uintx MaxPermSize = 174063616 {pd product} > uintx PermSize = 21757952 {pd product} > intx ThreadStackSize = 1024 {pd product} > intx VMThreadStackSize = 1024 {pd product} > java version "1.7.0_79" > OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-1~deb8u1) > OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode) > > Please help me solving this issue > > -- > -- > Post: opendatakit@googlegroups.com > Unsubscribe: opendatakit+unsubscribe@googlegroups.com > Options: http://groups.google.com/group/opendatakit?hl=en > > --- > You received this message because you are subscribed to the Google Groups "ODK Community" group. > To unsubscribe from this group and stop receiving emails from it, send an email to opendatakit+unsubscribe@googlegroups.com. > For more options, visit https://groups.google.com/d/optout.

Thank you Yaw.

I have increased the RAM and solved it.

Regards,
Krishna

On 9 January 2017 at 21:32, Yaw Anokwa wrote:

Hi there,

You are out of RAM. Allocate more RAM to Tomcat.

Alternatively, use Briefcase to download the data and export that to
CSV (or JSON if you can write a script to convert XML or CSV to JSON).

Yaw
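
(For anyone landing here later, a minimal sketch of one way to do that,
assuming a Debian-style tomcat7 package; the file path and the existing
JAVA_OPTS contents on your install may differ:)

    # in /etc/default/tomcat7, raise the maximum Java heap, e.g. to 2 GB:
    JAVA_OPTS="${JAVA_OPTS} -Xms512m -Xmx2048m"
    # then restart Tomcat so the new limit takes effect:
    sudo service tomcat7 restart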

On Mon, Jan 9, 2017 at 1:45 AM, krishnarp2373@gmail.com wrote:

On Monday, January 9, 2017 at 11:45:50 AM UTC+5:30, krishn...@gmail.com wrote:

On Monday, January 9, 2017 at 11:30:47 AM UTC+5:30, krishn...@gmail.com wrote:

On Wednesday, June 8, 2016 at 10:57:43 PM UTC+5:30, Mitch Sundt wrote:

Congrats!

I'm glad you worked through the (very-confusing) sequence of steps
and were able to get your system up and running again.

(It's been years since I worked on the PAUSED mechanism -- I was
clearly mistaken about how it functions. I'm glad that it automatically
resumed.)

On Wed, Jun 8, 2016 at 1:23 AM, tomadedat...@gmail.com wrote:
I´ll complete my answer for the future:

  • Publishing is restored in both fusion tables and google
    spreadsheet (significantly slower in the spreadsheet), and now both
    statuses are "ACTIVE" (there is no button, Mitch)
  • Briefcase gives me a "SUCCESS" message after pulling the whole
    thing.
  • CSV generation works without any problem.

Really this is the best!

On Wednesday, June 8, 2016 at 9:30:55 (UTC+2), tomadedat...@gmail.com wrote:

IT WORKED!!!!

thank you very much!!!!!

the publishing started automatically; I guess that the 5-7 errors
I was having per second were due to trying to publish.

I am extremely happy, not only for resolving this issue but for
understanding it. Now I think I could do it again :)

Mitch you are the best!

On Tuesday, June 7, 2016 at 23:26:31 (UTC+2), Mitch Sundt wrote:

You are very close. Note that in step (5), the instructions say:

  1. Now, for each data table name and XXXXX value in step (2),
    execute a query of the form: SELECT * FROM `opendatakit.YOUR_TABLE_NAME`
    WHERE _PARENT_AURI = "XXXXXX"

Take special note of the backquotes around the
opendatakit.-qualified table name, and the double-quotes around the XXXXXX
value. If the original query specified _DOM_AURI instead of _PARENT_AURI,
use that. If this query does not return a value, you may also need to add
the AND _TOP_LEVEL_AURI = "YYYYY" qualifier (but that should not be
necessary), if it was present in the original error message.

In the queries you tried, you missed this part of the
instructions (replace MAPFRE_JUNIO2016_G2_FOTO1_BN in your select
statements with `opendatakit.MAPFRE_JUNIO2016_G2_FOTO1_BN`)

These instructions are saying to select and show to you the
entries in the data tables that correspond to a specific submission (the
one with instanceID uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 ).
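
(Spelled out for the case in this thread, this is the corrected form of the
failing query from the attachment error above, with the backquotes and
double-quotes in place:)

    SELECT * FROM `opendatakit.MAPFRE_JUNIO2016_G2_FOTO1_BN` WHERE _TOP_LEVEL_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139" AND _PARENT_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139"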

========================

If the publisher is paused, I believe the "PAUSED" is actually
a button, and you can click on that to get it to resume.

That, of course, will only be successful if you fix the data
corruption.

On Tue, Jun 7, 2016 at 10:05 AM, Pablo Rodríguez < nuncaestardes...@gmail.com> wrote:

One last thing and hoping I can fix the main problem, since
this is secondary: I tried to publish another spreadsheet (didn't work)
and another fusion table (just existing submissions, 1420, when I had
already published 1476 in the original) again, but at the end I decided to
delete them. The two original spreadsheet and fusion table are both "BOTH
Upload Existing & Stream New Submission Data" but now the status shows
"PAUSED". How can I make them active again?

On Tue, Jun 7, 2016 at 5:32 PM, Pablo Rodríguez < nuncaestardes...@gmail.com> wrote:

Ok, I think I'm seeing some progress in the sense of figuring
out what's happening. I see where you took the query you gave me from:
Logs, App Engine, Background, All versions:

If I enlarge the first error I can see what was happening:

And I followed your instructions. But there is a second error,
and it is different:

where it says:

org.opendatakit.aggregate.task.CsvWorkerImpl failureRecovery: Exception caught: org.opendatakit.common.datamodel.ODKEnumeratedElementException: Attachment errors:SELECT * FROM MAPFRE_JUNIO2016_G2_FOTO1_BN WHERE _TOP_LEVEL_AURI = uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 AND _PARENT_AURI = uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 is missing an attachment instance OR has extra copies. for MapfreJunio2016

(BTW, in MAPFRE_JUNIO2016_G2_FOTO1_BN G2 stands for group1
since in this form there are several fields displayed in the same screen,
and FOTO1 means "picture1" which is a required field... I don´t know if
that is relevant)

Then I decided to query:

SELECT * FROM MAPFRE_JUNIO2016_G2_FOTO1_BN WHERE
_TOP_LEVEL_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139" AND
_PARENT_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139"

But in this case, instead of finding 2 entities, there is no
match. Also I can see that this submission's folder in the instances folder
created by ODK Briefcase, instead of having an XML file and picture(s), is
empty.

So I guess now I should find a way of deleting
uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 in the database, since it is a
void submission, but I don't know how to do this. How can I do this?

What about all the rest of the errors that I am finding in App
Engine, Default Service, that I posted before? Should I worry about them?
(Because I do.)

Why is this happening? How can I prevent this from happening
again? Could large pictures be a problem? (Right now the data pulled by
Briefcase is 4.24 GB, which includes XML files and pictures. This is 65% of
what I am expecting.)

Thank you very much in advance.

On Tue, Jun 7, 2016 at 10:09 AM, Pablo Rodríguez < nuncaestardes...@gmail.com> wrote:

Ok, so I see there are two errors constantly happening:

one of them:

09:55:58.212 GET 500 960 B 5.9 s AppEngine-Google; (+http://code.google.com/appengine) /gae/uploadSubmissionsTask?fscUri=uuid%3A9672732c-1bc6-4dde-a463-d066b5f3edf3

0.1.0.2 - - [07/Jun/2016:09:55:58 +0200] "GET /gae/uploadSubmissionsTask?fscUri=uuid%3A9672732c-1bc6-4dde-a463-d066b5f3edf3 HTTP/1.1" 500 960 http://olmedo-servicios-integrales.appspot.com/gae/watchdog "AppEngine-Google; (+http://code.google.com/appengine)" "olmedo-servicios-integrales.appspot.com" ms=5901 cpu_ms=460 cpm_usd=3.1940599999999997e-7 loading_request=0 instance=00c61b117c0e9e3d18748ca60cc00676c90a86e4344dfcefb7184ce68bcd8de904 app_engine_release=1.9.38 trace_id=-

{
metadata: {…}
protoPayload: {…}
insertId: "2016-06-07|00:56:08.408633-07|10.94.34.134|-1592936713"
log: "appengine.googleapis.com/request_log"
httpRequest: {…}
operation: {…} }

09:56:01.221 org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: Beginning servlet processing

09:56:01.227 org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: Request is running on frontend

09:56:01.228 org.opendatakit.aggregate.task.UploadSubmissionsWorkerImpl uploadAllSubmissions: Beginning UPLOAD service: uuid:8979e040-b496-49c1-b8de-f355d69d4c46 form MapfreJunio2016

09:56:01.228 org.opendatakit.aggregate.form.FormFactory internalGetForms: FormCache: using cached list of Forms

09:56:02.449 org.opendatakit.aggregate.form.FormFactory internalGetForms: FormCache: using cached list of Forms

09:56:02.478 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -73233127

09:56:02.478 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -13406599

09:56:02.478 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -52076838

09:56:02.478 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -57872843

09:56:02.478 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -78707824

And the other one

09:56:03.646 GET 202 0 B 1.4 s AppEngine-Google; (+http://code.google.com/appengine) /gae/uploadSubmissionsTask?fscUri=uuid%3A7ad28143-2a7a-4624-8807-215f05fd0102

0.1.0.2 - - [07/Jun/2016:09:56:03 +0200] "GET /gae/uploadSubmissionsTask?fscUri=uuid%3A7ad28143-2a7a-4624-8807-215f05fd0102 HTTP/1.1" 202 - http://olmedo-servicios-integrales.appspot.com/gae/watchdog "AppEngine-Google; (+http://code.google.com/appengine)" "olmedo-servicios-integrales.appspot.com" ms=1400 cpu_ms=120 cpm_usd=1.39586e-7 loading_request=0 instance=00c61b117c0e9e3d18748ca60cc00676c90a86e4344dfcefb7184ce68bcd8de904 app_engine_release=1.9.38 trace_id=-

{
metadata: {…}
protoPayload: {…}
insertId: "2016-06-07|00:56:08.408751-07|10.94.34.134|-1592936713"
log: "appengine.googleapis.com/request_log"
httpRequest: {…}
operation: {…} }

09:56:03.652 org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: Beginning servlet processing

09:56:03.658 org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: Request is running on frontend

09:56:03.658 org.opendatakit.aggregate.task.UploadSubmissionsWorkerImpl uploadAllSubmissions: Beginning UPLOAD service: uuid:28dbf50b-2fcf-4424-800f-9c90eb823a09 form MapfreJunio2016

09:56:03.658 org.opendatakit.aggregate.form.FormFactory internalGetForms: FormCache: using cached list of Forms

09:56:03.856 org.opendatakit.aggregate.form.FormFactory internalGetForms: FormCache: using cached list of Forms

09:56:03.881 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -14278749

09:56:03.881 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -17503047

09:56:03.881 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -34874322

09:56:03.881 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -50316469

09:56:03.881 [s~olmedo-servicios-integrales/1.392851787586356529].: Time left on lock: -18963331

So I tried to run your query but changing the uuid, to see if I
can see two entries like the first time, but it says no entities
matched. I checked and the publishing is still not working, but when I pull
with Briefcase I can see it is downloading more instances, so I guess the
system keeps receiving forms...

On Tue, Jun 7, 2016 at 9:51 AM, Pablo Rodríguez < nuncaestardes...@gmail.com> wrote:

I've done exactly what you told me; now I'm pulling with
Briefcase to see if it works, but it ends with FAILED

The problem is that I'm struggling to understand what I am
doing, so the steps for Repairing a Filled-in Form Submission are not
straightforward for me. I'll keep trying...

On Tue, Jun 7, 2016 at 9:22 AM, nuncaestardes...@gmail.com wrote:

I'm going to try, Mitch, for me this is dancing on thin ice...
let's see...!

Thanks!

On Tuesday, June 7, 2016 at 0:55:47 (UTC+2), Mitch Sundt wrote:

Have you worked through the database repair steps described
here:

https://github.com/opendatakit/opendatakit/wiki/Aggregate-AppEngine-Troubleshooting#repairing-a-filled-in-form-submission

I.e.,

org.opendatakit.aggregate.task.gae.servlet.UploadSubmissionsTaskServlet doGet: org.opendatakit.aggregate.exception.ODKExternalServiceException: org.opendatakit.common.datamodel.ODKEnumeratedElementException: SELECT * FROM MAPFRE_JUNIO2016_G3_REPORTE WHERE _TOP_LEVEL_AURI = uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 AND _PARENT_AURI = uuid:9e03e883-84bd-4457-a9cd-ae54746e1139 multiple-choice valueSet is missing an entry OR has an extra copy of one

and then, on the datastore tab,

entering in the query box:

SELECT * FROM `opendatakit.MAPFRE_JUNIO2016_G3_REPORTE`
WHERE _TOP_LEVEL_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139" AND
_PARENT_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139"

you'll find 2 entries with matching _ORDINAL_NUMBER values
(1).

Per the instructions (step 6 & 7), delete the older one of
these.
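
(An aside for anyone repeating this repair: each row in an Aggregate data
table carries a _CREATION_DATE metadata property, so if it is not obvious
which of the two returned entities is older, compare that property on them;
or, assuming the Datastore indexes permit the sort, run the same query
ordered by it:)

    SELECT * FROM `opendatakit.MAPFRE_JUNIO2016_G3_REPORTE` WHERE _TOP_LEVEL_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139" AND _PARENT_AURI = "uuid:9e03e883-84bd-4457-a9cd-ae54746e1139" ORDER BY _CREATION_DATE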

On Mon, Jun 6, 2016 at 9:39 AM, Pablo Rodríguez < nuncaestardes...@gmail.com> wrote:

Well, now in another project I'm having the very same issue.
Worst timing ever. In the middle of a big running project :S

There are no incomplete submissions according to ODK.

I was publishing a google spreadsheet and a fusion table
(Running in google app engine with billing enabled)

I'm looking at the log and I'm having an average of 4 to 7
errors per minute, basically like this:

18:14:55.688 GET 500 956 B 550 ms AppEngine-Google; (+http://code.google.com/appengine) /gae/uploadSubmissionsTask?fscUri=uuid%3A9672732c-1bc6-4dde-a463-d066b5f3edf3

0.1.0.2 - - [06/Jun/2016:18:14:55 +0200] "GET /gae/uploadSubmissionsTask?fscUri=uuid%3A9672732c-1bc6-4dde-a463-d066b5f3edf3 HTTP/1.1" 500 956 http://olmedo-servicios-integrales.appspot.com/gae/watchdog "AppEngine-Google; (+http://code.google.com/appengine)" "olmedo-servicios-integrales.appspot.com" ms=550 cpu_ms=231 cpm_usd=2.5548e-7 loading_request=0 instance=00c61b117c327d751511c82e8ba427e6081789412cae8a6b2a33f690566e8b31 app_engine_release=1.9.38 trace_id=-

{
metadata: {…}
protoPayload: {…}
insertId: "2016-06-06|09:15:00.884913-07|10.106.162.86|1409825575"
log: "appengine.googleapis.com/request_log"
httpRequest: {…}
operation: {…} }

Searching for OutOfMemory in the log, I don't find anything.

Right now I just pulled all the information with Briefcase,
and as far as I can see I have more than what I had in my published tables.

My biggest concern is that I could lose submissions :S.

Any help is very appreciated.

Regards,

On Fri, May 6, 2016 at 6:22 PM, Mitch Sundt < mitche...@gmail.com> wrote:

This would indicate a data corruption issue.

See these instructions: https://github.com/opendatakit/opendatakit/wiki/Aggregate-AppEngine-Troubleshooting#reparing-a-filled-in-form-submission

w.r.t. ODK Briefcase starting to pull data from the beginning
every time, if, after you pull data, you issue a push back up to the same
server, this will set tracking flags such that those records will be
ignored and skipped over the next time you pull.

ODK Briefcase stops at the first corrupted record because it
is fetching data in ascending marked-as-complete timestamp order. Until the
corrupted record is repaired or removed, it will not proceed to other
records.

Data corruption is more likely to occur if your server memory
is becoming exhausted or if you are submitting data over very-low-bandwidth
communications channels (e.g., satellite).

On Fri, May 6, 2016 at 5:52 AM, nuncaestardes...@gmail.com wrote:

On Friday, May 6, 2016 at 12:30:39 (UTC+2), nuncaestardes...@gmail.com wrote:

On Tuesday, May 3, 2016 at 14:10:28 (UTC+2), nuncaestardes...@gmail.com wrote:

Hi guys,

I'm kind of new here, but I already have this problem:

On ODK Aggregate, when I want to export my data to a .csv
file, it shows me an error (Failure - will retry later). It was working
fine, but I guess the database reached a certain volume that made it fail.

I have read that it is related to global
max_allowed_packet which has to be set to a higher value, with the
following expression

set global max_allowed_packet = 1073741824;
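
(Side note: 1073741824 bytes is 2^30, i.e. 1 GiB, the largest value MySQL
accepts for max_allowed_packet. Also note that an Aggregate instance on
Google App Engine stores its data in the App Engine Datastore rather than
MySQL, so this MySQL setting does not apply there, which would explain the
cloud shell command below having no effect.)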

I have my ODK aggregate deployed in a google app engine,
so I tried to type that in the cloud shell but nothing happened; I still
have the same problem, and as you can imagine it's the worst moment for
it to happen...

I have an Excel macro that reads the csv file and downloads
the media files, renames and saves everything in a file, so then I can open
each picture from a link in an Excel spreadsheet.

Now I can only download the CSV using ODK Briefcase
by:

1- Pulling all the data.

2- Exporting the CSV (I can only do it if I first pull
the data)

The problem is that by doing this I have to download the
media files twice (in Briefcase and in my macro), and the CSV file exported
by Briefcase contains the local link on my computer to the media files, but
not the cloud link to the media files, which in my case is super
convenient. I figured out that the link to the media is a concatenation of
the ID number and some other things, so I managed to bypass this problem,
but I had to change my whole procedure in the middle of the work, making it
much more manual and not allowing me to use the tools I developed to
automate all this work.

Does anyone know how I can execute this procedure in ODK
Aggregate deployed in Google App Engine?

set global max_allowed_packet = 1073741824;

If that really is the problem, which I hope it is.

Thank you very much in advance.

Regards,

Pablo

One more thing: every time I pull information with Briefcase
it downloads the whole project instead of just downloading the new
entries. That could be something easy to improve in future versions; my
Excel macro that downloads everything from the csv does that, and it is
much more convenient this way.

Regards,

Pablo

I am trying to pull all the information with Briefcase, and
looking at the log I have this:

...

fetching instance 3048 ...

fetching instance 3049 ...

fetching instance 3050 ...

fetching instance 3051 ...

fetching instance 3052 ...

fetching instance 3053 ...

retrieving next chunk of instances from server...

FAILED.

I don't know if it failed because there is nothing more to
pull or because there is actually something else happening... Right now I
don't know how many entries I have, so 3053? I don't know... The filter in
Aggregate does not work, so I don't know how I could check this...


Hi Guys,

I am trying to export in JSON format. The last downloaded file size is
43.7 MB. Now when I try to export, it shows 'Generation in progress' and
nothing happens. When I check the log (catalina.out) I am getting the
following:

/var/log/tomcat7# tail -f catalina.out
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
INFO: wrapping up JSON generation for NCD_DMS_8Nov2016_v26
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
WARNING: stale JSON activity - do not save file in PersistentResults table for NCD_DMS_8Nov2016_v26
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
INFO: iteration 46 before issuing query for NCD_DMS_8Nov2016_v26
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
INFO: wrapping up JSON generation for NCD_DMS_8Nov2016_v26
Jan 09, 2017 10:45:02 AM org.opendatakit.aggregate.task.CsvWorkerImpl generateJsonFile
WARNING: stale JSON activity - do not save file in PersistentResults table for NCD_DMS_8Nov2016_v26

Please help me. This is a live project and everyone is waiting for this
output to proceed.

Thanks in advance
Krishna

I am getting a heap memory issue:

Jan 09, 2017 11:43:23 AM org.springframework.scheduling.support.TaskUtils$LoggingErrorHandler handleError
SEVERE: Unexpected error occurred in scheduled task.
java.lang.OutOfMemoryError: Java heap space

These are the memory allocations:

# java -XX:+PrintFlagsFinal -version | grep -iE 'HeapSize|PermSize|ThreadStackSize'
uintx AdaptivePermSizeWeight      = 20           {product}
 intx CompilerThreadStackSize     = 0            {pd product}
uintx ErgoHeapSizeLimit           = 0            {product}
uintx HeapSizePerGCThread         = 87241520     {product}
uintx InitialHeapSize            := 130784000    {product}
uintx LargePageHeapSizeThreshold  = 134217728    {product}
uintx MaxHeapSize                := 2092957696   {product}
uintx MaxPermSize                 = 174063616    {pd product}
uintx PermSize                    = 21757952     {pd product}
 intx ThreadStackSize             = 1024         {pd product}
 intx VMThreadStackSize           = 1024         {pd product}
java version "1.7.0_79"
OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-1~deb8u1)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)
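
(Reading these numbers: InitialHeapSize 130784000 bytes is roughly 125 MB
and MaxHeapSize 2092957696 bytes is roughly 1.95 GB, so this JVM could
already grow to about 2 GB of heap; the OutOfMemoryError above means the
JSON export needed more than that, which is why allocating more memory, as
Yaw suggests above, resolved it.)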

Please help me solve this issue.


Hello,

I have this error on ODK 1.4.13 running on a Tomcat server. I have:

  1. Increased the Tomcat memory
  2. Run the "max_allowed_packet" command

The error keeps recurring and I can't export to CSV.

Please help.

Thanks,
Paul
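
(A hedged sketch for the Tomcat/MySQL case: SET GLOBAL only lasts until the
MySQL server restarts, so to make the change permanent put it in the server
config as well; the path below is typical for Debian/Ubuntu and may differ
on your system:)

    -- one-off, effective until the next MySQL restart:
    SET GLOBAL max_allowed_packet = 1073741824;

    # to persist across restarts, add under [mysqld] in /etc/mysql/my.cnf:
    [mysqld]
    max_allowed_packet = 1G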

Hey Paul,

Please use the filter option in ODK rather than increasing memory.

Thanks Krishna_Rajesh,

How do I use the filter option?

Paul

You can filter by date.
