Observation: after importing a data dictionary with 1000 items into an MR, comparing consecutive revisions (where each consecutive pair has more items due to batched data dictionary item upload) gets slower with each later pair of revisions, until the gateway times out. This can be reproduced by opening https://dev.cat.data.fin.gov.bc.ca/node/157/revisions/view/11152/11154/visual_inline and repeatedly clicking "Next change". I'm guessing that computing the diff gets more expensive as there is more content to compare.
Nicole let me know that the T1 tax form has many fields (~800), which is why importing 1000 rows was tested. I am curious how severely this will affect users, and whether anything can be done to make it better. Getting some rough numbers on how much memory and CPU these operations use might be a good place to start.
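One rough way to quantify the slowdown (and a starting point for the memory/CPU question) would be to time the comparison request for each successive revision pair and watch how the response time and payload size grow. A minimal sketch, assuming the revision IDs step by 2 as in the example URL above and that an authenticated session cookie is supplied; both the step and the cookie are assumptions to adjust against the actual revisions:

```python
import time
import requests

BASE = "https://dev.cat.data.fin.gov.bc.ca/node/157/revisions/view"

# Hypothetical placeholder: copy a real authenticated session cookie from the
# browser, since the comparison pages likely won't render anonymously.
COOKIES = {"session_cookie_name": "session_cookie_value"}

start_left, start_right = 11152, 11154  # first revision pair from the example URL
STEP = 2  # assumed spacing between consecutive revision IDs

for i in range(10):
    left = start_left + i * STEP
    right = start_right + i * STEP
    url = f"{BASE}/{left}/{right}/visual_inline"
    t0 = time.monotonic()
    resp = requests.get(url, cookies=COOKIES, timeout=120)
    elapsed = time.monotonic() - t0
    print(f"{left} vs {right}: HTTP {resp.status_code}, {elapsed:.1f}s, {len(resp.content)} bytes")
```

Pairing those timings with CPU/memory metrics from the web container over the same window would help show whether the cost is in the diff computation itself or in memory pressure while rendering the comparison.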
For investigation: Does this happen when comparing two very similar, published revisions after auto-pruning has run?