- Messages are shown once a day by default (once per session for
messages without an `id`)
- Messages with an `id` attribute include a checkbox to not show again
for 30 days
- If an `infoURL` attribute is provided, a "More Information" button is
shown that launches that URL
- If a `title` attribute is provided, it's used as the dialog title;
otherwise "Warning" is shown
But skip the integrity check at startup, even if it's flagged to run, if
there are schema update steps to perform, to avoid creating tables that
aren't expected to exist yet.
Originally added in 5b9e6497a but disabled in c4cc44528 and 7a434df53
Originally added in 5b9e6497a. We stopped running the integrity check
before userdata upgrades in c4cc44528 because the new behavior was
breaking upgrades, but the check could still be run when coming from the
DB Repair Tool. That could cause problems if, say, you recovered a
database from a computer that had an older version of Zotero and then
ran the DB Repair Tool on it. Disabling this for now until we have a
better solution.
- Create userdata tables and indexes that are missing
- Delete tables and triggers that should no longer exist
- Run schema integrity check before user data migration
- Run schema integrity check after restart error
This is meant to address two problems:
1) Database damage, and subsequent use of the DB Repair Tool, that
results in missing tables
2) A small number of cases of schema update steps somehow not being
reflected in users' databases despite their having updated userdata
numbers, which are set within the same transaction. Until we figure
out how that's happening, we should start adding conditional versions
of schema update steps to the integrity check.
This is currently only running the update check after a restart error,
which might not occur for all missed schema update steps, so we might
want other triggers for calling setIntegrityCheckRequired().
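As a rough illustration of what a conditional version of a schema update
step might look like inside the integrity check (the helper name is made
up, and the use of the Zotero.DB query methods here is an assumption, not
the actual integrity-check code):

```js
// Illustrative only: conditional version of a schema update step, so the
// integrity check can recreate a table that a previous update somehow
// didn't leave behind in the user's database
async function ensureTableExists(name, createSQL) {
	const exists = await Zotero.DB.valueQueryAsync(
		"SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name=?",
		[name]
	);
	if (!exists) {
		await Zotero.DB.queryAsync(createSQL);
	}
}
```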
- When changing type based on 'type:' line, move existing fields that
are no longer valid to Extra
- Remove 'type:' line with CSL type if the item's existing type is one
of the types mapped to it
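As a hypothetical example of the second rule: an item whose Zotero type
is Journal Article and whose Extra field contains `type: article-journal`
would have that line removed, since article-journal is one of the CSL
types mapped to Journal Article. A `type:` line naming a different CSL
type would instead change the item's type, with any existing fields that
aren't valid for the new type moved into Extra.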
It shouldn't be possible to nest two collections inside each other, but
if it happens, fix it in the integrity check.
Also detect it from CollectionTreeView::expandToCollection() (used when
showing the collections containing an item) and crash Zotero with a flag
to run an integrity check after restart. Previously, this would result
in an infinite loop.
This may be the cause of some of the collection disappearances people
have reported. If parentCollectionID never leads to a null, the
collection won't appear anywhere in the tree.
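A minimal sketch of the kind of parent walk that can now detect the loop
instead of running forever (the function and data structures here are
illustrative, not the actual expandToCollection() code):

```js
// Illustrative only: walk up parentCollectionID, bailing out on a cycle
function getAncestors(collection, collectionsByID) {
	const seen = new Set([collection.id]);
	const ancestors = [];
	let current = collection;
	while (current && current.parentCollectionID) {
		if (seen.has(current.parentCollectionID)) {
			// Two collections are nested inside each other -- flag an
			// integrity check instead of looping forever
			throw new Error("Collection parent loop detected");
		}
		seen.add(current.parentCollectionID);
		current = collectionsByID.get(current.parentCollectionID);
		ancestors.push(current);
	}
	return ancestors;
}
```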
TODO:
- Figure out how this is happening
- Detect and fix it automatically for people it's happened to
This changes the way item types, item fields, creator types, and CSL
mappings are defined and handled, in preparation for updated types and
fields.
Instead of being predefined in SQL files or code, type/field info is
read from a bundled JSON file shared with other parts of the Zotero
ecosystem [1], referred to as the "global schema". Updates to the
bundled schema file are automatically applied to the database at first
run, allowing changes to be made consistently across apps.
When syncing, invalid JSON properties are now rejected instead of being
ignored and processed later, which will allow for schema changes to be
made without causing problems in existing clients. We considered many
alternative approaches, but this approach is by far the simplest,
safest, and most transparent to the user.
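A rough sketch of what rejecting unknown properties looks like, assuming
a hypothetical validation step run on each downloaded object (the
property sets and error handling are illustrative only):

```js
// Illustrative only: fail the save instead of silently dropping the property
function checkUnknownProperties(json, knownProperties) {
	for (const prop of Object.keys(json)) {
		if (!knownProperties.has(prop)) {
			throw new Error(`Invalid property '${prop}' for ${json.key}`);
		}
	}
}
```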
For now, there are no actual changes to types and fields, since we'll
first need to do a sync cut-off for earlier versions that don't reject
invalid properties.
For third-party code, the main change is that type and field IDs should
no longer be hard-coded, since they may not be consistent in new
installs. For example, code should use `Zotero.ItemTypes.getID('note')`
instead of hard-coding `1`.
[1] https://github.com/zotero/zotero-schema
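For instance, code along these lines keeps working even if the numeric
IDs differ in a new install (the surrounding item handling is just an
illustration):

```js
// Look up the current typeID for notes instead of hard-coding 1
const noteTypeID = Zotero.ItemTypes.getID('note');
if (item.itemTypeID == noteTypeID) {
	// handle note items
}
```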
Previously, objects were first downloaded and saved to the sync cache,
which was then processed separately to create/update local objects. This
meant that a server bug could result in invalid data in the sync cache
that would never be processed. Now, objects are saved as they're
downloaded and only added to the sync cache after being successfully
saved. The keys of objects that fail are added to a queue, and those
objects are refetched and retried on a backoff schedule or when a new
client version is installed (in case of a client bug or a client with
outdated data model support).
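A simplified sketch of the new flow; the helper names (`saveObject`,
`addToSyncCache`, `addToRetryQueue`) are placeholders, not the actual
sync-engine API:

```js
// Illustrative only: save each downloaded object, cache it only on success,
// and queue failures for retry on a backoff schedule or after an upgrade
async function processDownloadedObject(objectType, json) {
	try {
		await saveObject(objectType, json);       // create/update the local object
		await addToSyncCache(objectType, json);   // cached only after a successful save
	}
	catch (e) {
		await addToRetryQueue(objectType, json.key, e);
	}
}
```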
An alternative would be to save to the sync cache first and evict
objects that fail, adding them to the queue, but that requires more
complicated logic, and it probably makes more sense just to buffer a few
downloads ahead so that processing never has to wait for downloads to
finish.