But skip it at startup, even if flagged on, if there are schema update
steps to perform, to avoid creating tables that aren't expected to exist
yet.
Originally added in 5b9e6497a but disabled in c4cc44528 and 7a434df53.
This lays the groundwork for moving collections and searches to the
trash instead of deleting them outright. We're not doing that yet, so
the `deleted` property will never be set (except for items), but this
will allow clients from this point forward to sync collections and
searches with that property for when it's used in the future. For now,
such objects will just be hidden from the collections pane as if they
had been deleted.
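As a rough illustration (the key, version, and name here are made up), a
collection synced with that property set might look like this in the API
JSON:

```js
// Rough illustration only, with made-up values: a collection whose JSON
// carries the `deleted` property that clients can now round-trip
var collectionJSON = {
	key: "ABCD2345",
	version: 1234,
	name: "Old Project",
	parentCollection: false,
	deleted: 1
};
```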
Originally added in 5b9e6497a. We stopped running the integrity check
before userdata upgrades in c4cc44528 because this new behavior was
breaking upgrades, but the check could still run when triggered from the
DB Repair Tool, which could cause problems if, say, you recovered a
database from a computer running an older version of Zotero and ran the
DB Repair Tool on it. Disabling this for now until we have a better
solution.
ConcurrentCaller wasn't waiting properly if start() was called again
while it was pausing, so a 429 caused an immediate retry, which is pretty
much exactly what you don't want a 429 to do.
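A minimal sketch of the intended behavior (not the actual ConcurrentCaller
code): a later start() has to wait out any pause already in progress
instead of running immediately.

```js
// Minimal sketch, not the actual ConcurrentCaller implementation
class PausableRunner {
	constructor() {
		this._pausePromise = null;
	}
	
	pause(ms) {
		// One shared promise that resolves when the pause elapses
		this._pausePromise = new Promise((resolve) => {
			setTimeout(() => {
				this._pausePromise = null;
				resolve();
			}, ms);
		});
	}
	
	async start(fn) {
		// This wait is what was effectively being skipped when start() was
		// called again mid-pause, so a 429 back-off led to an immediate retry
		while (this._pausePromise) {
			await this._pausePromise;
		}
		return fn();
	}
}
```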
After a local item change, Zotero uploads the JSON and then applies the
saved JSON returned from the API to the local object using `fromJSON()`,
the same as it would apply any other remote change.
`fromJSON()` is meant to migrate Extra lines into real types and fields
after future item type/field changes. It calls
`Z.Utilities.Internal.extractExtraFields()`, which looks for valid item
type or CSL type values in Type lines in Extra, handles the rest of
parsing accordingly, and passes back the parsed item type. `fromJSON()`
wasn't handling `itemType` in the response object, so the item type
didn't get applied and the Type line was stripped. This fixes that.
Since valid type values are now parsed, if you have a Journal Article
item with "123" in the Pages field, enter "Type: song" into Extra, and
sync, the item will be converted to Audio Recording and, since that type
has no Pages field, `Pages: 123` will be placed in Extra.
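A simplified sketch of the fix (not the actual fromJSON() code; the exact
return shape of extractExtraFields() is an assumption here):

```js
// Simplified sketch; the destructured return value of extractExtraFields()
// is an assumption, not its exact signature
var { itemType, extra } = Zotero.Utilities.Internal.extractExtraFields(json.extra, this);
if (itemType) {
	// Previously the parsed type was dropped here, so the Type line was
	// stripped from Extra without the item type being applied
	this.setType(Zotero.ItemTypes.getID(itemType));
}
this.setField('extra', extra);
```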
https://forums.zotero.org/discussion/comment/369221/#Comment_369221
- Create userdata tables and indexes that are missing
- Delete tables and triggers that should no longer exist
- Run schema integrity check before user data migration
- Run schema integrity check after restart error
This is meant to address two problems:
1) Database damage, and subsequent use of the DB Repair Tool, that
results in missing tables
2) A small number of cases of schema update steps somehow not being
reflected in users' databases despite their having updated userdata
numbers, which are set within the same transaction. Until we figure
out how that's happening, we should start adding conditional versions
of schema update steps to the integrity check (see the sketch below).
This currently runs the update check only after a restart error, which
might not occur for all missed schema update steps, so we might want
other triggers for calling setIntegrityCheckRequired().
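For example, a conditional version of a schema update step added to the
integrity check might look something like this (the table name and column
definitions are hypothetical, not a real migration step):

```js
// Hypothetical example: recreate a userdata table only if it's missing.
// The table name and columns are illustrative.
if (!(await Zotero.DB.tableExists('someUserdataTable'))) {
	await Zotero.DB.queryAsync(
		"CREATE TABLE someUserdataTable (\n"
			+ "	itemID INTEGER PRIMARY KEY,\n"
			+ "	data TEXT,\n"
			+ "	FOREIGN KEY (itemID) REFERENCES items(itemID) ON DELETE CASCADE\n"
			+ ")"
	);
}
```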
- Using `sandboxPrototype` properly uses the window as the sandbox's prototype
- This commit removes the need for our patch in babel-worker.js:
3d0bc4cf9f
- We properly inject into frames in the client if we ever include frames
Our SingleFileZ integration saved images in subdirectories, following the
SingleFileZ format. However, Zotero does not support syncing subdirectories
of attachments. This commit switches back to a single HTML file with
base64-encoded resources. We think that the ~33% size increase from
base64-encoding resources will be offset by the compression of the HTML and
the removal of JavaScript and unused CSS.
This commit does not fix past snapshots that were saved using SingleFileZ.
Possibly caused by a third-party client uploading mtimes that then
aren't synced, or that differ from what gets synced. When we detect this,
try to correct it by updating mtimes on WebDAV and the API to match the
local file.
https://forums.zotero.org/discussion/83554/zotero-loop-syncs-2000-items
Zotero.File.move() now forces `overwrite` if the old and new filenames
differ only by case, since otherwise, on a case-insensitive filesystem,
OS.File.move()'s existence check thinks the target file already exists.
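A simplified sketch of that check (not the exact Zotero.File.move() code;
the wrapper name here is made up):

```js
// Simplified sketch, not the exact Zotero.File.move() logic
async function moveWithCaseOnlyRenameSupport(oldPath, newPath, options = {}) {
	var oldName = OS.Path.basename(oldPath);
	var newName = OS.Path.basename(newPath);
	if (oldName != newName && oldName.toLowerCase() == newName.toLowerCase()) {
		// Case-only rename: on a case-insensitive filesystem, the target path
		// resolves to the source file, so the existence check would consider
		// the target to already exist unless overwriting is forced
		options.overwrite = true;
	}
	return OS.File.move(oldPath, newPath, { noOverwrite: !options.overwrite });
}
```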
Previously, if an object was uploaded but the API returned 'unchanged',
the uploaded data would be written to the sync cache, which, given that
most requests are patch requests, could result in an empty or mostly
empty object being saved to the sync cache. That would cause the next
sync to treat most/all local fields as changed and either upload them
unnecessarily or trigger a conflict instead of merging changes
automatically.
When non-conflicting changes were automatically merged, the local object
would be correctly marked as unsynced, but the merged object rather than
the remote object would be saved to the sync cache. When the object was
then uploaded, it matched the cache version exactly, so an empty patch
object (other than an unchanged dateModified, which is always included)
would be uploaded and the local change wouldn't make it to the server.
The empty patch would result in an 'unchanged' response, which would
cause the empty patch object to be saved to the sync cache (which is a
bug that I'll fix separately). If the local object was modified again,
the patch would include all fields (since the cache object was empty)
and the local change would be uploaded, but there could also be
unnecessary conflicts due to it looking like all local fields had been
modified.
This patch causes the remote object to be saved to the sync cache
instead, so the merged-in local change still registers as a local edit
against the cached version and is correctly uploaded.
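A self-contained illustration of the difference (the field names and the
diffing helper are made up, but the shape of the problem is the same):

```js
// Illustrative only: the uploaded patch is roughly the diff between the
// local data and whatever is in the sync cache
function makePatch(local, cached) {
	var patch = {};
	for (let field of Object.keys(local)) {
		if (JSON.stringify(local[field]) !== JSON.stringify(cached[field])) {
			patch[field] = local[field];
		}
	}
	return patch;
}

var remote = { title: "Remote Title", place: "London" };
var merged = { title: "Remote Title", place: "Berlin" }; // local edit kept by the merge

// Old behavior: the merged object was cached, so the next patch was empty
// and the local edit never reached the server
console.log(makePatch(merged, merged)); // {}

// New behavior: the remote object is cached, so the patch contains the edit
console.log(makePatch(merged, remote)); // { place: "Berlin" }
```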
Don't include linked-object or replaced-item relations. Previously, if
you duplicated an item, modified it to represent a different source, and
dragged it to another library where you had already copied the original
item, the new item wouldn't be transferred.
https://forums.zotero.org/discussion/comment/353246/#Comment_353246
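Something along these lines (a hedged sketch, not the actual code; it
assumes the standard predicates owl:sameAs for linked-object relations and
dc:replaces for replaced-item relations):

```js
// Hedged sketch: copy an item's relations to a new item, skipping
// linked-object and replaced-item relations. The predicate values are
// assumed to be the standard ones.
function copyRelations(oldItem, newItem) {
	var skipPredicates = new Set(['owl:sameAs', 'dc:replaces']);
	var relations = {};
	for (let [predicate, objects] of Object.entries(oldItem.getRelations())) {
		if (!skipPredicates.has(predicate)) {
			relations[predicate] = objects;
		}
	}
	newItem.setRelations(relations);
}
```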