Say "Use the [local|remote] version for all remaining conflicts" for
everything instead of saying "Use [local|remote] fields for all
remaining conflicts" for some conflicts.
This also fixes a test failure after 54343c49fb.
Check file-editing access for the group from the API before offering to
reset, update the filesEditable setting properly, and restart the sync
automatically after resetting.
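A minimal sketch of that check, assuming a hypothetical `api.getGroup()`
helper; the `fileEditing` field name follows the web API's group metadata,
and the real logic would also account for the user's role in the group:

```js
async function updateGroupFilesEditable(api, group) {
    // GET /groups/<groupID> -- 'fileEditing' can be 'none', 'admins', or
    // 'members' in the group JSON (simplified here: the role is ignored)
    const json = await api.getGroup(group.id);
    const filesEditable = json.data.fileEditing !== 'none';
    if (group.filesEditable !== filesEditable) {
        // Keep the local setting in line with what the API reports
        group.filesEditable = filesEditable;
        await group.save();
    }
    return filesEditable;
}
```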
Restores the "Restore to Zotero Server" functionality, now using the
API:
1. Get all remote keys and send `DELETE` for any that don't exist
locally.
2. Upload all local objects in full (non-patch) mode using only the
library version, so that the remote versions are overwritten.
3. Reset file sync history, causing all files to be uploaded (or, more
likely, reassociated with existing remote files).
Since these are treated as regular updates on the server, they'll sync
down to other clients normally. Unsynced changes by other clients might
still trigger conflicts.
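Roughly, the flow looks like this (a sketch with hypothetical `api`/`store`
helpers; real code would batch requests and repeat this for each object
type):

```js
async function restoreToServer(api, store, libraryID) {
    // 1. Get all remote keys and DELETE any that don't exist locally
    const remoteKeys = await api.getAllObjectKeys(libraryID, 'item');
    const localKeys = new Set(await store.getAllKeys(libraryID, 'item'));
    let version = await api.getLibraryVersion(libraryID);
    const toDelete = remoteKeys.filter(key => !localKeys.has(key));
    if (toDelete.length) {
        version = await api.deleteObjects(libraryID, 'item', toDelete, version);
    }
    // 2. Upload every local object in full (non-patch) mode, sending only
    // the library version so the remote versions are overwritten
    for (const key of localKeys) {
        const json = await store.getFullJSON(libraryID, key);
        version = await api.uploadObject(libraryID, 'item', json, version);
    }
    // 3. Reset file sync history so files are uploaded or reassociated
    await store.resetFileSyncHistory(libraryID);
}
```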
This and Reset File Sync History can now also be run on group libraries,
with a library selector in the Reset pane (which I forgot to add with
React).
The full sync option is now removed from the Reset pane, since there
wasn't ever really a reason to run it manually.
We should be able to reimplement Restore from Online Library (#1386)
using the inverse of this approach.
Closes #914
While objects in the sync queue that fail to save should remain in the
queue, objects that just don't exist remotely need to be removed, or
else they'll be retried forever.
If an object exists locally but not remotely and the local version has a
version number, that's an error. I don't think that should ever happen,
but it can if things somehow get out of sync due to other bugs.
To address this, reprocess the API delete log during a full sync and then
reset the version number of all remaining local objects that don't exist
remotely (not just unmodified objects, as was previously the case) to 0
so that they're uploaded.
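As a sketch (helper names are illustrative):

```js
async function reconcileMissingRemotes(api, store, libraryID, objectType) {
    // Reprocess the API delete log first
    await store.reprocessDeleteLog(libraryID, objectType);
    const remoteKeys = new Set(await api.getAllObjectKeys(libraryID, objectType));
    for (const key of await store.getAllKeys(libraryID, objectType)) {
        if (!remoteKeys.has(key)) {
            // Reset all remaining local-only objects -- not just unmodified
            // ones -- to version 0 so they're queued for upload
            await store.setObjectVersion(libraryID, objectType, key, 0);
        }
    }
}
```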
When remote deletions are reprocessed, delete local objects that haven't
been modified, and show the conflict resolution window for any that have
(see the sketch below).
Also:
- Clean up checking of last remote library version during download
syncs
- Add Zotero.DataObjects.getAllKeys()
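The deletion handling, sketched with Zotero-style accessors
(`getByLibraryAndKey()` and the `synced` flag exist on data objects; the
conflict-window helper here is hypothetical):

```js
async function processRemoteDeletions(objectsClass, libraryID, deletedKeys, showConflictWindow) {
    const conflicts = [];
    for (const key of deletedKeys) {
        const obj = objectsClass.getByLibraryAndKey(libraryID, key);
        if (!obj) continue; // already gone locally
        if (obj.synced) {
            await obj.eraseTx(); // unmodified locally -- just delete it
        }
        else {
            conflicts.push(obj); // modified locally -- let the user decide
        }
    }
    if (conflicts.length) {
        await showConflictWindow(conflicts);
    }
}
```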
If the item was deleted on one side and moved to the trash on the other,
just delete the item on the trash side. Since trash emptying happens
automatically, this would otherwise result in a conflict even if the
user carefully avoided making changes before a manual sync.
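A sketch of the auto-resolution rule (property names mirror item JSON,
where `deleted` marks an item in the trash):

```js
function resolveDeletionConflict(localItem, remoteDeleted) {
    // Deleted on one side, trashed on the other: honor the deletion
    // rather than prompting, since trash emptying is automatic
    if (remoteDeleted && localItem.deleted) {
        return { action: 'delete' };
    }
    // Everything else still goes to the conflict resolution window
    return { action: 'conflict' };
}
```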
If the API returns a modified item after an upload (e.g., to strip invalid
characters), don't update the Date Modified field when saving those changes to
the local version (though it would still be good to avoid API-side changes as
much as possible).
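A sketch of the save step; treat the exact option name as an assumption,
though it follows the save-option style used elsewhere in the codebase:

```js
async function applyServerChanges(item, returnedJSON) {
    item.fromJSON(returnedJSON);
    await item.saveTx({
        // The server's tweak isn't a user edit, so leave Date Modified alone
        skipDateModifiedUpdate: true
    });
}
```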
And omit these properties in ZFS file sync requests
The API previously didn't allow these properties to be set for group items,
because they were set atomically during the file upload process. But 1) that's
not really necessary (it makes a little sense for 'filename', but it's not a
big deal if an old file is renamed on another computer before the new file is
synced down), and 2) skipping them results in the properties getting erased
after items are uploaded, because the empty values returned by the server
overwrite the local values.
- Use a custom exception for user-initiated sync cancellations, which can
  bubble up to the sync runner (sketched after this list) -- this should help
  with a sync stop button (#915)
- Separate out deletions-downloading code
- Refactor delay generator handling on library version mismatch
- Clearer variable names
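A minimal sketch of the cancellation exception and how the runner might
treat it (names are illustrative, not the actual ones):

```js
class UserCancelledException extends Error {
    constructor(message = 'Sync cancelled by user') {
        super(message);
        this.name = 'UserCancelledException';
    }
}

// The runner distinguishes cancellations from real errors; runSync is a
// stand-in for the actual sync entry point
async function runWithCancellation(runSync) {
    try {
        await runSync();
    }
    catch (e) {
        if (e instanceof UserCancelledException) {
            return; // stop quietly -- groundwork for a stop button (#915)
        }
        throw e;
    }
}
```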
Previously, objects were first downloaded and saved to the sync cache,
which was then processed separately to create/update local objects. This
meant that a server bug could result in invalid data in the sync cache
that would never be processed. Now, objects are saved as they're
downloaded and only added to the sync cache after being successfully
saved. The keys of objects that fail are added to a queue, and those
objects are refetched and retried on a backoff schedule or when a new
client version is installed (in case of a client bug or a client with
outdated data model support).
An alternative would be to save to the sync cache first and evict
objects that fail and add them to the queue, but that requires more
complicated logic, and it probably makes more sense just to buffer a few
downloads ahead so that processing is never waiting for downloads to
finish.
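The new flow, sketched with hypothetical `store` helpers and an
illustrative backoff schedule:

```js
const RETRY_BACKOFF_HOURS = [1, 6, 24, 72]; // illustrative, not the real schedule

async function processDownloadedObjects(store, libraryID, type, jsonBatch) {
    for (const json of jsonBatch) {
        try {
            await store.saveObject(libraryID, type, json);
            // Only cache after a successful save, so bad data can't linger
            await store.addToSyncCache(libraryID, type, json);
        }
        catch (e) {
            // Failed keys are refetched and retried later, or when a new
            // client version is installed
            await store.addToRetryQueue(libraryID, type, json.key, {
                tries: 1,
                nextCheck: nextCheckTime(1)
            });
        }
    }
}

function nextCheckTime(tries) {
    const hours = RETRY_BACKOFF_HOURS[Math.min(tries - 1, RETRY_BACKOFF_HOURS.length - 1)];
    return Date.now() + hours * 60 * 60 * 1000;
}
```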
While trying to get translation and citing working with asynchronously
generated data, we realized that drag-and-drop support was going to
be...problematic. Firefox only supports synchronous methods for
providing drag data (unlike, it seems, the DataTransferItem interface
supported by Chrome), which means that we'd need to preload all relevant
data on item selection (bounded by export.quickCopy.dragLimit) and keep
the translate/cite methods synchronous (or maintain two separate
versions).
What we're trying instead is doing what I said in #518 we weren't going
to do: loading most object data on startup and leaving many more
functions synchronous. Essentially, this takes the various load*()
methods described in #518, moves them to startup, and makes them operate
on entire libraries rather than individual objects.
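In sketch form (the `loadAll()` methods are illustrative stand-ins for the
library-wide versions of the load*() methods):

```js
async function loadLibraryData(libraryID) {
    // Each of these is fundamentally a set of SELECTs over the whole
    // library, with the results cached in memory
    await Zotero.Items.loadAll(libraryID);       // item data fields
    await Zotero.Collections.loadAll(libraryID); // collection membership
    await Zotero.Tags.loadAll(libraryID);        // tags
}

// Currently run for every library at startup; loading only the library
// being viewed is one possible optimization
async function loadAllLibraries() {
    for (const library of Zotero.Libraries.getAll()) {
        await loadLibraryData(library.libraryID);
    }
}
```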
The obvious downside here (other than undoing much of the work of the
last many months) is that it increases startup time, potentially quite a
lot for larger libraries. On my laptop, with a 3,000-item library, this
adds about 3 seconds to startup time. I haven't yet tested with larger
libraries. But I'm hoping that we can optimize this further to reduce
that delay. Among other things, this is loading data for all libraries,
when it should be able to load data only for the library being viewed.
But this is also fundamentally just doing some SELECT queries and
storing the results, so it really shouldn't need to be that slow (though
performance may be bounded a bit here by XPCOM overhead).
If we can make this fast enough, it means that third-party plugins
should be able to remain much closer to their current designs. (Some
things, including saving, will still need to be made asynchronous.)