- Archive remotely missing groups that the user chooses to keep
- Ignore archived groups that don't exist remotely
- Unarchive groups that become available again
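Roughly, the intended reconciliation looks like this (a sketch only -- the helper names and properties below are illustrative, not the actual API):

    // Illustrative sketch of the group rules above; promptToKeepGroup(),
    // group.archived, group.archive(), and group.unarchive() are assumed names
    async function reconcileGroups(localGroups, remoteGroupIDs) {
        for (let group of localGroups) {
            let existsRemotely = remoteGroupIDs.includes(group.id);
            if (existsRemotely) {
                // Available again -- unarchive if it was archived
                if (group.archived) {
                    await group.unarchive();
                }
            }
            else if (!group.archived) {
                // Remotely missing -- archive it if the user chooses to keep it
                if (await promptToKeepGroup(group)) {
                    await group.archive();
                }
            }
            // Already archived and still missing remotely -- ignore
        }
    }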
This reverts commit 60befe52e4 and adds a
better fix that leaves the notifier event in place. Feeds just don't
need to update after syncs during tests.
On reset, items are overwritten with pristine versions if available and deleted
otherwise, and then the library is marked for a full sync. Unsynced/changed
files are deleted and marked for download.
Closes #1002
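As a rough sketch of the reset behavior described above (all helper names are made up for illustration):

    // Illustrative only -- none of these function names are the real internals
    async function resetDataToServer(library) {
        for (let item of await getLocalItems(library)) {
            let pristine = await getPristineVersion(item);
            if (pristine) {
                // Overwrite local changes with the last-synced version
                await item.overwriteWith(pristine);
            }
            else {
                // No pristine copy available -- delete the item
                await item.erase();
            }
        }
        // Force a full sync so everything is re-fetched from the server
        await markLibraryForFullSync(library);

        // Unsynced/changed files are deleted and queued for download
        for (let file of await getUnsyncedOrChangedFiles(library)) {
            await deleteFile(file);
            await markFileForDownload(file);
        }
    }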
Todo:
- Handle API key access change (#953, in part)
- Handle 403 from data/file upload for existing users (#1041)
- Use custom exception for user-initiated sync cancellations, which can bubble
up to the sync runner -- this should help with a sync stop button (#915)
- Separate out deletions-downloading code
- Refactor delay generator handling on library version mismatch
- Clearer variable names
This is necessary to get a library version after the write instead of an
item version. Otherwise after a full-text write, the main library
version is behind, so the next sync checks all object types for that
library instead of getting a 304.
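For illustration, the change amounts to recording the post-write library version from the response rather than an item version (Last-Modified-Version is the Zotero API's version header; the helper names are assumptions):

    // Sketch only -- requestFullTextWrite() and setLastSyncedLibraryVersion()
    // are hypothetical names
    async function uploadFullTextBatch(library, batch) {
        let xhr = await requestFullTextWrite(library, batch);
        // Record the *library* version from the write response so the next sync
        // can get a 304 instead of re-checking every object type
        let version = parseInt(xhr.getResponseHeader('Last-Modified-Version'));
        await setLastSyncedLibraryVersion(library, version);
    }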
Full text is batched up to 500K characters or 10 items, whichever limit is
reached first.
This also switches to using ?format=versions for /fulltext requests,
which isn't currently necessary but reflects what it's actually doing.
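A minimal sketch of that batching rule (the limits come from above; the constants, generator, and entry shape are otherwise made up):

    const MAX_BATCH_CHARACTERS = 500000; // 500K characters
    const MAX_BATCH_ITEMS = 10;

    // Group full-text entries (assumed shape: { key, content }) into batches
    // that respect both limits
    function* makeFullTextBatches(entries) {
        let batch = [];
        let chars = 0;
        for (let entry of entries) {
            let len = entry.content.length;
            // Start a new batch if adding this entry would exceed either limit
            if (batch.length
                    && (batch.length >= MAX_BATCH_ITEMS
                        || chars + len > MAX_BATCH_CHARACTERS)) {
                yield batch;
                batch = [];
                chars = 0;
            }
            batch.push(entry);
            chars += len;
        }
        if (batch.length) {
            yield batch;
        }
    }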
Previously, objects were first downloaded and saved to the sync cache,
which was then processed separately to create/update local objects. This
meant that a server bug could result in invalid data in the sync cache
that would never be processed. Now, objects are saved as they're
downloaded and only added to the sync cache after being successfully
saved. The keys of objects that fail are added to a queue, and those
objects are refetched and retried on a backoff schedule or when a new
client version is installed (in case of a client bug or a client with
outdated data model support).
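A hedged sketch of that flow (helper names and the specific backoff delays are assumptions, not taken from the actual code):

    // Save objects as they're downloaded; only successfully saved objects go
    // into the sync cache, and the keys of failures go into a retry queue
    async function processDownloadedObjects(objects, failedKeys) {
        for (let json of objects) {
            try {
                await saveObjectFromJSON(json);
                await addToSyncCache(json);
            }
            catch (e) {
                failedKeys.add(json.key);
            }
        }
    }

    // Refetch and retry failed objects on a backoff schedule (delays assumed)
    const RETRY_DELAYS = [60, 300, 3600, 86400]; // seconds

    async function retryFailedObjects(failedKeys, attempt) {
        let delay = RETRY_DELAYS[Math.min(attempt, RETRY_DELAYS.length - 1)];
        await Zotero.Promise.delay(delay * 1000);
        let objects = await refetchObjectsByKey([...failedKeys]);
        failedKeys.clear();
        await processDownloadedObjects(objects, failedKeys);
    }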
An alternative would be to save to the sync cache first, evict objects that
fail, and add them to the queue, but that requires more complicated logic,
and it probably makes more sense just to buffer a few downloads ahead so
that processing is never waiting for downloads to finish.
Tests should make no assumptions about the presence of bundled files and
should do a full resetDB() if they need them. But most tests don't need
them, and they're very slow to install. We can reconsider this if we
drastically speed up DB resetting in tests (e.g., by caching a pristine
data directory).
Improves UX of sync authentication.
The account is now linked and unlinked, and a client-specific API key is
generated transparently in the background.
The API key is deleted on unlinking.
No sync options are allowed before linking an account.
This mostly gets ZFS file syncing and file conflict resolution working
with the API sync process. WebDAV will need to be updated separately.
Known issues:
- File sync progress is temporarily gone
- File uploads can result in an unnecessary 412 loop on the next data
sync
- This causes Firefox to crash on one of my computers during tests,
which would be easier to debug if it produced a crash log.
Also:
- Adds httpd.js for use in tests when FakeXMLHttpRequest can't be used
(e.g., saveURI()).
- Adds some additional test data files for attachment tests
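For example, an attachment test that needs a real HTTP response (e.g., for saveURI()) might look roughly like this -- the path, port, and fixture names are invented for illustration:

    // Hedged sketch of an httpd.js-backed test; only HttpServer and its
    // registerPathHandler()/start()/stop() methods are the real httpd.js API
    describe("Attachments", function () {
        var server;
        var port = 16213; // arbitrary

        before(function () {
            server = new HttpServer();
            server.registerPathHandler("/test.pdf", function (request, response) {
                response.setStatusLine(request.httpVersion, 200, "OK");
                response.setHeader("Content-Type", "application/pdf", false);
                response.write(pdfFixtureData); // loaded from the new test data files
            });
            server.start(port);
        });

        after(function (done) {
            server.stop(done);
        });

        it("should save a remote PDF", async function () {
            // saveURI()-based code under test would fetch from the local server
            // here, e.g. "http://127.0.0.1:" + port + "/test.pdf"
        });
    });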
Also:
* _finalizeErase in Zotero.DataObject is now inheritable
* Call _initErase before starting a DB transaction
* Removes Zotero.Libraries.add and Zotero.Libraries.remove (these don't appear to be used anymore)
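A hedged sketch of the resulting erase ordering (only _initErase, _finalizeErase, and the Zotero.DB.executeTransaction() call reflect real names; the rest is assumed):

    // Illustrative only -- not the actual Zotero.DataObject implementation
    async function eraseObject(obj) {
        let env = {};
        // Pre-erase setup now runs before the DB transaction is opened
        await obj._initErase(env);

        await Zotero.DB.executeTransaction(async function () {
            await obj._eraseData(env); // assumed internal hook
        });

        // Inheritable, so subclasses can extend post-erase behavior
        await obj._finalizeErase(env);
    }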
There's a lot more to do, and this isn't ready for actual usage, but the
basic functionality is mostly in place and has decent test coverage. It
can successfully upgrade a library last used with classic syncing and
pull down changes via the API. Uploading mostly works but is currently
disabled for safety until it has better test coverage.
Downloaded JSON is first saved to a cache table, which is then used to
populate other tables and later for generating PATCH requests and
automatically resolving conflicts (since it shows what was changed
locally and what was changed remotely). Objects with unmet dependencies
or unknown fields are skipped for now but don't block the rest of the
sync.
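To illustrate how the cached JSON enables both PATCH generation and automatic conflict resolution (a sketch with invented helper names, and field comparison simplified to flat values):

    // Diff local and remote JSON against the cached (last-synced) JSON: fields
    // changed only locally go into the PATCH body, fields changed on both sides
    // to different values are real conflicts, and everything else resolves itself
    function buildPatchAndConflicts(cachedJSON, localJSON, remoteJSON) {
        let patch = {};
        let conflicts = [];
        let fields = new Set([
            ...Object.keys(cachedJSON),
            ...Object.keys(localJSON),
            ...Object.keys(remoteJSON)
        ]);
        for (let field of fields) {
            let changedLocally = localJSON[field] !== cachedJSON[field];
            let changedRemotely = remoteJSON[field] !== cachedJSON[field];
            if (changedLocally && changedRemotely
                    && localJSON[field] !== remoteJSON[field]) {
                // Both sides changed the field to different values -- real conflict
                conflicts.push(field);
            }
            else if (changedLocally) {
                // Changed only locally -- include in the PATCH upload
                patch[field] = localJSON[field];
            }
            // Changed only remotely (or unchanged): accept the remote value
        }
        return { patch, conflicts };
    }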
Some of the bigger remaining to-dos:
- Tests for uploading
- Re-do the preferences to get an API key
- File sync integration
- Full-text syncing integration
- Manual conflict resolution (though this already includes much smarter
conflict handling that automatically resolves many conflicts)