Restores the "Restore to Zotero Server" functionality, now using the
API:
1. Get all remote keys and send `DELETE` for any that don't exist
locally.
2. Upload all local objects in full (non-patch) mode, using only the
library version, so that the remote versions are overwritten.
3. Reset file sync history, causing all files to be uploaded (or, more
likely, reassociated with existing remote files).
Since these are treated as regular updates on the server, they'll sync
down to other clients normally. Unsynced changes by other clients might
still trigger conflicts.
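A rough sketch of the flow (hedged: restoreToServer() and the
api.*/get*/resetFileSyncHistory() helpers are hypothetical names, not
the actual implementation):

```js
// Illustrative sketch only -- helper names are hypothetical
async function restoreToServer(library) {
    for (let objectType of ['collection', 'search', 'item']) {
        // 1. DELETE any remote objects that don't exist locally
        let remoteKeys = await api.getAllKeys(library, objectType);
        let localKeys = new Set(await getLocalKeys(library, objectType));
        let toDelete = remoteKeys.filter(key => !localKeys.has(key));
        await api.deleteObjects(library, objectType, toDelete);

        // 2. Upload every local object in full (non-patch) mode, sending
        //    only the library version so that per-object version checks
        //    don't apply and the remote versions are overwritten
        let objects = await getLocalObjects(library, objectType);
        await api.uploadObjects(library, objectType, objects, {
            patch: false,
            ifUnmodifiedSinceVersion: library.libraryVersion
        });
    }
    // 3. Reset file sync history so that all files are uploaded (or
    //    reassociated with existing remote files) on the next file sync
    await resetFileSyncHistory(library);
}
```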
This and Reset File Sync History can now also be run on group libraries,
with a library selector in the Reset pane (which I forgot to do in the
React rewrite).
The full sync option is now removed from the Reset pane, since there
wasn't ever really a reason to run it manually.
We should be able to reimplement Restore from Online Library (#1386)
using the inverse of this approach.
Closes #914
And use them in new importTextAttachment() and importHTMLAttachment()
test support functions. These can be used to avoid needing a hidden
browser for determining the character set of the imported text
documents.
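Hypothetical usage in a test (the exact signatures are assumptions):

```js
describe("Attachment import", function () {
    it("detects the character set without a hidden browser", async function () {
        // importTextAttachment() imports a bundled plain-text test file
        let item = await importTextAttachment();
        assert.equal(item.attachmentCharset, 'utf-8');
    });
});
```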
- Make Zotero.Attachments.createDirectoryForItem() delete an existing
directory instead of moving it to orphaned-files; it also now returns a
string path instead of an nsIFile (see the sketch after this list)
- Use the above function during file sync instead of
_deleteExistingAttachmentFiles(), which was partly broken
- Fix throwing on errors when saving some attachment types
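A sketch of the changed contract (assumed from the notes above; `item`
and `data` are placeholders):

```js
// createDirectoryForItem() now resolves to a string path; any existing
// directory is deleted rather than moved to orphaned-files
var path = await Zotero.Attachments.createDirectoryForItem(item);
await Zotero.File.putContentsAsync(OS.Path.join(path, "file.html"), data);
```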
The Zotero.DataDirectory equivalents return string paths instead of
nsIFile instances, so some of these calls now just use
Zotero.File.pathToFile(), which can be removed when the surrounding code
is updated to OS.File.
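For example, a minimal shim of the kind described above:

```js
// Temporary shim until this code is ported to OS.File:
// wrap the string path from Zotero.DataDirectory in an nsIFile
var dir = Zotero.File.pathToFile(Zotero.DataDirectory.dir);
```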
OS.File.DirectoryIterator, used by OS.File.removeDir(), isn't reliable
on Travis, returning entry.isDir == false for directories, so use
nsIFile instead.
See also: 2c2a5a378
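A sketch of the nsIFile-based workaround (assuming `dir` is an nsIFile
for the directory being cleared; the real code may differ):

```js
// Iterate with nsIFile instead of OS.File.DirectoryIterator, whose
// entry.isDir is unreliable on Travis
var entries = dir.directoryEntries;
while (entries.hasMoreElements()) {
    let entry = entries.getNext().QueryInterface(Components.interfaces.nsIFile);
    // isDirectory() is reliable here, unlike the iterator's entry.isDir
    entry.remove(entry.isDirectory()); // recursive for directories
}
```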
Previously, objects were first downloaded and saved to the sync cache,
which was then processed separately to create/update local objects. This
meant that a server bug could result in invalid data in the sync cache
that would never be processed. Now, objects are saved as they're
downloaded and only added to the sync cache after being successfully
saved. The keys of objects that fail are added to a queue, and those
objects are refetched and retried on a backoff schedule or when a new
client version is installed (in case of a client bug or a client with
outdated data model support).
An alternative would be to save to the sync cache first, evict objects
that fail, and add them to the queue, but that would require more
complicated logic, and it probably makes more sense to buffer a few
downloads ahead so that processing is never waiting for downloads to
finish.
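An illustrative sketch of the retry queue (the names and intervals are
assumptions, not the actual schedule):

```js
const RETRY_INTERVALS = [60, 300, 3600, 86400]; // seconds, capped backoff

// Each failed key is queued with an attempt count (>= 1) and the time of
// its last attempt; a key becomes eligible for refetch once its interval
// has elapsed. A new client version makes everything eligible again
// (handled separately).
function nextCheckTime(entry) {
    let i = Math.min(entry.attempts - 1, RETRY_INTERVALS.length - 1);
    return entry.lastAttempt + RETRY_INTERVALS[i] * 1000;
}

function keysToRetry(queue, now = Date.now()) {
    return queue.filter(e => now >= nextCheckTime(e)).map(e => e.key);
}
```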
While trying to get translation and citing working with asynchronously
generated data, we realized that drag-and-drop support was going to
be...problematic. Firefox only supports synchronous methods for
providing drag data (unlike, it seems, the DataTransferItem interface
supported by Chrome), which means that we'd need to preload all relevant
data on item selection (bounded by export.quickCopy.dragLimit) and keep
the translate/cite methods synchronous (or maintain two separate
versions).
What we're trying instead is doing what I said in #518 we weren't going
to do: loading most object data on startup and leaving many more
functions synchronous. Essentially, this takes the various load*()
methods described in #518, moves them to startup, and makes them operate
on entire libraries rather than individual objects.
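A minimal sketch of what one per-library load*() pass might look like
(the SQL and names are illustrative, not the real schema):

```js
var _itemData = new Map();

async function loadItemDataForLibrary(libraryID) {
    // One query for the whole library, with the results kept in memory so
    // that later reads can stay synchronous
    var sql = "SELECT itemID, fieldID, value FROM itemData "
        + "JOIN itemDataValues USING (valueID) "
        + "JOIN items USING (itemID) WHERE libraryID=?";
    await Zotero.DB.queryAsync(sql, [libraryID], {
        onRow: function (row) {
            let itemID = row.getResultByIndex(0);
            if (!_itemData.has(itemID)) {
                _itemData.set(itemID, new Map());
            }
            _itemData.get(itemID).set(row.getResultByIndex(1), row.getResultByIndex(2));
        }
    });
}
```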
The obvious downside here (other than undoing much of the work of the
last many months) is that it increases startup time, potentially quite a
lot for larger libraries. On my laptop, with a 3,000-item library, this
adds about 3 seconds to startup time. I haven't yet tested with larger
libraries. But I'm hoping that we can optimize this further to reduce
that delay. Among other things, this is loading data for all libraries,
when it should be able to load data only for the library being viewed.
But this is also fundamentally just doing some SELECT queries and
storing the results, so it really shouldn't need to be that slow (though
performance may be bounded a bit here by XPCOM overhead).
If we can make this fast enough, it means that third-party plugins
should be able to remain much closer to their current designs. (Some
things, including saving, will still need to be made asynchronous.)
Also:
- Remove last-sync-time mechanism for both WebDAV and ZFS, since it can
be determined by storage properties (mtime/md5) in data sync
- Add option to include synced storage properties in item toJSON()
instead of local file properties (see the sketch after this list)
- Set "Fake-Server-Match" header in setHTTPResponse() test support
function, which can be used for request count assertions -- see
resetRequestCount() and assertRequestCount() in webdavTest.js
- Allow string (e.g., 'to_download') instead of constant in
Zotero.Sync.Data.Local.setSyncState()
- Misc storage tweaks
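For the toJSON() option above, usage might look like this (the option
name is an assumption):

```js
// Include the synced mtime/md5 rather than reading the local file
var json = item.toJSON({ syncedStorageProperties: true });
```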
This mostly gets ZFS file syncing and file conflict resolution working
with the API sync process. WebDAV will need to be updated separately.
Known issues:
- File sync progress is temporarily gone
- File uploads can result in an unnecessary 412 loop on the next data
sync
- This causes Firefox to crash on one of my computers during tests,
which would be easier to debug if it produced a crash log.
Also:
- Adds httpd.js for use in tests when FakeXMLHttpRequest can't be used
(e.g., saveURI()); see the sketch below
- Adds some additional test data files for attachment tests
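A sketch of how httpd.js can stand in for a real server in such tests
(the resource path and port are assumptions):

```js
Components.utils.import("resource://zotero-unit/httpd.js"); // assumed path
var server = new HttpServer();
server.registerPathHandler("/file.txt", {
    handle: function (request, response) {
        response.setStatusLine(null, 200, "OK");
        response.setHeader("Content-Type", "text/plain", false);
        response.write("test");
    }
});
server.start(16213);
// ...exercise saveURI() against http://127.0.0.1:16213/file.txt, then:
await new Promise(resolve => server.stop(resolve));
```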