Instead of My Publications being a separate library, have it be a
special collection inside My Library. Top-level items can be dragged
into it as before, and child items can be toggled off and on with a
button in the item pane. Newly added child items won't be shown by
default.
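A rough sketch of what the item-pane toggle might do, assuming a boolean flag
on the child item (the property name here is an assumption, not the final API):

    // Hypothetical handler for the item-pane show/hide button on a child item.
    // `inPublications` is an assumed property name, not necessarily the real one.
    async function togglePublicationsVisibility(item) {
        item.inPublications = !item.inPublications;
        await item.saveTx();
    }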
For upgraders, items in the My Publications library will be moved into
My Library, which may result in duplicates if the items were never
removed from My Library. The client will then upload those new items
into My Library.
The API endpoint will continue to show items in the separate My
Publications library until My Publications items are added to My
Library, so the profile page will continue to show them.
The Zotero.DataDirectory equivalents return string paths instead of nsIFile
instances, so some of these calls now just wrap the path with
Zotero.File.pathToFile(), a shim that can be removed once the surrounding code
is updated to use OS.File.
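For code that still expects an nsIFile, the interim pattern looks like this
(the .dir property name is assumed here):

    // Zotero.DataDirectory now returns string paths, so callers that still need
    // an nsIFile wrap the path for now:
    var dirFile = Zotero.File.pathToFile(Zotero.DataDirectory.dir); // returns an nsIFile
    // Once the surrounding code is converted to OS.File, the wrapper can go away:
    // let it = new OS.File.DirectoryIterator(Zotero.DataDirectory.dir);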
Look for other profiles, from both apps (Firefox and Standalone), that
point to the data directory being migrated and update prefs.js in those
profiles to point to the new location.
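A sketch of that migration step, assuming the pref is extensions.zotero.dataDir
and using a hypothetical profile-discovery helper:

    // Sketch only: findOtherAppProfiles() is a hypothetical helper. For each
    // profile of the other app whose prefs.js points at the old data directory,
    // rewrite extensions.zotero.dataDir to the new location.
    async function updateOtherAppProfiles(oldDir, newDir) {
        for (let profileDir of await Zotero.Profile.findOtherAppProfiles()) {
            let prefsFile = OS.Path.join(profileDir, 'prefs.js');
            let prefs = await Zotero.File.getContentsAsync(prefsFile);
            if (prefs.includes(JSON.stringify(oldDir))) {
                prefs = prefs.replace(JSON.stringify(oldDir), JSON.stringify(newDir));
                await Zotero.File.putContentsAsync(prefsFile, prefs);
            }
        }
    }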
Also reorganize code into Zotero.Profile and Zotero.DataDirectory
namespaces
On reset, items are overwritten with pristine versions if available and deleted
otherwise, and then the library is marked for a full sync. Unsynced/changed
files are deleted and marked for download.
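Roughly, the reset path does the following for each locally changed object
(helper names and the full-sync marker are illustrative):

    // Illustrative only: restore each changed object from its pristine
    // (last-synced) version if one exists, delete it otherwise, and then mark
    // the library for a full sync.
    async function resetLibraryItems(library, changedItems) {
        for (let item of changedItems) {
            let pristine = getPristineJSON(item); // hypothetical helper
            if (pristine) {
                item.fromJSON(pristine);
                await item.saveTx();
            }
            else {
                await item.eraseTx();
            }
        }
        markLibraryForFullSync(library); // hypothetical; e.g., clear the stored library version
    }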
Closes #1002
Todo:
- Handle API key access change (#953, in part)
- Handle 403 from data/file upload for existing users (#1041)
This is necessary because you can copy a database synced with a
different account into the data directory without affecting the stored
pref.
Also tweak the text to use proper quotes and remove quaint references to
"the server".
Previously, objects were first downloaded and saved to the sync cache,
which was then processed separately to create/update local objects. This
meant that a server bug could result in invalid data in the sync cache
that would never be processed. Now, objects are saved as they're
downloaded and only added to the sync cache after being successfully
saved. The keys of objects that fail are added to a queue, and those
objects are refetched and retried on a backoff schedule or when a new
client version is installed (in case of a client bug or a client with
outdated data model support).
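In outline, the new download loop looks like this (helper names are
hypothetical):

    // Simplified sketch of the new flow: save each object as it's downloaded,
    // cache it only after a successful save, and queue failures for retry.
    async function processDownloadedObjects(libraryID, objectType, downloadedObjects) {
        for (let json of downloadedObjects) {
            try {
                await saveObjectFromJSON(libraryID, objectType, json); // hypothetical helper
                await addToSyncCache(libraryID, objectType, json);     // only cached after a successful save
            }
            catch (e) {
                Zotero.logError(e);
                await queueForRetry(libraryID, objectType, json.key);  // refetched on a backoff schedule
            }
        }
    }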
An alternative would be to save to the sync cache first and evict
objects that fail and add them to the queue, but that requires more
complicated logic, and it probably makes more sense just to buffer a few
downloads ahead so that processing is never waiting for downloads to
finish.
While trying to get translation and citing working with asynchronously
generated data, we realized that drag-and-drop support was going to
be...problematic. Firefox only supports synchronous methods for
providing drag data (unlike, it seems, the DataTransferItem interface
supported by Chrome), which means that we'd need to preload all relevant
data on item selection (bounded by export.quickCopy.dragLimit) and keep
the translate/cite methods synchronous (or maintain two separate
versions).
What we're trying instead is doing what I said in #518 we weren't going
to do: loading most object data on startup and leaving many more
functions synchronous. Essentially, this takes the various load*()
methods described in #518, moves them to startup, and makes them operate
on entire libraries rather than individual objects.
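Conceptually, the startup sequence becomes something like this (the aggregate
method name is an assumption):

    // Illustrative startup loading: run the former on-demand load*() calls once
    // per library so that most data getters can remain synchronous afterward.
    for (let library of Zotero.Libraries.getAll()) {
        await library.loadAllDataTypes(); // hypothetical aggregate of the load*() methods
    }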
The obvious downside here (other than undoing much of the work of the
past several months) is that it increases startup time, potentially quite a
lot for larger libraries. On my laptop, with a 3,000-item library, this
adds about 3 seconds to startup time. I haven't yet tested with larger
libraries. But I'm hoping that we can optimize this further to reduce
that delay. Among other things, this is loading data for all libraries,
when it should be able to load data only for the library being viewed.
But this is also fundamentally just doing some SELECT queries and
storing the results, so it really shouldn't need to be that slow (though
performance may be bounded a bit here by XPCOM overhead).
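For example, the item-data portion of the load is essentially one query of this
shape per library (schema details approximate):

    // Approximate shape of the bulk load; exact table/column names may differ.
    async function loadItemData(libraryID) {
        var sql = "SELECT itemID, fieldID, value FROM itemData "
            + "JOIN itemDataValues USING (valueID) "
            + "JOIN items USING (itemID) "
            + "WHERE libraryID=?";
        var rows = await Zotero.DB.queryAsync(sql, libraryID);
        // Rows get stashed in an in-memory map keyed by itemID for later synchronous access.
        return rows;
    }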
If we can make this fast enough, it means that third-party plugins
should be able to remain much closer to their current designs. (Some
things, including saving, will still need to be made asynchronous.)
Also:
- Remove last-sync-time mechanism for both WebDAV and ZFS, since it can
be determined by storage properties (mtime/md5) in data sync
- Add option to include synced storage properties in item toJSON()
instead of local file properties
- Set "Fake-Server-Match" header in setHTTPResponse() test support
function, which can be used for request count assertions -- see
resetRequestCount() and assertRequestCount() in webdavTest.js
- Allow string (e.g., 'to_download') instead of constant in
Zotero.Sync.Data.Local.setSyncState() -- see the example below
- Misc storage tweaks
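For instance, the toJSON() option and the setSyncState() string form above can
be used roughly like this (option and argument names are inferred from the
descriptions, not a documented API):

    // Illustrative only; option/argument names are assumptions.
    var json = attachmentItem.toJSON({ syncedStorageProperties: true }); // synced mtime/md5 instead of the local file's
    await Zotero.Sync.Data.Local.setSyncState(attachmentItem.id, 'to_download'); // string instead of constant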
This mostly gets ZFS file syncing and file conflict resolution working
with the API sync process. WebDAV will need to be updated separately.
Known issues:
- File sync progress is temporarily gone
- File uploads can result in an unnecessary 412 loop on the next data
sync
- This causes Firefox to crash on one of my computers during tests,
which would be easier to debug if it produced a crash log.
Also:
- Adds httpd.js for use in tests when FakeXMLHttpRequest can't be used
(e.g., saveURI()) -- see the example below.
- Adds some additional test data files for attachment tests
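A typical use of httpd.js in a test looks something like this (the resource URL
and paths are examples, not necessarily how the file ends up being registered):

    // Minimal httpd.js usage; URL, path, and body are arbitrary examples.
    Components.utils.import("resource://zotero-unit/httpd.js"); // registration path is an assumption
    var server = new HttpServer();
    server.start(-1); // pick any free port
    var port = server.identity.primaryPort;
    server.registerPathHandler("/file.txt", function (request, response) {
        response.setStatusLine(request.httpVersion, 200, "OK");
        response.write("Hello");
    });
    // ...exercise saveURI() against http://127.0.0.1:<port>/file.txt, then:
    await new Promise(resolve => server.stop(resolve));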
This will appear much less frequently, since non-conflicting field changes on
both sides can be resolved automatically, but genuine field conflicts still
require manual conflict resolution.
The merge pane is no longer editable, since the itembox code to do that is
async and can't run in a modal window, but editing there isn't really
necessary, particularly now that conflicts come up less frequently.
TODO:
- Remote item deletions
- File conflicts
- Maybe handle some edge cases where the conflicted items fail to save
There's a lot more to do, and this isn't ready for actual usage, but the
basic functionality is mostly in place and has decent test coverage. It
can successfully upgrade a library last used with classic syncing and
pull down changes via the API. Uploading mostly works but is currently
disabled for safety until it has better test coverage.
Downloaded JSON is first saved to a cache table, which is then used to
populate other tables and later for generating PATCH requests and
automatically resolving conflicts (since it shows what was changed
locally and what was changed remotely). Objects with unmet dependencies
or unknown fields are skipped for now but don't block the rest of the
sync.
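The cache table is essentially one row of raw JSON per downloaded object
version; roughly (exact columns are illustrative):

    // Approximate shape of the cache table; real names/columns may differ.
    await Zotero.DB.queryAsync(
        "CREATE TABLE syncCache ("
        + "libraryID INT NOT NULL, "
        + "key TEXT NOT NULL, "
        + "syncObjectTypeID INT NOT NULL, "
        + "version INT NOT NULL, "
        + "data TEXT, "                                         // the raw downloaded JSON
        + "PRIMARY KEY (libraryID, key, syncObjectTypeID, version)"
        + ")"
    );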
Some of the bigger remaining to-dos:
- Tests for uploading
- Re-do the preferences to get an API key
- File sync integration
- Full-text syncing integration
- Manual conflict resolution (though this already includes much smarter
conflict handling that automatically resolves many conflicts)