Move an item and its attachments to another library. Attachments are
removed as necessary if the target library doesn't support linked files
or doesn't support files at all.
If a standalone attachment existed in a collection and then was added to
a parent (e.g., via Create Parent Item), and attachment metadata was
also changed at the same time (e.g., due to file syncing), the
'collection item must be top level' trigger could throw on another
syncing computer. To work around this, remove collections first, then
make changes to the parentItemID columns, and then add new collections.
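A rough sketch of that ordering as direct queries, for illustration
only: the actual change lives in the item save code, and the table and
column names below are from the Zotero schema as I recall them.

    // Illustrative ordering only -- not the actual save() implementation
    async function applyRemoteParentChange(itemID, parentItemID, newCollectionIDs) {
        await Zotero.DB.executeTransaction(async function () {
            // 1. Remove existing collection memberships first, so the
            //    'collection item must be top level' trigger has nothing
            //    to object to once the item gains a parent
            await Zotero.DB.queryAsync(
                "DELETE FROM collectionItems WHERE itemID=?", itemID
            );
            // 2. Then set the parent
            await Zotero.DB.queryAsync(
                "UPDATE itemAttachments SET parentItemID=? WHERE itemID=?",
                [parentItemID, itemID]
            );
            // 3. Finally add any new collection memberships (only valid
            //    for top-level items, so none when a parent was just set)
            for (let collectionID of newCollectionIDs) {
                await Zotero.DB.queryAsync(
                    "INSERT INTO collectionItems (collectionID, itemID) VALUES (?, ?)",
                    [collectionID, itemID]
                );
            }
        });
    }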
Instead of My Publications being a separate library, have it be a
special collection inside My Library. Top-level items can be dragged
into it as before, and child items can be toggled off and on with a
button in the item pane. Newly added child items won't be shown by
default.
For upgraders, items in the My Publications library will be moved into
My Library, which might result in their being duplicated if the items
weren't removed from My Library. The client will then upload those new
items into My Library.
The API endpoint will continue to show items in the separate My
Publications library until My Publications items are added to My
Library, so the profile page will continue to show them.
The client skips synced storage properties (md5, mtime) when uploading items to
ZFS-enabled libraries, but since the API returns JSON with those values
included after writes, they still get saved to the sync cache. If the local
attachment is then modified and the client generates a patch against the
cached version with those properties skipped, the properties end up in the
patch JSON as empty strings, which would clear them on the server. This
changes Zotero.Item::toJSON() to skip those properties in patch mode as well.
This fixes a sync error ("Cannot change 'md5' directly in group library") when
a group attachment is updated locally.
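A conceptual sketch of the change (not the actual toJSON() code; 'patch'
here stands for the diff of the serialized item against
options.patchBase, the cached version):

    // Conceptual sketch only -- not the actual toJSON() implementation
    if (options.mode == 'patch') {
        for (let prop of ['md5', 'mtime']) {
            // The upload code deliberately skips these for ZFS libraries, so
            // a value present only in the cached patch base must not become
            // an empty string that would clear it on the server
            delete patch[prop];
        }
    }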
Those properties are also now omitted from ZFS file sync requests.
The API previously didn't allow these properties to be set for group items,
because they were set atomically during the file upload process. But 1) that's
not really necessary (it makes a little sense for 'filename', but it's not a
big deal if an old file is renamed on another computer before the new file is
synced down), and 2) skipping them results in the properties being erased
after items are uploaded, because the empty values returned by the server
overwrite the local values.
Before 5.0 we performed a regexp on new item data values to determine if
they were integers and saved them natively in SQLite if so. We no longer
do that, but setField() used strict equality when checking for changes,
so an item could be marked as changed when comparing to a new string
value (e.g., from a write response from the API, which always returns
strings). To avoid that, this converts all old values in the DB to
strings and saves all incoming values as strings automatically. (This
should also help with searching and some other things.)
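For illustration (not the actual setField() code), the comparison
problem and the fix look roughly like this:

    // Pre-5.0, a numeric-looking value could be stored natively in SQLite
    var oldValue = 2015;     // loaded from the DB as a number
    var newValue = "2015";   // API write responses always return strings
    oldValue === newValue;   // false -> field wrongly flagged as changed

    // Fix (sketch, inside setField()): normalize values to strings on the
    // way in; a schema step converts existing numeric values in the DB
    value = String(value);
    if (this._itemData[fieldID] === value) {
        return false;        // no real change, so don't mark the item dirty
    }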
While trying to get translation and citing working with asynchronously
generated data, we realized that drag-and-drop support was going to
be...problematic. Firefox only supports synchronous methods for
providing drag data (unlike, it seems, the DataTransferItem interface
supported by Chrome), which means that we'd need to preload all relevant
data on item selection (bounded by export.quickCopy.dragLimit) and keep
the translate/cite methods synchronous (or maintain two separate
versions).
What we're trying instead is doing what I said in #518 we weren't going
to do: loading most object data on startup and leaving many more
functions synchronous. Essentially, this takes the various load*()
methods described in #518, moves them to startup, and makes them operate
on entire libraries rather than individual objects.
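Very roughly, and with hypothetical method names, the shape of the
change is something like this:

    // Sketch only: bulk-load object data per library at startup instead of
    // lazily per object; the real code reuses the existing load*() methods
    async function loadLibraryData(libraryID) {
        // Each call is essentially one SELECT over the whole library, with
        // the results cached on the in-memory data objects
        await Zotero.Collections.loadAllData(libraryID); // hypothetical signature
        await Zotero.Searches.loadAllData(libraryID);    // hypothetical signature
        await Zotero.Items.loadAllData(libraryID);       // hypothetical signature
    }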
The obvious downside here (other than undoing much of the work of the
last many months) is that it increases startup time, potentially quite a
lot for larger libraries. On my laptop, with a 3,000-item library, this
adds about 3 seconds to startup time. I haven't yet tested with larger
libraries. But I'm hoping that we can optimize this further to reduce
that delay. Among other things, this is loading data for all libraries,
when it should be able to load data only for the library being viewed.
But this is also fundamentally just doing some SELECT queries and
storing the results, so it really shouldn't need to be that slow (though
performance may be bounded a bit here by XPCOM overhead).
If we can make this fast enough, it means that third-party plugins
should be able to remain much closer to their current designs. (Some
things, including saving, will still need to be made asynchronous.)
Also:
- Remove last-sync-time mechanism for both WebDAV and ZFS, since it can
be determined by storage properties (mtime/md5) in data sync
- Add option to include synced storage properties in item toJSON()
  instead of local file properties (see the usage sketch after this list)
- Set "Fake-Server-Match" header in setHTTPResponse() test support
function, which can be used for request count assertions -- see
resetRequestCount() and assertRequestCount() in webdavTest.js
- Allow string (e.g., 'to_download') instead of constant in
Zotero.Sync.Data.Local.setSyncState()
- Misc storage tweaks
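Usage sketches for the toJSON() option and the setSyncState() change
above; the option name and exact signatures are from memory and may not
match precisely:

    // Serialize for sync using the synced storage properties (mtime/md5)
    // rather than reading the local file
    var json = item.toJSON({ syncedStorageProperties: true });

    // Sync state can now be passed as a string instead of a constant
    await Zotero.Sync.Data.Local.setSyncState(item, 'to_download');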
This mostly gets ZFS file syncing and file conflict resolution working
with the API sync process. WebDAV will need to be updated separately.
Known issues:
- File sync progress is temporarily gone
- File uploads can result in an unnecessary 412 loop on the next data
sync
- This causes Firefox to crash on one of my computers during tests,
which would be easier to debug if it produced a crash log.
Also:
- Adds httpd.js for use in tests when FakeXMLHttpRequest can't be used
  (e.g., saveURI()); see the usage sketch below this list.
- Adds some additional test data files for attachment tests
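Typical httpd.js usage in a test looks something like the following; the
import path is an assumption, but the HttpServer API itself is the
standard Mozilla test server:

    // Inside an async test function
    Components.utils.import("resource://zotero-unit/httpd.js"); // path assumed

    var server = new HttpServer();
    server.registerPathHandler("/test.pdf", function (request, response) {
        response.setStatusLine(null, 200, "OK");
        response.setHeader("Content-Type", "application/pdf", false);
        response.write(pdfData); // test fixture loaded elsewhere
    });
    server.start(-1); // pick a random free port
    var baseURL = "http://127.0.0.1:" + server.identity.primaryPort;

    // ...exercise saveURI() or similar against baseURL + "/test.pdf"...

    await new Promise(resolve => server.stop(resolve));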
When called on an identified object (i.e., one with an id or
library/key), loadAllData() must be called first. When called on a new
object (which is more common anyway), fromJSON() can be called
immediately.
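In practice (sketch; the JSON is abbreviated):

    // New object: fromJSON() can be called immediately
    var item = new Zotero.Item;
    item.libraryID = libraryID;
    item.fromJSON({ itemType: "book", title: "Test" });
    await item.saveTx();

    // Identified object: load its data first
    var existing = Zotero.Items.get(itemID);
    await existing.loadAllData();
    existing.fromJSON(json);
    await existing.saveTx();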
Absolute paths have been stored as strings on all platforms for a while,
but old Mac persistent descriptors (Base64-encoded opaque alias records)
could still exist in the DB. Additionally, relative paths for stored
files were stored as Mozilla-specific opaque strings rather than UTF-8
strings.
This adds a schema step to convert those to string paths in the DB.
Since Mac persistent descriptors aren't converted if the file isn't
found, we still handle (and convert) old-style persistent descriptors if
necessary when reading paths from the DB.
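The read-time handling is roughly this (sketch; error handling is
simplified and the check for values that are already plain paths is
omitted):

    // Sketch: convert a legacy Mac persistent descriptor when reading a path
    function resolveStoredPath(storedValue) {
        var file = Components.classes["@mozilla.org/file/local;1"]
            .createInstance(Components.interfaces.nsILocalFile);
        try {
            // Throws if the value isn't a descriptor or the target can't
            // be resolved
            file.persistentDescriptor = storedValue;
            return file.path; // plain UTF-8 string path
        }
        catch (e) {
            // Not a persistent descriptor (or the file is gone), so treat
            // the stored value as a string path
            return storedValue;
        }
    }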
This also moves path string handling -- converting a path to a prefixed
string for stored or base-dir-relative files -- to the
Zotero.Item#attachmentPath setter instead of save() so that reading it
back immediately returns the correct value. One consequence is that the
attachment link mode must now be set before setting the path.
Zotero.Item#getFile() is now deprecated in favor of getFilePath() and
getFilePathAsync() (which checks file existence).
Zotero.File.directoryContains() now takes string paths instead of files.
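For example (sketch; async/await shown for brevity):

    var attachment = new Zotero.Item('attachment');
    // The link mode must now be set before the path, since the
    // attachmentPath setter needs it to build the prefixed or
    // base-dir-relative string
    attachment.attachmentLinkMode = Zotero.Attachments.LINK_MODE_LINKED_FILE;
    attachment.attachmentPath = '/path/to/test.pdf';
    await attachment.saveTx();

    // Preferred replacements for the deprecated getFile():
    var path = attachment.getFilePath();          // string path, no existence check
    var existingPath = await attachment.getFilePathAsync(); // also checks existence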
The conflict resolution window will appear much less frequently, since
non-conflicting field changes on both sides can be resolved
automatically, but genuine field conflicts still require manual conflict
resolution.
The merge pane is no longer editable, since the itembox code needed to
do that is async and can't run in a modal window, but editing there
isn't really necessary, particularly with conflicts happening less
frequently.
TODO:
- Remote item deletions
- File conflicts
- Maybe handle some edge cases where the conflicted items fail to save
Save uploaded data to the cache, and update the local object if
necessary (which it mostly shouldn't be, except for invalid characters
and HTML filtering in notes)
Also add some upload and JSON tests