`waitForDialog()` now returns a regular window, and
`window.document.documentElement.textContent` includes all form
elements, so this updates a test to include the checkbox label.
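A hypothetical sketch of the updated test expectation (the helper usage
and expected string are assumptions based on the description above, not
the actual test code):

```js
// waitForDialog() now resolves to a regular window
var dialog = await waitForDialog();
var text = dialog.document.documentElement.textContent;
// textContent now includes form elements, so the expected string must
// include the checkbox label as well
assert.include(text, expectedTextWithCheckboxLabel);
```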
An extra sync loop would be performed for every object downloaded, so a
download to an empty database could result in a huge number of
unnecessary loops. This was a regression from 52932b6eb, which started
queuing auto-syncs while a sync was in progress. The fix here is to skip
auto-sync for all objects saved from a sync download.
There are two new mechanisms involved (sketched below):
- Event-level notifier options that get passed to notify() at the top
level of extraData rather than being included with every object (e.g.,
because `skipAutoSync` should apply to an entire save transaction)
- The ability to pass event-level notifier options when initializing
a Zotero.Notifier.Queue, such as the one used for sync downloads
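A rough sketch of how the two mechanisms fit together; the exact
constructor signature and option handling are assumptions, not the
actual implementation:

```js
// Event-level options are passed when initializing the queue used for
// sync downloads (option name from the description above):
var notifierQueue = new Zotero.Notifier.Queue({ skipAutoSync: true });

// When the queue is committed, notify() receives the options at the top
// level of extraData, so an observer can check a single flag for the
// whole save transaction instead of per-object data:
//   extraData = { skipAutoSync: true, /* per-object entries */ }
```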
E.g., embedded attachment notes with no content don't have an itemNotes
row and don't output noteSchemaVersion in their JSON, but they shouldn't
trigger a conflict.
When non-conflicting changes were automatically merged, the local object
would be correctly marked as unsynced, but the merged object rather than
the remote object would be saved to the sync cache. When the object was
then uploaded, it matched the cache version exactly, so an empty patch
object (other than an unchanged dateModified, which is always included)
would be uploaded and the local change wouldn't make it to the server.
The empty patch would result in an 'unchanged' response, which would
cause the empty patch object to be saved to the sync cache (which is a
bug that I'll fix separately). If the local object was modified again,
the patch would include all fields (since the cache object was empty)
and the local change would be uploaded, but there could also be
unnecessary conflicts due to it looking like all local fields had been
modified.
This patch causes the remote object to be saved to the sync cache
instead, so the local change looks like a local edit and is correctly
uploaded.
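An illustrative sketch of why caching the merged object produced an
empty patch; the diffing helper here is hypothetical, not Zotero's
actual code:

```js
// Build the upload patch by diffing the local object against the
// version in the sync cache
function buildUploadPatch(localJSON, cachedJSON) {
    var patch = {};
    for (var field in localJSON) {
        if (JSON.stringify(localJSON[field]) !== JSON.stringify(cachedJSON[field])) {
            patch[field] = localJSON[field];
        }
    }
    // dateModified is always included, even if unchanged
    patch.dateModified = localJSON.dateModified;
    return patch;
}

// If the merged (i.e., local) object was cached, the patch contains only
// dateModified and the local change never reaches the server; caching
// the remote object instead makes the local change look like a local edit.
```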
Previously, files updated remotely wouldn't be downloaded in "as needed"
mode if a copy of the file already existed locally; such files could
only be re-downloaded by deleting them via Show File.
This causes remotely modified files that exist locally to be downloaded
at sync time, even in "as needed" mode, by marking them as
"force_download". While this might not be ideal for people who use "as
needed" to limit data transfer, it's better for people who use it simply
to limit local storage, and ending up with an outdated file while
offline seems worse than a little bit of extra data transfer.
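A sketch of the marking logic described above; the sync-state constants
follow Zotero's naming convention but should be treated as illustrative:

```js
// When a file is modified remotely, choose the sync state based on
// whether a local copy exists
if (localFileExists) {
    // Downloaded at sync time even in "as needed" mode
    item.attachmentSyncState = Zotero.Sync.Storage.Local.SYNC_STATE_FORCE_DOWNLOAD;
}
else {
    // Downloaded on open in "as needed" mode
    item.attachmentSyncState = Zotero.Sync.Storage.Local.SYNC_STATE_TO_DOWNLOAD;
}
```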
In the future, we'll likely also provide ways to explicitly download and
remove files, so keeping chosen files in sync makes sense.
Files modified remotely before this change (which were marked as
"to_download" instead of "force_download") won't be downloaded at sync
time in "as needed" mode, but they'll now be re-downloaded on open.
Fixes #1322
It shouldn't be possible for collections to be nested this way, but if
it happens, it shouldn't result in an infinite loop.
This removes one of the parent assignments at sync time.
`nsIFilePicker::show()` is removed in Firefox 60 in favor of `open()`,
which takes a callback (and apparently has been preferred for a long
time).
There's no point switching to that, so this module is a version of
nsIFilePicker with an async `show()` that returns a promise and some
XPCOM-isms replaced (e.g., string paths instead of nsIFile).
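A minimal sketch of the wrapper, assuming a thin module around the XPCOM
component (names other than nsIFilePicker's own are illustrative):

```js
// Wrap nsIFilePicker so callers can await show() instead of passing a
// callback to open()
function FilePicker() {
    this._fp = Components.classes["@mozilla.org/filepicker;1"]
        .createInstance(Components.interfaces.nsIFilePicker);
}

FilePicker.prototype.show = function () {
    return new Promise(resolve => this._fp.open(resolve));
};

// Replace XPCOM-isms: expose a string path instead of an nsIFile
Object.defineProperty(FilePicker.prototype, "file", {
    get: function () {
        return this._fp.file.path;
    }
});
```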
If an object changed on both sides and the changes were either
non-conflicting or identical but there were other local changes, the
local object was incorrectly being marked as synced, causing it not to
be uploaded until it was next modified locally.
Instead of My Publications being a separate library, have it be a
special collection inside My Library. Top-level items can be dragged
into it as before, and child items can be toggled off and on with a
button in the item pane. Newly added child items won't be shown by
default.
For upgraders, items in the My Publications library will be moved into
My Library, which might result in their being duplicated if the items
weren't removed from My Library. The client will then upload those new
items into My Library.
The API endpoint will continue to show items in the separate My
Publications library until My Publications items are added to My
Library, so the profile page will continue to show them.
The Zotero.DataDirectory equivalents return string paths instead of nsIFile
instances, so some of these calls now just use Zotero.File.pathToFile(), which
can be removed when the surrounding code is updated to OS.File.
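E.g., a hedged example (`Zotero.DataDirectory.dir` stands in for
whichever string-path property is used):

```js
// Where surrounding code still expects an nsIFile, convert the string
// path back temporarily
var file = Zotero.File.pathToFile(Zotero.DataDirectory.dir);
// Once the surrounding code uses OS.File, the conversion goes away:
//   await OS.File.exists(Zotero.DataDirectory.dir);
```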
Look for other profiles, from both apps (Firefox and Standalone), that
point to the data directory being migrated and update prefs.js in those
profiles to point to the new location.
Also reorganize code into Zotero.Profile and Zotero.DataDirectory
namespaces.
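A sketch of the prefs.js update described above; the file helpers are
real Zotero utilities, but the regex and variable names are assumptions:

```js
// Rewrite the dataDir pref in another profile's prefs.js to point to
// the migrated location
var prefsFile = OS.Path.join(otherProfileDir, 'prefs.js');
var contents = await Zotero.File.getContentsAsync(prefsFile);
contents = contents.replace(
    /user_pref\("extensions\.zotero\.dataDir",\s*".*?"\);/,
    'user_pref("extensions.zotero.dataDir", ' + JSON.stringify(newDataDir) + ');'
);
await Zotero.File.putContentsAsync(prefsFile, contents);
```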
On reset, items are overwritten with pristine versions if available and deleted
otherwise, and then the library is marked for a full sync. Unsynced/changed
files are deleted and marked for download.
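A sketch of the reset flow under these assumptions; the lookup and
full-sync helpers are hypothetical, while fromJSON()/saveTx()/eraseTx()
follow Zotero's data-object API:

```js
// For each locally changed object, restore the pristine version if one
// exists; otherwise delete the object
for (let item of changedItems) {
    let pristine = await getPristineVersion(item);  // hypothetical helper
    if (pristine) {
        item.fromJSON(pristine);
        await item.saveTx();
    }
    else {
        await item.eraseTx();
    }
}
// Then mark the library for a full sync (illustrative call)
await markLibraryForFullSync(libraryID);
```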
Closes #1002
Todo:
- Handle API key access change (#953, in part)
- Handle 403 from data/file upload for existing users (#1041)
This is necessary because you can copy a database synced with a
different account into the data directory without affecting the stored
pref.
Also tweak the text to use proper quotes and remove quaint references to
"the server".
Previously, objects were first downloaded and saved to the sync cache,
which was then processed separately to create/update local objects. This
meant that a server bug could result in invalid data in the sync cache
that would never be processed. Now, objects are saved as they're
downloaded and only added to the sync cache after being successfully
saved. The keys of objects that fail are added to a queue, and those
objects are refetched and retried on a backoff schedule or when a new
client version is installed (in case of a client bug or a client with
outdated data model support).
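A sketch of the new flow; the function names are placeholders for the
behavior described above:

```js
// Save objects as they're downloaded; cache only after a successful save
for (let data of downloadedJSON) {
    try {
        await saveObject(data);        // create/update the local object
        await addToSyncCache(data);    // added only after a successful save
    }
    catch (e) {
        // Failed keys go into a queue and are refetched and retried on a
        // backoff schedule or when a new client version is installed
        await addToRetryQueue(data.key);
    }
}
```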
An alternative would be to save to the sync cache first and evict
objects that fail and add them to the queue, but that requires more
complicated logic, and it probably makes more sense just to buffer a few
downloads ahead so that processing is never waiting for downloads to
finish.
While trying to get translation and citing working with asynchronously
generated data, we realized that drag-and-drop support was going to
be...problematic. Firefox only supports synchronous methods for
providing drag data (unlike, it seems, the DataTransferItem interface
supported by Chrome), which means that we'd need to preload all relevant
data on item selection (bounded by export.quickCopy.dragLimit) and keep
the translate/cite methods synchronous (or maintain two separate
versions).
What we're trying instead is doing what I said in #518 we weren't going
to do: loading most object data on startup and leaving many more
functions synchronous. Essentially, this takes the various load*()
methods described in #518, moves them to startup, and makes them operate
on entire libraries rather than individual objects.
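A rough sketch of the change, assuming hypothetical method names (the
real load*() methods are per-type, e.g., item data, creators, tags):

```js
// Before: data loaded lazily per object
//   await item.loadItemData();

// After (sketch): load each data type for entire libraries at startup
for (let library of Zotero.Libraries.getAll()) {
    for (let dataType of ['primaryData', 'itemData', 'creators', 'tags']) {
        // Placeholder for the relocated load*() methods
        await loadDataTypeForLibrary(library.libraryID, dataType);
    }
}
```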
The obvious downside here (other than undoing much of the work of the
last many months) is that it increases startup time, potentially quite a
lot for larger libraries. On my laptop, with a 3,000-item library, this
adds about 3 seconds to startup time. I haven't yet tested with larger
libraries. But I'm hoping that we can optimize this further to reduce
that delay. Among other things, this is loading data for all libraries,
when it should be able to load data only for the library being viewed.
But this is also fundamentally just doing some SELECT queries and
storing the results, so it really shouldn't need to be that slow (though
performance may be bounded a bit here by XPCOM overhead).
If we can make this fast enough, it means that third-party plugins
should be able to remain much closer to their current designs. (Some
things, including saving, will still need to be made asynchronous.)