Introduce placeholder migrations for Backbone models so they never implicitly
run migrations when they are `fetch`ed. We prefer to run our migrations
explicitly upon app startup and then let Backbone models be (slightly) dumb(er)
models, without inadvertently triggering migrations.
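A rough sketch of the idea, assuming a backbone-indexeddb-style adapter
where the database spec handed to Backbone carries a migrations array
(the version number and database id below are illustrative):

    var placeholderMigrations = [
      {
        version: 17,
        migrate: function(transaction, next) {
          // no-op: the real migrations already ran at startup
          next();
        },
      },
    ];

    var Database = {
      id: 'signal',
      migrations: placeholderMigrations,
    };

Models fetched through a spec like this see only the no-op migration,
while the real migrations run explicitly at startup.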
- Logging is available in the main process as well as the renderer
process, and entries all go to one set of rotating files. Log entries
in the renderer process go to DevTools as well as the console. Entries
from the main process only show up in the console.
- We save three days of logs, one day per file in %userData%/logs
- The 'debug' object store is deleted in a new database migration
- Timestamps and level are included in the new log we generate for
publishing, as well as in the DevTools output
- The bunyan API is exposed via window.log (providing the ability to
log at different levels, and to save objects instead of just text), so
we can move our code to it over time; see the sketch below.
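A rough sketch of the call sites this enables; window.log follows
bunyan's level methods, so structured data can be passed alongside the
message (the specific messages here are made up):

    window.log.info('sync request sent');
    window.log.warn({ attempt: 3 }, 'retrying message send'); // object + text
    window.log.error('socket closed unexpectedly');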
FREEBIE
Expiring messages received before 0.31.0 may not have an expires_at time
populated. Loading these messages once will update their expires_at if
it wasn't already set. To avoid loading too many messages into memory,
add them individually, and remove them from the collection as soon as
they are added, allowing them to be garbage collected immediately.
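A sketch of the load-and-release pattern described above; the
collection and method names (MessageCollection, fetchExpiring,
setToExpire) are illustrative rather than exact:

    var messages = new Whisper.MessageCollection();
    messages.on('add', function(message) {
      message.setToExpire();        // fills in expires_at if it was missing
      messages.remove(message.id);  // drop the reference so it can be GC'd
    });
    messages.fetchExpiring();       // load only messages with an expire timer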
// FREEBIE
Occasionally these will fail if they happen to be executed before the
necessary dependencies (storage, ConversationCollection) are declared.
// FREEBIE
This should really only be called once, from background.js.
Calling it twice can cause doubled listeners for the registration_done
event, which in turn leads to duplicate post-registration callbacks,
dual sync requests, and an eventual datastore inconsistency.
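An illustrative guard for that single-call expectation (the names here
are hypothetical); the point is that wiring the listener twice doubles
every post-registration callback:

    var initialized = false;
    function init() {
      if (initialized) {
        return;
      }
      initialized = true;
      Whisper.events.on('registration_done', onRegistrationDone);
    }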
Fixes #670
// FREEBIE
Using the search field produces a filtered view of all contacts and
groups containing the input. To make this fast and scalable, add an
index on a 'tokens' array containing words from the conversation name
and different forms of phone number.
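A sketch of the index and the tokens it searches over (the store and
index names are illustrative); a multiEntry index lets a conversation
record match on any one of its tokens:

    // during the upgrade transaction:
    conversations.createIndex('search', 'tokens', { multiEntry: true });

    // tokens stored on each conversation record:
    conversation.tokens = [
      'alice', 'smith',               // words from the conversation name
      '+14155550123', '4155550123'    // different forms of the phone number
    ];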
Closes #365
// FREEBIE
Storing multiple sessions in a single IndexedDB record is prone to
clobbering data due to races between requests to update multiple device
sessions for the same number, since you have to read the current state
of the device->session map and then write it back. Splitting the
records up lets those updates happen in parallel. Selecting all the
sessions for a given number can still be done efficiently thanks to
IndexedDB range queries.
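One way that per-number lookup can stay cheap, assuming per-device
records keyed as number + '.' + deviceId (the key format is an
assumption, not the exact scheme used):

    // number and sessionStore are assumed to be in scope
    var sessions = [];
    var range = IDBKeyRange.bound(number + '.', number + '.\uffff');
    sessionStore.openCursor(range).onsuccess = function(event) {
      var cursor = event.target.result;
      if (cursor) {
        sessions.push(cursor.value);
        cursor.continue();
      }
    };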
* Session records are now opaque strings, so treat them that way:
- no more cross checking identity key and session records
- Move hasOpenSession to axolotl wrapper
- Remote registration ids must be fetched asynchronously via the
protocol wrapper
* Implement async AxolotlStore using textsecure.storage (see the sketch
after this list)
* Add some db stores and move prekeys and signed keys to IndexedDB
* Add storage tests
* Rename identityKey storage key from libaxolotl25519KeyidentityKey to
simply identityKey, since it's no longer hardcoded in libaxolotl
* Rework registration and key-generation, keeping logic in libtextsecure
and rendering in options.js.
* Remove key_worker since workers are handled at the libaxolotl level
now
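A rough sketch of the async store shape mentioned above; the method
names loosely follow an axolotl-style storage interface and the
textsecure.storage keys are illustrative:

    var AxolotlStore = {
      getIdentityKey: function() {
        return Promise.resolve(textsecure.storage.get('identityKey'));
      },
      loadSession: function(encodedNumber) {
        return Promise.resolve(textsecure.storage.get('session' + encodedNumber));
      },
      storeSession: function(encodedNumber, record) {
        return Promise.resolve(
          textsecure.storage.put('session' + encodedNumber, record)
        );
      },
    };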