Scholar.Prefs also registers itself as a preferences observer; to trigger actions when certain prefs change, edit the switch statement in its observe() method (sketched below)
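A minimal sketch of that observer hook, assuming the standard nsIPrefBranch2 observer pattern -- the branch name, pref name, and handler body are illustrative, not the actual implementation:

    // Sketch only -- branch name, pref name, and handler are illustrative.
    var prefObserver = {
        prefBranch: Components.classes["@mozilla.org/preferences-service;1"]
            .getService(Components.interfaces.nsIPrefService)
            .getBranch("extensions.scholar."),

        register: function () {
            // nsIPrefBranch2 provides addObserver() for change notifications
            this.prefBranch.QueryInterface(Components.interfaces.nsIPrefBranch2);
            this.prefBranch.addObserver("", this, false);
        },

        // Gecko calls observe() whenever a pref on the branch changes
        observe: function (subject, topic, data) {
            if (topic != "nsPref:changed") {
                return;
            }
            switch (data) {   // data is the changed pref's name
                case "automaticScraperUpdates":
                    // react to the change here (handler is hypothetical)
                    break;
            }
        }
    };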
Updated preferences.js to use Scholar.Prefs
- Added methods getID(idOrName) and getName(idOrName) to Scholar.CreatorTypes and Scholar.ItemTypes to accept either a typeID or a typeName (see the sketch after this list)
- Removed getTypeName() from each and changed references accordingly
- Streamlined both classes to be as similar as possible
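A sketch of the dual-input accessors, assuming each class keeps internal ID/name maps -- the map names and sample data are illustrative:

    // Sketch only -- internal map names and sample data are illustrative.
    var typeAccessors = {
        _typesByID: { 1: 'book', 2: 'journalArticle' },
        _idsByName: { book: 1, journalArticle: 2 },

        // Accepts either a numeric typeID or a string typeName
        getID: function (idOrName) {
            if (typeof idOrName == 'number') {
                return this._typesByID[idOrName] ? idOrName : false;
            }
            return this._idsByName[idOrName] ? this._idsByName[idOrName] : false;
        },

        getName: function (idOrName) {
            var id = this.getID(idOrName);
            return id ? this._typesByID[id] : '';
        }
    };

With data like the above, typeAccessors.getID('book') and typeAccessors.getName(1) resolve the same type from either direction.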
- Make Amazon scraper work with multiple documents
- Fix bugs in processDocuments
- Make Scholar.Ingester.Utilities.getItemArray() accept an array of DOM nodes to search for links, and finally take advantage of the fact that plain objects have no length property (see the sketch after this list)
- Multiple item detection code is now part of the scraperJavaScript, rather than the scrapeDetectCode, and the code to choose which items to add is part of Scholar.Ingester.Utilities, accessible from inside scrapers. The alternative approach would have required one request (or, in the case of JSTOR, three requests) per new item, whereas in some cases (e.g., Voyager) a single request is enough to get all of the items.
- When possible, corporate creators/contributors are categorized with their own RDF types (prefixDummy + "corporateCreator" / prefixDummy + "corporateContributor")
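A sketch of the multiple-item flow from inside a scraperJavaScript. The selectItems() helper name and the processDocuments() signature are assumptions, as are the DOM lookup and regex; the point is one batch of requests for all chosen items rather than one per item:

    // Sketch only -- selectItems(), the processDocuments() signature, and
    // the availability of a 'utilities' object inside scrapers are assumptions.
    var items = utilities.getItemArray(doc, doc.getElementsByTagName('table'),
                                       '^https?://[^/]+/stable/\\d+');
    items = utilities.selectItems(items);   // let the user pick which items to add
    if (!items) {
        return;   // user cancelled the dialog
    }
    var urls = [];
    for (var url in items) {   // plain object: enumerate keys, no .length
        urls.push(url);
    }
    // One pass over all chosen documents instead of one request per item
    utilities.processDocuments(null, null, urls,
        function (newDoc) { /* scrape a single document here */ },
        function () { /* all documents processed */ });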
- Remove extraneous debug code in extensions
- Don't try to display an SQLite error when it's "not an error" (i.e., when the failure is in something other than SQLite; see the sketch after this list)
- Switch to nsIFile instead of nsILocalFile to retrieve the profile directory
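A sketch of the "not an error" guard, using the mozIStorageConnection error accessors -- the surrounding reporting code is illustrative:

    // Sketch only -- the error-reporting shape is illustrative;
    // db is a mozIStorageConnection.
    function formatDBError(db) {
        // lastError is 0 and lastErrorString is "not an error" when the
        // failure happened outside SQLite, so only report genuine DB errors
        if (db.lastError) {
            return ' [ERROR: ' + db.lastErrorString + ' (' + db.lastError + ')]';
        }
        return '';
    }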
Fix in Collection.erase() -- when the DB methods started returning values in their native types, the collection ID became an int rather than a string, so "new Array(this._id)" became a length declaration rather than an element declaration (see the example below)
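The underlying JavaScript behavior, for reference:

    new Array("123").length   // 1   -- one element, the string "123"
    new Array(123).length     // 123 -- a lone integer sets the length instead
    [123].length              // 1   -- literal syntax avoids the ambiguity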
- Ingester lets the callback function save items, rather than saving them itself (sketched below).
- Better handling of multiple items in the API, although no scrapers currently implement this.
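A sketch of the inverted control flow, assuming the ingester hands scraped item data to a caller-supplied callback -- the method and argument names are illustrative:

    // Sketch only -- method and argument names are illustrative.
    // The ingester no longer calls item.save() itself; the caller decides:
    ingester.scrapePage(document, function (items) {
        // items is an array, so multiple results can be handled in one pass
        for (var i = 0; i < items.length; i++) {
            items[i].save();
        }
    });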
Added a separate retry interval so that the extension retries sooner after failures (browser offline, request failure, etc.)
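A sketch of the scheduling logic -- the constant values and function names are assumptions:

    // Sketch only -- constant values and function names are assumptions.
    var REPOSITORY_CHECK_INTERVAL = 86400;   // seconds between routine checks
    var REPOSITORY_RETRY_INTERVAL = 3600;    // retry sooner after a failure

    function scheduleNextCheck(lastCheckSucceeded) {
        var interval = lastCheckSucceeded
            ? REPOSITORY_CHECK_INTERVAL
            : REPOSITORY_RETRY_INTERVAL;
        setTimeout(checkRepository, interval * 1000);   // checkRepository is hypothetical
    }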
Revision 200 -- w00t i am victorious
- Removed localLastUpdated field from scrapers table and renamed centralLastUpdated to lastUpdated; updated scraper queries accordingly
- Added a query in scrapers.sql to update the version table's 'repository' row, preventing immediate downloads of newly installed scrapers
- Get the version property from the extension manager in Scholar.init() and assign it to Scholar.version (sketched below)
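A sketch of the version lookup, using the Firefox 1.5-era extension manager -- the extension GUID here is a placeholder:

    // Sketch only -- the extension GUID is a placeholder.
    var em = Components.classes['@mozilla.org/extensions/manager;1']
                .getService(Components.interfaces.nsIExtensionManager);
    // getItemForID() returns an nsIUpdateItem, whose version attribute
    // mirrors the version in install.rdf
    Scholar.version = em.getItemForID('scholar@example.org').version;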
Scholar.HTTP.doGet(url, onStatus, onDone) and Scholar.HTTP.doPost(url, body, onStatus, onDone) -- onStatus is a callback invoked on non-200 responses, and onDone is invoked with the response body
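Example usage -- the URL, the handlers, and the exact argument shapes passed to each callback are assumptions:

    // Sketch only -- URL, handlers, and callback argument shapes are assumptions.
    Scholar.HTTP.doGet('http://example.com/repo/updated', function (xmlhttp) {
        // onStatus: invoked when the response status is not 200
        Scholar.debug('repository request failed: ' + xmlhttp.status);
    }, function (responseText) {
        // onDone: invoked with the body of a successful response
        processRepositoryResponse(responseText);   // hypothetical handler
    });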
Assigned guids to scrapers, replaced INSERT queries with REPLACE queries, and removed the table DELETE query at the top -- this allows scrapers to be updated without deleting any others that may exist (e.g., ones someone is developing, third-party scrapers, etc.)
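A sketch of the idempotent scraper insert -- the Scholar.DB.query() binding style and the column list are assumptions:

    // Sketch only -- Scholar.DB.query() binding style and the column list
    // are assumptions.
    // REPLACE keys on the scraper's guid, so shipped scrapers are updated
    // in place while unrelated rows (e.g. in-development or third-party
    // scrapers) survive.
    var sql = "REPLACE INTO scrapers (guid, lastUpdated, label, "
            + "scraperJavaScript) VALUES (?, ?, ?, ?)";
    Scholar.DB.query(sql, [guid, lastUpdated, label, scraperJavaScript]);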