- Added methods getID(idOrName) and getName(idOrName) to Scholar.CreatorTypes and Scholar.ItemTypes to take either typeID or typeName
- Removed getTypeName() in each and changed references accordingly
- Streamlined both classes to be as similar as possible
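A minimal sketch of the symmetrical lookup (the 'book' name and the numeric ID are illustrative, not the real values):

    // Either form resolves the type; both classes now share this interface.
    var typeID   = Scholar.ItemTypes.getID('book');    // e.g. 2 (hypothetical)
    var typeName = Scholar.ItemTypes.getName(typeID);  // 'book'
    // Passing an ID to getID() (or a name to getName()) presumably returns
    // it as-is, so callers no longer care which form they were handed.
    var sameID   = Scholar.ItemTypes.getID(typeID);    // still 2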
- Make Amazon scraper work with multiple documents
- Fix bugs in processDocuments
- Make Scholar.Ingester.Utilities.getItemArray() accept an array of DOM nodes to search for links, and finally take advantage of the fact that plain objects have no length property
- Multiple-item detection code is now part of the scraperJavaScript rather than the scrapeDetectCode, and the code to choose which items to add is part of Scholar.Ingester.Utilities, accessible from inside scrapers (see the sketch below). The alternative approach would result in one request (or, in the case of JSTOR, three requests) per new item, while in some cases (e.g. Voyager) only one request is necessary to get all of the items.
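A hedged sketch of the flow from inside a scraper's scraperJavaScript -- getItemArray() and the chooser are described above, but selectItems() is an assumed name, the node variables are placeholders, and the exact signatures may differ:

    var utilities = Scholar.Ingester.Utilities;

    // Collect candidate links from one or more DOM nodes; the result is a
    // plain object keyed by URL, so there is no length property to collide
    // with the data.
    var items = utilities.getItemArray(doc, [tableNode, listNode]);

    // One chooser call for all detected items, instead of one request per
    // item ("selectItems" is an assumed name for the chooser).
    var chosen = utilities.selectItems(items);
    for (var url in chosen) {
        // scrape each chosen URL and hand the items to the callback
    }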
Moved the capture icon into the URL bar (invisible until you visit a scrapable page; currently it just displays a book icon, but it should change to the correct item types in the future)
- When possible, corporate creators/contributors are categorized with their own RDF types (prefixDummy + "corporateCreator"/"corporateContributor")
- Remove extraneous debug code in extensions
- Don't try to display an SQLite error when it's "not an error" (i.e. when the error is in something else)
- Switch to nsIFile instead of nsILocalFile to retrieve the profile directory
Fix in Collection.erase() -- when the DB methods started returning values in their native type, the collection id became an int rather than a string and "new Array(this._id)" became a length declaration rather than an elements declaration
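The underlying JavaScript pitfall, for reference -- the one-argument Array constructor is overloaded on numbers:

    new Array("1234").length;  // 1    -- elements declaration (string ID)
    new Array(1234).length;    // 1234 -- length declaration (native int)

    // An array literal sidesteps the ambiguity regardless of type:
    var ids = [this._id];      // always one element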
- Ingester lets callback function save items, rather than saving them itself.
- Better handling of multiple items in API, although no scrapers currently implement this.
- Fix item modify notify() on 1st row.
- Ensure that new items are visible when added.
- New functionality for creating new items (prevents a lot of problems).
- Number-based fields display properly.
- Fixed bug when creating and saving the first notes on an item.
- New notes won't save empty.
[fix] You shouldn't lose your changes if you select another item in the middle of editing a field.
[fix] The dropdown menu to select notes doesn't steal the focus
[interface] Multi-notes functionality (waiting on data layer)
[docs] Major internal documentation written for itemTreeView.js and collectionTreeView.js (this actually does work ;-))
Added a separate retry interval so that the extension retries sooner after failures (browser offline, request failure, etc.)
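The scheduling logic amounts to something like this (the constant and function names are assumptions):

    // After a failed check, come back on the shorter retry interval
    // instead of waiting out the normal check interval.
    var interval = succeeded
        ? REPOSITORY_CHECK_INTERVAL
        : REPOSITORY_RETRY_INTERVAL;
    setTimeout(checkRepository, interval * 1000);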
Revision 200 -- w00t i am victorious
- Removed localLastUpdated field from scrapers table and renamed centralLastUpdated to lastUpdated; updated scraper queries accordingly
- Added query in scrapers.sql to update version table 'repository' row to prevent immediate downloads of newly installed scrapers
- Get version property from extension manager in Scholar.init() and assign to Scholar.version
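A sketch of the version lookup against the Firefox 1.5/2.0-era extension manager (the GUID is a placeholder, not the real one):

    var em = Components.classes["@mozilla.org/extensions/manager;1"]
        .getService(Components.interfaces.nsIExtensionManager);
    Scholar.version = em.getItemForID("scholar@example.org").version;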
[Drag and Drop] in Items Tree: You can drag items from one window into another, directly into the Items list.
[Editing] Close the edit box and save when you click on its label
The collections list does not resize randomly now.
The pane on the right stays open all the time - even when 0/multiple items are selected. This is to avoid frequent resizing of the items pane.
Temporarily, if the first "word" of a field's value is more than 29 characters long, the field is set to crop (see the sketch below). This is for long URLs, etc.
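Roughly, the check amounts to this (the crop setter and element are hypothetical):

    // If the first whitespace-delimited token is too long to wrap
    // sensibly (a long URL, say), crop it instead.
    var firstWord = String(value).split(/\s+/)[0];
    if (firstWord.length > 29) {
        label.setAttribute('crop', 'end');  // assumed XUL element
    }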
Scholar.HTTP.doGet(url, onStatus, onDone) and Scholar.HTTP.doPost(url, body, onStatus, onDone) -- onStatus is a callback invoked on non-200 responses, and onDone is a callback that receives the response body
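Usage might look like this (what exactly the callbacks receive, and Scholar.debug, are assumed here):

    Scholar.HTTP.doGet('http://example.org/updates',
        function (xmlhttp) {        // onStatus: non-200 response
            Scholar.debug('request failed: ' + xmlhttp.status);
        },
        function (responseText) {   // onDone: response body
            Scholar.debug('got ' + responseText.length + ' bytes');
        });

    // doPost takes the request body as its second argument:
    Scholar.HTTP.doPost('http://example.org/submit', 'a=1&b=2',
        onStatus, onDone);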
Assigned guids to scrapers, replaced INSERT queries with REPLACE queries, and removed table DELETE query at top -- this will allow scrapers to be updated without deleting any others that may exist (e.g. that someone is developing, third-party, etc.)
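The effect of the switch, sketched against SQLite's REPLACE semantics (Scholar.DB.query, the column list, and the GUID are all assumptions for the sketch):

    // INSERT after a table-wide DELETE wiped third-party scrapers;
    // REPLACE keyed on a stable GUID updates a known scraper in place
    // and leaves any others untouched.
    Scholar.DB.query(
        "REPLACE INTO scrapers (scraperID, lastUpdated, label) "
        + "VALUES ('aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee', "
        + "'2006-06-26 00:00:00', 'Amazon')"
    );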
[style] Better add/remove Creator buttons.
[fix] The sorting should not randomly switch the order of two items with the same sort value (e.g., Barnes vs. Barnes).
[fix] The browser should not open with two sorted columns.
scholar.properties addition is just to keep metadata panel from breaking and could probably be removed once interface is hard-coded to not display the notes field there
- Send a 'delete' rather than a 'remove' to itemViews when items are actually deleted (removals from collections still get 'remove' unless it's part of a collection erase)
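A sketch of what an itemViews observer now distinguishes (the observer shape is inferred from the notify() naming above):

    var observer = {
        notify: function (event, type, ids) {
            if (type != 'item') {
                return;
            }
            if (event == 'delete') {
                // the items themselves were erased
            }
            else if (event == 'remove') {
                // items were taken out of a collection
                // (collection erase is handled separately, per above)
            }
        }
    };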
[Drag and Drop] Now also supported:
1) Items into collections.
2) Collections into collections.
Dan S, please check this out; I am getting some exceptions from the data access portion.
[interface] Temporarily, "metadata" pane on the right is not resizable.
Added creatorTypeID to the PK in itemCreators, because a creator could conceivably have two roles on an item
Throw an error on an attempt to save() an item with two identical creator/creatorTypeID combinations. Assuming this shouldn't be allowed (i.e. we don't have a four-column PK), there's really no way to handle it elegantly on my end -- interface code can use the new method Item.creatorExists(firstName, lastName, creatorTypeID, skipIndex), with skipIndex set to the current index, to make sure this isn't done (see the sketch below)
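Interface code might use it like this (the surrounding names are illustrative):

    // Before writing an edited creator back at position `index`, make
    // sure the same name/role pair isn't already on the item elsewhere.
    if (!item.creatorExists(firstName, lastName, creatorTypeID, index)) {
        item.setCreator(index, firstName, lastName, creatorTypeID);
        item.save();
    }
    else {
        // duplicate creator/creatorTypeID combination -- save() would throw
    }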
Integrated the scrapers with the schema update mechanism. Changed a bunch of schema methods to handle both schema.sql and scrapers.sql (or others, if need be) and altered the version table to track multiple versions for different files. This theoretically should detect that the version table has changed and force a reinitialization of the DB -- let me know if there are problems.
- Broke schema functions into a separate object and got rid of the DB_VERSION config constant in favor of a toVersion variable in the _migrateSchema command (which isn't technically necessary either, since the version number at the top of schema.sql is now always compared to the DB version at startup, but will help reduce the chance that someone will update the schema file without adding migration steps; see the sketch after this list)
- Removed Amazon scraper from schema.sql, as it will be loaded with the rest of the scrapers
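A condensed sketch of the startup comparison described above (the helper names follow the notes, but their exact shapes are assumed):

    // Compare the version at the top of each .sql file with that file's
    // row in the version table, and migrate only when they differ.
    var fileVersion = Scholar.Schema._getSchemaSQLVersion('schema');
    var dbVersion   = Scholar.Schema._getDBVersion('schema');
    if (fileVersion > dbVersion) {
        Scholar.Schema._migrateSchema(fileVersion);  // the toVersion variable
    }
    // scrapers.sql gets the same treatment under its own version row,
    // so scrapers can update without reinitializing the whole DB.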