- Added 'note' item type
- Updated API to support independent note creation
Notes are, more or less, just regular items, with an item type of 1. They're created through Scholar.Notes.add(text, sourceItemID), which returns the itemID of the new note. sourceItemID is optional--if left out, an independent note will be created. (There's currently nothing stopping you from doing getNewItemByType(1) yourself, but the note would be contentless and broken, so you shouldn't do that.) Note data could've been stuffed into itemData, but I kept it separate in itemNotes to keep metadata searching faster and to keep things cleaner.
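For illustration, a minimal sketch of the two calls (the source itemID is hypothetical):

    // Child note attached to an existing source item
    var noteID = Scholar.Notes.add('Finish chapter 3 summary', 1234);

    // Independent note -- just omit sourceItemID
    var looseNoteID = Scholar.Notes.add('A standalone note');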
Method calls that can be called on all items:
isNote() (same as testing for itemTypeID 1)
Method calls that can be called on source items only:
numNotes()
getNotes() (array of note itemIDs for a source)
Method calls that can be called on note items only:
updateNote(text)
setNoteSource(sourceItemID) (for changing the source -- pass empty or false to make the note independent, which is currently what happens when you delete a note; this will get an option with #91)
getNote() (note content)
getNoteSource() (sourceItemID of a note)
Calling the above methods on the wrong item types will throw an error.
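A hedged sketch of how calling code might branch on the two kinds of items, assuming they're fetched with something like Scholar.Items.get():

    var item = Scholar.Items.get(noteID); // retrieval method assumed
    if (item.isNote()){
        var text = item.getNote();
        item.updateNote(text + ' [edited]');
        var sourceID = item.getNoteSource(); // source itemID, if any
    }
    else {
        // Source-item methods only
        var count = item.numNotes();
        var noteIDs = item.getNotes();
    }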
*** This will break note creation/display until David updates interface code. ***
- Added detection for network failure -- debug message is output and noNetwork property is added to the xmlhttp object
- Removed the onStatus callback from HTTP.doGet and HTTP.doPost -- it was copied over from the Piggy Bank API, but the onDone callback has to handle errors anyway, so it can just check the status code itself if it cares to distinguish non-200 status codes from other errors (see the sketch below)
- Added error handling for empty responseXML to Schema._updateScrapersRemoteCallback
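A sketch of the resulting pattern in an onDone callback (the URL is a placeholder):

    HTTP.doGet('http://example.com/repo/updated', function(xmlhttp){
        if (xmlhttp.noNetwork){
            // network failure -- the request never completed
            return;
        }
        if (xmlhttp.status != 200){
            // only inspect this if non-200 codes matter to you
            return;
        }
        // success -- use xmlhttp.responseText / xmlhttp.responseXML
    });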
- Renamed SCHOLAR_CONFIG['REPOSITORY_CHECK_RETRY'] to SCHOLAR_CONFIG['REPOSITORY_RETRY_INTERVAL']
Added a separate retry interval so that the extension retries sooner after failures (browser offline, request failure, etc.)
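Roughly how the two intervals would be applied when scheduling the next check -- the non-retry constant's name, the failure flag, and the scheduling helper are all assumptions:

    var delay = lastCheckFailed
        ? SCHOLAR_CONFIG['REPOSITORY_RETRY_INTERVAL']   // retry sooner
        : SCHOLAR_CONFIG['REPOSITORY_CHECK_INTERVAL'];  // name assumed
    setTimeout(_checkRepository, delay * 1000);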
Revision 200 -- w00t i am victorious
- Removed localLastUpdated field from scrapers table and renamed centralLastUpdated to lastUpdated; updated scraper queries accordingly
- Added query in scrapers.sql to update version table 'repository' row to prevent immediate downloads of newly installed scrapers
- Get version property from extension manager in Scholar.init() and assign to Scholar.version
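Getting the version presumably goes through the old extension manager service, along these lines (the extension ID shown is a placeholder):

    // In Scholar.init()
    var em = Components.classes['@mozilla.org/extensions/manager;1']
        .getService(Components.interfaces.nsIExtensionManager);
    this.version = em.getItemForID('scholar@example.org').version;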
Assigned guids to scrapers, replaced INSERT queries with REPLACE queries, and removed table DELETE query at top -- this will allow scrapers to be updated without deleting any others that may exist (e.g. that someone is developing, third-party, etc.)
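In scrapers.sql these are raw SQL statements; as a sketch of the equivalent through the DB layer (Scholar.DB.query, the parameter variables, and the column set are all assumptions):

    Scholar.DB.query(
        "REPLACE INTO scrapers VALUES (?, ?, ?, ?, ?)",
        [guid, lastUpdated, label, creator, scraperCode]
    );
    // With guid as the primary key, REPLACE overwrites a shipped scraper's
    // row in place and leaves any other rows -- e.g. a scraper someone is
    // developing locally -- untouched.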
The scholar.properties addition is just to keep the metadata panel from breaking; it could probably be removed once the interface is hard-coded not to display the notes field there
Added creatorTypeID to the PK in itemCreators, because a creator could conceivably have two roles on an item
Throw an error on an attempt to save() an item with two identical creator/creatorTypeID combinations, since, assuming this shouldn't be allowed (i.e. we don't have a four-column PK), there's really no elegant way to handle it on my end. Interface code can use the new method Item.creatorExists(firstName, lastName, creatorTypeID, skipIndex), with skipIndex set to the current index, to make sure this doesn't happen.
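Interface code would call it before writing the creator, something like this (setCreator and its signature are assumed):

    // Editing creator slot i: reject the edit if the same name/role
    // pair already exists at some other index on the item
    if (item.creatorExists(firstName, lastName, creatorTypeID, i)){
        alert('That creator already has this role on the item.');
    }
    else {
        item.setCreator(i, firstName, lastName, creatorTypeID);
        item.save(); // would otherwise throw on the duplicate
    }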
Integrated the scrapers with the schema update mechanism. Changed a bunch of schema methods to handle both schema.sql and scrapers.sql (or others, if need be) and altered the version table to track multiple versions for different files. This theoretically should detect that the version table has changed and force a reinitialization of the DB--let me know if there are problems.
- Broke schema functions into a separate object and got rid of the DB_VERSION config constant in favor of a toVersion variable in the _migrateSchema command (which isn't technically necessary either, since the version number at the top of schema.sql is now always compared to the DB version at startup, but which should reduce the chance that someone will update the schema file without adding migration steps)
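Conceptually, the startup comparison now happens per file, along these lines (both helper names are hypothetical):

    var fileVersion = _getSchemaSQLVersion('schema'); // from the top of schema.sql
    var dbVersion = _getDBVersion('schema');          // row in the version table
    if (fileVersion > dbVersion){
        _migrateSchema(fileVersion); // toVersion now derived from the file
    }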
- Removed Amazon scraper from schema.sql, as it will be loaded with the rest of the scrapers