add Scholar.Cite and Scholar.CSL for formatting items into a bibliography using CSL. Unfortunately, the output is not very good at the moment, and the format likely needs some changes, but I'm working with a few other people on getting it to that point.
Fixes #115, windows should open with the Scholar pane closed.
Fixes #105, search box loses focus after search starts.
Retooled the interface a bit and removed the top toolbar. The close and fullscreen buttons are now located to the right of the items toolbar, and the item pane can no longer be collapsed.
closes #100, migrate ingester to Scholar.Translate
closes #88, migrate scrapers away from RDF
closes #9, pull out LC subject heading tags
references #87, add fromArray() and toArray() methods to item objects
API changes:
all translation (import/export/web) now goes through Scholar.Translate
all Scholar-specific functions in scrapers start with "Scholar." rather than the jumbled-up, un-namespaced Piggy Bank confusion
scrapers no longer specify items through RDF (the beginning of an item.fromArray()-like function exists in Scholar.Translate.prototype._itemDone())
scrapers can be any combination of import, export, and web (the type is the sum of 1/2/4, respectively; see the sketch after this list)
scrapers now contain functions (doImport, doExport, doWeb) rather than loose code
scrapers can call functions in other scrapers, or simply invoke another translator to do the translation itself
export accesses items item-by-item, rather than accepting a huge array of items
MARC functions are now in the MARC import translator, and accessed by the web translators
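To make the type arithmetic and the item-by-item export concrete, here is a minimal sketch. Only the 1/2/4 values and the doExport() name come from the list above; the constant names and the item accessor are assumptions for illustration.

    // Translator types combine as a bitfield: import = 1, export = 2, web = 4.
    // (These constant names are hypothetical.)
    var TYPE_IMPORT = 1;
    var TYPE_EXPORT = 2;
    var TYPE_WEB = 4;

    // A translator that handles both import and web scraping:
    var translatorType = TYPE_IMPORT | TYPE_WEB;   // 5

    if (translatorType & TYPE_EXPORT) {
        // only true for translators that define doExport()
    }

    // Export now sees items one at a time rather than a huge array.
    // The accessor name below is an assumption:
    function doExport() {
        var item;
        while (item = Scholar.nextItem()) {
            // serialize one item
        }
    }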
new features:
import now works
rudimentary RDF (unqualified dublin core only), RIS, and MARC import translators are implemented (although they are a little picky with respect to file extensions at the moment)
items appear as they are scraped
MARC import translator pulls out tags, although this seems to slow things down
no icon appears next to the URL when Scholar hasn't detected metadata, since this seemed somewhat confusing
Apologies for the size of this diff. I figured if I was going to rewrite the API, I might as well do it all at once and get everything working right.
caveats:
- it's not human-readable. Mozilla doesn't nest blank nodes, so everything's scattered throughout the file. It would be relatively easy to do post-processing with E4X or even regexps to correct this.
- there's no generic callNumber field, so all callNumbers are encoded as LCC.
adds container creation routines to dataMode rdf
changes Dublin Core export to Unqualified Dublin Core, and removes DC Terms qualifiers
This custom credits panel gives us more flexibility. Also, there is an About link which is certainly easier to find than right-clicking on the extension in the Add-ons window.
adds export of seeAlso info and project hierarchy to RDF. For now, this is embedded in the modsCollection root element.
uses nodeIDs for Dublin Core RDF.
adjusts the Google Books translator to work with the latest revision of the site
renames the MODS translator to just MODS, because "Metadata Object Description Schema (MODS)" was too long for the export dialog
- fixes a bug that could result in the Scrape Progress chrome thingy sticking around forever
- makes the chrome thingy disappear when the URL changes or when tabs are switched
Item.addSeeAlso(itemID)
Item.removeSeeAlso(itemID)
Item.getSeeAlso() -- array of linked items
Relationships are mutual
Closes #82, Notes: see also table
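A quick usage sketch of the new methods; Scholar.Items.get() is an assumed lookup helper, while the See Also methods are exactly the ones listed above.

    var item = Scholar.Items.get(1);   // assumed lookup helper

    item.addSeeAlso(2);      // links item 1 to item 2; since relationships
                             // are mutual, item 2 now links back as well

    var linked = item.getSeeAlso();    // array of linked items

    item.removeSeeAlso(2);   // removes the link from both sides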
- changes scrapers table to translators table; all import/export/web translators now belong in this table
- adds Scholar.Translate to handle translation issues. eventually, Scholar.Ingester.Document will become part of this interface
- adds Scholar_File_Interface (in fileInterface.js) to handle UI for export and eventually import. (David, when you have time, please connect Scholar_File_Interface.exportFile to a button.)
- adds an export translator for MODS. all of our metadata, but not our hierarchy (projects, etc.) translates directly and unambiguously into valid MODS. eventually, we can use RDF or another format to handle hierarchy.
- adds utilities.getVersion() and utilities.inArray() for simplified scraper coding (see the sketch after this list)
- fixes minor interface issues with the nifty chrome scraping status window
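For reference, a sketch of how a scraper might call the new utility functions; the inArray() argument order and the getVersion() return value are assumptions.

    // Membership test, handy for branching on item types inside a scraper.
    // (Needle-first argument order assumed.)
    var itemType = "book";
    if (utilities.inArray(itemType, ["book", "bookSection"])) {
        // handle book-like items
    }

    // Presumably returns the Scholar version string, e.g. for embedding
    // in exported metadata.
    var version = utilities.getVersion();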
New methods:
Item.addTag(tag)
Item.getTags() -- array of tagIDs
Item.removeTag(tagID)
Tags.getName(tagID)
Tags.getID(tag)
Tags.add(text) -- returns tagID of new tag
Tags.purge() -- purge obsolete tags
The last two are for use by Item.addTag() and Item.removeTag(), respectively, and probably don't need to be used elsewhere.
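A usage sketch tying the tag methods together; Scholar.Items.get() is an assumed lookup helper, and the comment about purging reflects the note above rather than confirmed behavior.

    var item = Scholar.Items.get(2);   // assumed lookup helper

    item.addTag("communication");      // creates the tag via Tags.add()
                                       // if it doesn't exist yet

    var tagIDs = item.getTags();       // array of tagIDs
    for (var i = 0; i < tagIDs.length; i++) {
        var name = Tags.getName(tagIDs[i]);
    }

    item.removeTag(Tags.getID("communication"));
    // Item.removeTag() presumably triggers Tags.purge() to clean up
    // tags no longer attached to anything.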
Item.save() changed to not call notify() if a transaction is in progress, which is totally going to trip someone up at some point, and probably me, but it's better than a new parameter
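A sketch of the interaction described above; the transaction method names on Scholar.DB are assumptions.

    var item1 = Scholar.Items.get(1);  // assumed lookup helper
    var item2 = Scholar.Items.get(2);

    Scholar.DB.beginTransaction();     // assumed method name

    item1.save();   // notify() is skipped while the transaction is open
    item2.save();   // ditto

    Scholar.DB.commitTransaction();    // assumed method name
    // After the commit, notifying observers is presumably up to the caller.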
Item.toArray() implemented -- builds up a multidimensional array of item data, converting all type ids to their textual equivalents -- currently has empty placeholder arrays for tags and seeAlso
Sample source output:
'itemID' => "2"
'itemType' => "book"
'title' => "Computer-Mediated Communication: Human-to-Human Communication Across the Internet"
'dateAdded' => "2006-03-12 05:25:50"
'dateModified' => "2006-03-12 05:25:50"
'publisher' => "Allyn & Bacon Publishers"
'year' => "2002"
'pages' => "347"
'ISBN' => "0-205-32145-3"
'creators' ...
'0' ...
'firstName' => "Susan B."
'lastName' => "Barnes"
'creatorType' => "author"
'notes' ...
'0' ...
'note' => "text"
'tags' ...
'seeAlso' ...
'1' ...
'note' => "text"
'tags' ...
'seeAlso' ...
'tags' ...
'seeAlso' ...
Sample note output:
'itemID' => "17"
'itemType' => "note"
'dateAdded' => "2006-06-27 04:21:16"
'dateModified' => "2006-06-27 04:21:16"
'note' => "text"
'sourceItemID' => "2"
'tags' ...
'seeAlso' ...
sourceItemID won't exist if it's an independent note.
We'll use the same format in reverse for fromArray, so Simon, let me know if you need more data (preserving type ids, etc) or want anything in a different form.
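For reference, the sample source output above as the equivalent JavaScript object literal, which is the shape fromArray() would accept; fromArray() itself is not implemented yet, so the round trip at the end is hypothetical.

    var itemData = {
        itemID: "2",
        itemType: "book",
        title: "Computer-Mediated Communication: Human-to-Human Communication Across the Internet",
        dateAdded: "2006-03-12 05:25:50",
        dateModified: "2006-03-12 05:25:50",
        publisher: "Allyn & Bacon Publishers",
        year: "2002",
        pages: "347",
        ISBN: "0-205-32145-3",
        creators: [
            { firstName: "Susan B.", lastName: "Barnes", creatorType: "author" }
        ],
        notes: [
            { note: "text", tags: [], seeAlso: [] },
            { note: "text", tags: [], seeAlso: [] }
        ],
        tags: [],
        seeAlso: []
    };

    // Hypothetical round trip once fromArray() exists:
    // newItem.fromArray(existingItem.toArray());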