Scholar should now attempt to process citation information from EndNote download links (MIME types application/x-endnote-refer and application/x-research-info-systems). In situations where Scholar cannot process the information, a standard helper app dialog will appear. This behavior is controlled by the preference extensions.scholar.parseEndNoteMIMETypes.
Implemented advanced/saved search architecture -- to use, you create a new search with var search = new Scholar.Search(), add conditions to it with addCondition(condition, operator, value), and run it with search(). (A complete example follows the API list below.) The standard conditions, with their respective operators, can be retrieved with Scholar.SearchConditions.getStandardConditions(). The others are special search flags and can be specified as follows (condition, operator, value):
'context', null, collectionIDToSearchWithin
'recursive', 'true'|'false' (as strings! -- defaults to false when not specified, so just omit the condition entirely if recursion isn't wanted), null
'joinMode', 'any'|'all', null
For standard conditions, currently only 'title' and the itemData fields are supported -- more coming soon.
Localized strings created for the standard search operators
API:
search.setName(name) -- must be called before save() on new searches
search.load(savedSearchID)
search.save() -- saves search to DB and returns a savedSearchID
search.addCondition(condition, operator, value)
search.updateCondition(searchConditionID, condition, operator, value)
search.removeCondition(searchConditionID)
search.getSearchCondition(searchConditionID) -- returns a specific search condition used in the search
search.getSearchConditions() -- returns search conditions used in the search
search.search() -- runs search and returns an array of item ids for results
search.getSQL() -- will be used by Dan for search-within-search
Scholar.Searches.getAll() -- returns an array of saved searches with 'id' and 'name', in alphabetical order
Scholar.Searches.erase(savedSearchID) -- deletes a given saved search from the DB
Scholar.SearchConditions.get(condition) -- get condition data (operators, etc.)
Scholar.SearchConditions.getStandardConditions() -- retrieve conditions for use in drop-down menu (as opposed to special search flags)
Scholar.SearchConditions.hasOperator() -- used by Dan for error-checking
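A sketch tying the API together (the 'contains' operator name is an assumption here -- the standard operators aren't listed above, only that localized strings exist for them):

	var collectionID = 1;                                // some existing collection's ID
	var search = new Scholar.Search();
	search.addCondition('joinMode', 'all', null);        // match all conditions
	search.addCondition('context', null, collectionID);  // search within the collection
	search.addCondition('recursive', 'true', null);      // include subcollections (as a string!)
	search.addCondition('title', 'contains', 'empire');  // standard condition (assumed operator)
	var itemIDs = search.search();                       // array of item ids
	search.setName('Empire titles');                     // must precede save() on new searches
	var savedSearchID = search.save();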
closes #76, implement extensible search/retrieval architecture for obtaining metadata
OpenURL COinS lookup is now implemented using the real search architecture. At the moment, it works with Open WorldCat for books, CrossRef for journal articles (provided the COinS object contains a DOI or an ISSN), and PubMed when a PMID is available.
OpenURL lookup now works for books. This means that all that's necessary to add scrapable book metadata to a page is an ISBN, as shown below:
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.isbn=1579550088"></span>
Also, we can now scrape Open WorldCat and Wikipedia Book Sources pages with no specialized code involved.
I'm still looking for a better way of looking up journal article metadata. It's currently implemented with CrossRef, but CrossRef simply will not work without a DOI, and its records are also incomplete (they hold only the last name of the first author).
Scholar.OpenURL.resolve(item) returns the URL that retrieves an item from the user's OpenURL resolver. This means we can implement a "find in my library" feature.
Scholar.OpenURL.discoverResolvers() returns a list of available resolvers for the user's current location (by IP address).
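A minimal sketch of the "find in my library" flow described above, assuming item is an existing Scholar.Item and the user has a resolver configured:

	var url = Scholar.OpenURL.resolve(item);              // item -> URL at the user's resolver
	if (url) {
	    window.open(url);                                 // hand the user off to their library
	}
	var resolvers = Scholar.OpenURL.discoverResolvers();  // candidate resolvers for the current IP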
closes #163, make translator API allow creator types besides author
Import and export in the multi-ontology RDF format should now work properly. Collections, notes, and See Also relations are all preserved. More extensive testing will be necessary later.
Fixed ignoreCase logic (and also set all but CharacterSets to false, since there's no reason for them to be true)
Also made CachedTypes.getID() and getName() return false and '', respectively, on unknown types rather than letting them hit the error (there's still the 'invalid * type' debug message)
- eliminates "unresponsive script" message on import/export
I tried to make a progress bar that actually provides useful information, but XUL interface updates are performed asynchronously and thus don't actually happen while the import/export operation is running. The code is there, disabled, in case a solution to this issue turns up, but I searched and couldn't find one.
Temporarily added in a check of the backup file on startup, since I'm not entirely convinced that the backup mechanism on shutdown couldn't create a corrupt file under certain conditions
If you run with debug output on and notice the "Backup file was corrupt" message, let me know.
The Scholar database is backed up on browser close. On startup, if the main database is damaged, the extension saves a copy of the damaged file and tries to restore from the last automatic backup. If it fails, it creates a new database file.
New methods:
Scholar.getScholarDatabase(string [ext])
Scholar.backupDatabase()
Scholar.moveToUnique(file, newFile) -- find a unique filename using the leafName of newFile as the suggested name (using the built-in Mozilla functionality) and move the file there
Scholar.Date.getFileDateString(file)
Scholar.Date.getFileTimeString(file)
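A loose sketch of how these might combine when setting aside a damaged database on startup (the 'damaged' extension argument is an assumed use of [ext], not documented behavior):

	Scholar.backupDatabase();                             // normally run on browser close
	var db = Scholar.getScholarDatabase();                // the main database file
	var damaged = Scholar.getScholarDatabase('damaged');  // same base name, assumed alternate extension
	Scholar.moveToUnique(db, damaged);                    // move it aside under a unique leafName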
Closes #118, add "translator" creator type
Closes #122, add DOI and abbreviated journal title fields
Addresses #45, reorder item fields -- source/rights moved down to bottom; date fields not yet moved
Notes now return the first 80 characters of the note content as the title
Also changed setField() to use the loadIn parameter for primary fields so it can be used instead of this._data without affecting _changedItems
closes #4, Make printable version
- moves functions for creating and deleting hidden browser objects to scholar.js (from ingester.js), since these are necessary for printing as well
- allows saving the bibliography as HTML or printing it. Style support is not yet complete (pending finalization of the 0.9 version of the CSL specification).
Not finished, but enough to give David something to work with
No BLOBs -- just linking/importing of files and loaded documents
New Scholar.Item methods (a usage sketch follows the Scholar.Files list below):
incrementFileCount() (used internally)
decrementFileCount() (used internally)
isFile()
numFiles()
getFile() -- returns nsILocalFile or false if associated file doesn't exist (note: always returns false for items with LINK_MODE_LINKED_URL, since they have no files -- use getFileURL() instead)
getFileURL() -- returns URL string
getFileLinkMode() -- compare to Scholar.Files.LINK_MODE_* constants: LINKED_FILE, IMPORTED_FILE, LINKED_URL, IMPORTED_URL
getFileMimeType() -- mime type of file (e.g. text/plain)
getFileCharset() -- charsetID of file
getFiles() -- array of file itemIDs this item is a source for
New Scholar.Files methods:
importFromFile(nsIFile file [, int sourceItemID])
linkFromFile(nsIFile file [, int sourceItemID])
importFromDocument(nsIDOMDocument document [, int sourceItemID])
linkFromDocument(nsIDOMDocument document [, int sourceItemID])
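A minimal import-then-inspect sketch (Scholar.Items.get and the itemID return value from importFromFile are assumptions, not documented above):

	var fileItemID = Scholar.Files.importFromFile(file, sourceItemID);  // assumed return value
	var fileItem = Scholar.Items.get(fileItemID);                       // assumed lookup call
	if (fileItem.isFile()) {
	    if (fileItem.getFileLinkMode() == Scholar.Files.LINK_MODE_LINKED_URL) {
	        var url = fileItem.getFileURL();       // linked URLs have no local file
	    } else {
	        var localFile = fileItem.getFile();    // nsILocalFile, or false if the file is gone
	    }
	}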
New class Scholar.FileTypes -- partially implemented, not yet used
New class Scholar.CharacterSets -- same as other *Types classes:
getID(idOrName)
getName(idOrName)
getTypes() (aliased to getAll(), which I'll probably change the others to as well)
Charsets table with all official character sets (copied from Mozilla source)
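A quick sketch of the lookups (the 'utf-8' name is assumed to match an entry in the charsets table; the false/'' returns for unknown values follow from the CachedTypes change above):

	var charsetID = Scholar.CharacterSets.getID('utf-8');        // name -> id
	var charsetName = Scholar.CharacterSets.getName(charsetID);  // id -> name
	var all = Scholar.CharacterSets.getTypes();                  // aliased to getAll()
	Scholar.CharacterSets.getID('no-such-charset');              // false, plus a debug message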
Renamed Item.setNoteSource() to setSource() and Item.getNoteSource() to getSource() and adjusted to handle both notes and files
Add Scholar.Cite and Scholar.CSL for parsing items into a bibliography using CSL. Unfortunately, the output is not very good at the moment, and the format likely needs some changes, but I'm working with a few other people on getting it there.
closes #100, migrate ingester to Scholar.Translate
closes #88, migrate scrapers away from RDF
closes #9, pull out LC subject heading tags
references #87, add fromArray() and toArray() methods to item objects
API changes:
all translation (import/export/web) now goes through Scholar.Translate
all Scholar-specific functions in scrapers start with "Scholar." rather than the jumbled-up, un-namespaced Piggy Bank confusion
scrapers no longer specify items through RDF (the beginning of an item.fromArray()-like function exists in Scholar.Translate.prototype._itemDone())
scrapers can be any combination of import, export, and web (type is the sum of 1/2/4 respectively -- e.g., an import + web scraper has type 5)
scrapers now contain functions (doImport, doExport, doWeb) rather than loose code
scrapers can call functions in other scrapers, or simply hand translation off to another scraper entirely
export accesses items item-by-item, rather than accepting a huge array of items
MARC functions are now in the MARC import translator, and accessed by the web translators
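A rough skeleton of a scraper under the new API (the Scholar.Item constructor and complete() call are assumptions based on the _itemDone() mechanism mentioned above, not documented signatures):

	// type 5 = import (1) + web (4)
	function doWeb(doc, url) {
	    var item = new Scholar.Item();   // assumed item-creation call
	    item.title = doc.title;
	    item.complete();                 // assumed: hands the item to _itemDone()
	}

	function doImport() {
	    // read the input and create items the same way
	}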
new features:
import now works
rudimentary RDF (unqualified dublin core only), RIS, and MARC import translators are implemented (although they are a little picky with respect to file extensions at the moment)
items appear as they are scraped
MARC import translator pulls out tags, although this seems to slow things down
no icon appears next to the URL when Scholar hasn't detected metadata, since showing one seemed somewhat confusing
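And a loosely hedged sketch of driving an import through Scholar.Translate (the method names here are assumptions; only the class name appears above):

	var translate = new Scholar.Translate('import');  // assumed constructor argument
	translate.setLocation(file);                      // assumed: point it at the file to import
	translate.translate();                            // assumed: runs the matching import translator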
Apologies for the size of this diff. I figured that if I was going to rewrite the API, I might as well do it all at once and get everything working right.
caveats:
- it's not human-readable. Mozilla doesn't nest blank nodes, so everything's scattered throughout the file. It would be relatively easy to do post-processing with E4X or even regexps to correct this.
- there's no generic callNumber field, so all call numbers are encoded as LCC.
adds container creation routines to dataMode rdf
changes Dublin Core export to Unqualified Dublin Core, and removes DC Terms qualifiers
adds export of seeAlso info and project hierarchy to RDF. For now, this is embedded in the modsCollection root element.
uses nodeIDs for Dublin Core RDF.
adjusts the Google Books translator to work with the latest revision of the site
renames the MODS translator to just MODS, because "Metadata Object Description Schema (MODS)" was too long for the export dialog