Commit graph

64 commits

Author SHA1 Message Date
Dan Stillman
d67d96c321 Closes #7, Add advanced search functionality to data layer
Implemented advanced/saved search architecture -- to use, create a new search with var search = new Scholar.Search(), add conditions to it with addCondition(condition, operator, value), and run it with search.search() (a usage sketch follows the API list below). The standard conditions with their respective operators can be retrieved with Scholar.SearchConditions.getStandardConditions(). Other conditions are special search flags and are specified as follows (condition, operator, value):

'context', null, collectionIDToSearchWithin
'recursive', 'true'|'false' (as strings! -- defaults to false if not specified, so it can simply be omitted if not wanted), null
'joinMode', 'any'|'all', null

For standard conditions, currently only 'title' and the itemData fields are supported -- more coming soon.

Localized strings created for the standard search operators


API:

search.setName(name) -- must be called before save() on new searches
search.load(savedSearchID)
search.save() -- saves search to DB and returns a savedSearchID
search.addCondition(condition, operator, value)
search.updateCondition(searchConditionID, condition, operator, value)
search.removeCondition(searchConditionID)
search.getSearchCondition(searchConditionID) -- returns a specific search condition used in the search
search.getSearchConditions() -- returns search conditions used in the search
search.search() -- runs search and returns an array of item ids for results
search.getSQL() -- will be used by Dan for search-within-search

Scholar.Searches.getAll() -- returns an array of saved searches with 'id' and 'name', in alphabetical order
Scholar.Searches.erase(savedSearchID) -- deletes a given saved search from the DB

Scholar.SearchConditions.get(condition) -- get condition data (operators, etc.)
Scholar.SearchConditions.getStandardConditions() -- retrieve conditions for use in drop-down menu (as opposed to special search flags)
Scholar.SearchConditions.hasOperator() -- used by Dan for error-checking
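
A minimal usage sketch of the API above -- the collectionID value and the 'contains' operator are illustrative assumptions, not part of this commit:

    var search = new Scholar.Search();
    search.setName('Recent census articles');       // required before save() on a new search

    // special search flags
    search.addCondition('context', null, collectionID);  // search within a collection
    search.addCondition('recursive', 'true', null);      // as a string!
    search.addCondition('joinMode', 'any', null);        // match any condition

    // standard condition (currently 'title' and itemData fields)
    search.addCondition('title', 'contains', 'census');

    var savedSearchID = search.save();  // persist to the DB
    var itemIDs = search.search();      // returns an array of item ids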
2006-08-08 02:04:02 +00:00
Simon Kornblith
cbe3611182 references #110, implement CSL for citation styling
add Scholar.Cite and Scholar.CSL for parsing items into a bibliography using CSL. unfortunately, the output is not very good at the moment, and the format likely needs some changes, but I'm working with a few other people on getting it to that point.
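
For orientation, a hedged sketch of the intended call shape -- createBibliography and its arguments are guesses for illustration; only the Scholar.Cite and Scholar.CSL objects themselves are in this commit:

    // hypothetical: turn an array of items into formatted bibliography text
    var bib = Scholar.Cite.createBibliography(items, cslStyleXML);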
2006-07-22 01:25:46 +00:00
Simon Kornblith
c64e5c841f closes #78, figure out import/export architecture
closes #100, migrate ingester to Scholar.Translate
closes #88, migrate scrapers away from RDF
closes #9, pull out LC subject heading tags
references #87, add fromArray() and toArray() methods to item objects

API changes:
all translation (import/export/web) now goes through Scholar.Translate
all Scholar-specific functions in scrapers start with "Scholar." rather than the jumbled-up, un-namespaced Piggy Bank confusion
scrapers no longer specify items through RDF (the beginning of an item.fromArray()-like function exists in Scholar.Translate.prototype._itemDone())
scrapers can be any combination of import, export, and web (type is the sum of 1/2/4 respectively)
scrapers now contain functions (doImport, doExport, doWeb) rather than loose code (see the skeleton after this list)
scrapers can call functions in other scrapers, or simply call another translator to do the translation itself
export accesses items item-by-item, rather than accepting a huge array of items
MARC functions are now in the MARC import translator, and accessed by the web translators
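
A skeleton of a combined import/export translator under this scheme -- the sandbox calls inside the functions (Scholar.Item, Scholar.nextItem, Scholar.write) are assumptions for illustration; the commit itself only specifies the doImport/doExport/doWeb structure and the type bitmask:

    // type = 1 (import) + 2 (export) = 3

    function doImport() {
        var item = new Scholar.Item();   // assumed item constructor
        item.title = 'Example';
        item.complete();
    }

    function doExport() {
        var item;
        while (item = Scholar.nextItem()) {    // item-by-item access, per the commit
            Scholar.write(item.title + '\n');  // assumed output helper
        }
    }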

new features:
import now works
rudimentary RDF (unqualified Dublin Core only), RIS, and MARC import translators are implemented (although they are a little picky with respect to file extensions at the moment)
items appear as they are scraped
MARC import translator pulls out tags, although this seems to slow things down
no icon appears next to the URL when Scholar hasn't detected metadata, since this seemed somewhat confusing

apologies for the size of this diff. I figured if I was going to re-write the API, I might as well do it all at once and get everything working right.
2006-07-17 04:06:58 +00:00
Simon Kornblith
45b9234996 addresses #78, figure out import/export architecture
- changes scrapers table to translators table; all import/export/web translators now belong in this table
- adds Scholar.Translate to handle translation issues. eventually, Scholar.Ingester.Document will become part of this interface
- adds Scholar_File_Interface (in fileInterface.js) to handle UI for export and eventually import. (David, when you have time, please connect Scholar_File_Interface.exportFile to a button -- a sample hookup follows this list.)
- adds an export translator for MODS. all of our metadata, but not our hierarchy (projects, etc.), translates directly and unambiguously into valid MODS. eventually, we can use RDF or another format to handle hierarchy.
- adds utilities.getVersion() and utilities.inArray() for simplified scraper coding
- fixes minor interface issues with the nifty chrome scraping status window
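
A sample hookup for the exportFile request above -- the button id is a placeholder, and wiring via addEventListener is just one way to do it:

    // in the overlay's load handler
    document.getElementById('scholar-export-button')
        .addEventListener('command', function() {
            Scholar_File_Interface.exportFile();
        }, false);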
2006-06-29 00:56:50 +00:00
Simon Kornblith
7148852955 make generic Scholar.Utilities class and HTTP-dependent Scholar.Utilities.Ingester and Scholar.Utilities.HTTP classes in preparation for import/export filters; split off into separate javascript file 2006-06-26 14:46:57 +00:00
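
The rough shape of that split, as a sketch (constructor details are assumptions, not the committed code):

    Scholar.Utilities = function() {};              // generic helpers, no HTTP
    Scholar.Utilities.HTTP = {};                    // HTTP helpers live here
    Scholar.Utilities.Ingester = function() {};     // HTTP-dependent, for scrapers
    Scholar.Utilities.Ingester.prototype = new Scholar.Utilities();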
Dan Stillman
97940c7470 Replaced all instances of "Firefox Scholar" (not counting the repository URL) with "Scholar for Firefox" for now 2006-06-24 09:08:12 +00:00
Dan Stillman
726364d091 Scholar.History -- i.e. undo/redo functionality
Partially integrated into data layer, but I'm waiting to commit that part until I'm sure it won't break everything
2006-06-22 14:01:54 +00:00
Dan Stillman
448fde9ff1 Made the schema update system moderately less convoluted
- Broke schema functions into a separate object and got rid of the DB_VERSION config constant in favor of a toVersion variable in the _migrateSchema command (which isn't technically necessary either, since the version number at the top of schema.sql is now always compared to the DB version at startup, but it will help reduce the chance that someone updates the schema file without adding migration steps; a sketch of the startup check follows below)

- Removed Amazon scraper from schema.sql, as it will be loaded with the rest of the scrapers
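
A sketch of the startup check described above -- the two _get*Version() helpers are assumed names for illustration:

    // compare the version at the top of schema.sql to the DB's version at startup
    if (_getSchemaSQLVersion() > _getDBVersion()) {
        _migrateSchema();  // runs migration steps up to its internal toVersion
    }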
2006-06-07 01:02:59 +00:00
Simon Kornblith
85d8153024 Add library, hooks for scraping MARC records. 2006-06-03 22:26:01 +00:00
Simon Kornblith
639a006efb XPCOM-ize ingester, fix swapped first and last name in ingested info, stop ingesting pages field (this should be for pages of the source used, not the total number of pages, right?) 2006-06-02 03:19:12 +00:00
Dan Stillman
d323df4a2d window.confirm() replacement for XPCOM 2006-06-01 21:57:12 +00:00
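
For reference, the standard XPCOM route for this (nsIPromptService is a real Mozilla interface; whether this commit wraps it exactly this way is an assumption):

    var ps = Components.classes['@mozilla.org/embedcomp/prompt-service;1']
        .getService(Components.interfaces.nsIPromptService);
    var ok = ps.confirm(null, 'Scholar', 'Are you sure?');  // returns true/false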
Dan Stillman
38d87463af Item.inCollection(collectionID) -- test if an item belongs to a given collection (currently uses a DB call--I'll optimize that later)
Scholar.Notifier framework to handle update notifications between data layer and interface

 - Notifier.registerColumnTree(ref) and Notifier.registerItemTree(ref) pass back a unique hash that can be used to unregister the tree later with unregister*(hash) (and must be, lest we leak treeViews and probably entire browser windows if the browser window is closed without unregistering the treeView)

 - Data layer calls Scholar.Notifier.trigger(event, type, id) after various events (only two calls in the data layer at the moment -- more coming later)

 - Notifier.trigger() calls notify(event, type, id) on all registered trees of the appropriate type -- the data layer usually knows what collection the action pertains to, but we decided that it's cleaner to just let the tree decide what it wants to do rather than add all that logic into the data layer

 (Note: Item collection adds appear to be buggy on the interface side, but removes seem to be working)
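
A sketch of the registration lifecycle described above -- itemTreeView is a placeholder object with a notify(event, type, id) method, and unregisterItemTree follows the unregister*(hash) pattern named in the commit:

    // interface side: register the tree and keep the hash for cleanup
    var hash = Scholar.Notifier.registerItemTree(itemTreeView);
    // ... later, on window unload, or we leak the treeView:
    Scholar.Notifier.unregisterItemTree(hash);

    // data layer side: announce a change to all registered item trees
    Scholar.Notifier.trigger('remove', 'item', itemID);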
2006-06-01 20:03:53 +00:00
Dan Stillman
bc0463774f Move XPCOM-included files into subdirectory to keep separate from chrome files and rename overlay.js to scholar.js 2006-05-27 00:44:24 +00:00
Dan Stillman
f09def19f7 - Implemented singleton XPCOM component to store core Scholar objects
More details coming on Basecamp: http://chnm.grouphub.com/W161222

*** Important note: after checking out this code, be sure to delete the compreg.dat and xpti.dat files in your FF profile directory or else the extension will not load. They'll be regenerated when you start FF again, and you won't have to do it again unless we add interfaces or other components. Once I set up the XPI packaging system, this step won't be necessary. ***

- Localization properties are now loaded directly via nsIStringBundleService and available via Scholar.getString(name) -- removed stringbundle from XUL

- Updated status line in bottom right to reflect whether Scholar is loaded correctly or not (temporary, obviously)
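
A one-line usage sketch of the lookup described above (the property name is a placeholder):

    var label = Scholar.getString('pane.collections.delete');  // from the localization properties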
2006-05-27 00:20:27 +00:00