- 'isInstitution' parameter added to Item.setCreator(), Creators.getID(), Creators.add()
- 'isInstitution' property added to return from Creators.get() and Item.getCreator()
var obj = Scholar.Items.getNewItemByType(1);
obj.setField('title', 'Digital History for Dummies');
obj.setCreator(0, '', 'Center for History and New Media', 1, true); // true == institutional creator
var id = obj.save();
Note: 'firstName' field is ignored when 'isInstitution' is true
Conditions in ANY queries can be made required by passing 'true' as an extra parameter to addCondition() and updateCondition(). This can be used to limit ANY queries to particular collections (in place of the removed 'context' condition), and if there were an elegant way to expose it to the user for all ANY queries, it's something users might find very useful.
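A rough sketch of how that might look from the API side (the 'is' operator name and the collectionID variable are assumptions for illustration, not taken from the actual code):

var search = new Scholar.Search();
search.addCondition('joinMode', 'any', null);
search.addCondition('title', 'contains', 'history');
search.addCondition('title', 'contains', 'humanities');
// 'true' as the extra parameter marks this condition as required,
// limiting the ANY search to the given collection
search.addCondition('collectionID', 'is', collectionID, true);
var ids = search.search();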
- Implemented 'collectionID' and 'savedSearchID' conditions (a.k.a. search within a search) and removed special 'context' condition. Per my conversation with Dan, the 'recursive' flag is now a global flag that applies to all specified collectionIDs, which is less than ideal but probably better than the alternatives (e.g. having condition-specific recursive checkboxes). It does mean, however, that a "Search subfolders" checkbox is irrelevant if there are no collectionID conditions and should probably be greyed out until applicable.
Another side effect is that it's no longer possible to do an ANY search and return results only within a specific folder (though it can now be done by putting the ANY conditions in a subsearch). Since ANY searches are always annoying in this regard, what I might do is add a way to mark particular conditions as required even in ANY mode, which would allow for quite a lot of flexibility...
Note also that while 'collectionID' and 'savedSearchID' are standard conditions, they should probably be combined into a single condition on the interface side (like playlists and smart playlists under just 'Playlist' in iTunes).
- Now skips invalid/obsolete saved conditions in load()
- Remaining searchConditionIDs are no longer affected by removeCondition() (i.e. they now act like autoincrements), which should make interface code simpler
- Changed default join mode to ALL
- Fixed loading of saved searches with no search conditions
Closes #174, Don't load images and attached files when detecting content type in linkFromURL()
If a MIME type is not provided, Scholar.Files.linkFromURL() now uses an XMLHttpRequest HEAD request to get the content type without loading the file (thanks Simon for the idea) -- see the sketch below
If a title is not provided, it tries to derive one from the URL (everything after the last slash), though not particularly intelligently
Note that order of title and mimeType parameters is now swapped
This code should be a bit smarter about unexpected conditions
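A minimal sketch of the idea (not the actual linkFromURL() internals; the variable names are illustrative):

var xhr = new XMLHttpRequest();
xhr.open('HEAD', url, false); // synchronous HEAD request -- headers only, no file body
xhr.send(null);
var mimeType = xhr.getResponseHeader('Content-Type');
// Fallback title: everything after the last slash in the URL
var title = url.substr(url.lastIndexOf('/') + 1);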
Addresses #169, add OpenURL interface hooks
Addresses #170, Put "Link" option before "Import" in drop-down menu
Fixes some advanced search flaws (there are still bugs)
Closes #152, Saved Searches (interface layer)
For now, advanced search IS a saved search.
There are still bugs. The 'search' icon is ugly. I wanted to get it out there, however.
Scholar should now attempt to process citation information from EndNote download links (MIME types application/x-endnote-refer and application/x-research-info-systems). In situations where Scholar cannot process the information, a standard helper app dialog will appear. This behavior is controlled by the preference extensions.scholar.parseEndNoteMIMETypes.
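For reference, something along these lines should toggle that preference from chrome code (a hypothetical sketch using the standard Mozilla preferences service, not code from this commit):

var prefBranch = Components.classes["@mozilla.org/preferences-service;1"]
	.getService(Components.interfaces.nsIPrefBranch);
prefBranch.setBoolPref("extensions.scholar.parseEndNoteMIMETypes", false); // disable interception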
Implemented advanced/saved search architecture -- to use it, create a new search with var search = new Scholar.Search(), add conditions to it with addCondition(condition, operator, value), and run it with search.search(). The standard conditions, with their respective operators, can be retrieved with Scholar.SearchConditions.getStandardConditions(). The others are special search flags and are specified as follows (condition, operator, value):
'context', null, collectionIDToSearchWithin
'recursive', 'true'|'false' (as strings! -- it defaults to false if not specified, so just omit the condition if recursion isn't wanted), null
'joinMode', 'any'|'all', null
For standard conditions, currently only 'title' and the itemData fields are supported -- more coming soon.
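Putting that together, a basic hypothetical usage sketch -- the 'contains' operator and the collectionID variable are assumptions for illustration:

var search = new Scholar.Search();
search.addCondition('joinMode', 'all', null);
search.addCondition('title', 'contains', 'history');
search.addCondition('context', null, collectionID); // search within a collection
search.addCondition('recursive', 'true', null); // include subcollections
var itemIDs = search.search(); // array of item ids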
Localized strings created for the standard search operators
API:
search.setName(name) -- must be called before save() on new searches
search.load(savedSearchID)
search.save() -- saves search to DB and returns a savedSearchID
search.addCondition(condition, operator, value)
search.updateCondition(searchConditionID, condition, operator, value)
search.removeCondition(searchConditionID)
search.getSearchCondition(searchConditionID) -- returns a specific search condition used in the search
search.getSearchConditions() -- returns search conditions used in the search
search.search() -- runs search and returns an array of item ids for results
search.getSQL() -- will be used by Dan for search-within-search
Scholar.Searches.getAll() -- returns an array of saved searches with 'id' and 'name', in alphabetical order
Scholar.Searches.erase(savedSearchID) -- deletes a given saved search from the DB
Scholar.SearchConditions.get(condition) -- get condition data (operators, etc.)
Scholar.SearchConditions.getStandardConditions() -- retrieve conditions for use in drop-down menu (as opposed to special search flags)
Scholar.SearchConditions.hasOperator() -- used by Dan for error-checking
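A hypothetical end-to-end sketch of the saved-search API above (the 'contains' operator is an assumption):

// Create, name, and save a search
var search = new Scholar.Search();
search.addCondition('title', 'contains', 'digital history');
search.setName('Digital History'); // must be set before save() on a new search
var savedSearchID = search.save();

// Load and rerun it later
var saved = new Scholar.Search();
saved.load(savedSearchID);
var itemIDs = saved.search();

// Enumerate and delete saved searches
var all = Scholar.Searches.getAll(); // [{id: ..., name: ...}, ...] in alphabetical order
Scholar.Searches.erase(savedSearchID);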
Closes #76, implement extensible search/retrieval architecture for obtaining metadata
OpenURL COinS lookup is now implemented using a real search architecture system. At the moment, it works with Open WorldCat for books, CrossRef for journal articles (provided the COinS object contains a DOI or an ISSN), and PubMed when a PMID is available.
If you are simply removing an item from a project, it won't ask whether you want to delete files and notes.
ItemTreeView:
- notify() now works with multiple ids for action=modify.
- saveSelection() returns an array of selected IDs instead of saving it to a class variable
- rememberSelection(selection) takes an array of IDs.
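For example (hypothetical usage; 'itemsView' stands in for an ItemTreeView instance):

var selectedIDs = itemsView.saveSelection(); // array of selected item IDs
// ...rows change in response to a notify() call...
itemsView.rememberSelection(selectedIDs); // restore the previous selection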
OpenURL lookup now works for books. This means that all that's necessary to add scrapable book metadata to a page is an ISBN, as shown below:
<span class="Z3988" title="ctx_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.isbn=1579550088"></span>
Also, we can now scrape Open WorldCat and Wikipedia Book Sources pages with no specialized code involved.
I'm still looking for a better way of looking up journal article metadata. It's currently implemented with CrossRef, but CrossRef simply will not work without a DOI and is also incomplete (it only holds the last name of the first author).
Scholar.OpenURL.resolve(item) returns the URL that retrieves an item from the user's OpenURL resolver. This means we can implement a "find in my library" feature.
Scholar.OpenURL.discoverResolvers() returns a list of available resolvers for the user's current location (by IP address).
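A hypothetical "find in my library" sketch built on those two calls:

var resolvers = Scholar.OpenURL.discoverResolvers(); // resolvers available for the user's IP
var url = Scholar.OpenURL.resolve(item); // URL for this item at the user's OpenURL resolver
if (url) {
	window.open(url); // send the user to their library's copy
}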
Closes #163, make translator API allow creator types besides author
Import and export in the multi-ontology RDF format should now work properly. Collections, notes, and See Also relations are all preserved. More extensive testing will be necessary later.
Fixed ignoreCase logic (and also set all but CharacterSets to false, since there's no reason for them to be true)
Also made CachedTypes.getID() and getName() return false and '', respectively, on unknown types rather than letting them hit the error (there's still the 'invalid * type' debug message)
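For instance (assuming Scholar.ItemTypes is one of the CachedTypes-based classes):

var typeID = Scholar.ItemTypes.getID('no-such-type');
if (!typeID) {
	// Unknown type -- getID() now returns false instead of hitting the error
}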
- We might want to do this for regular files as well? I think we need a discussion on how Citation will be presented to the user, and if that will save the web page, and all that jazz.
- eliminates "unresponsive script" message on import/export
I tried to make a progress bar that actually provides useful information, but for some reason XUL interface updates are done asynchronously and thus don't actually happen while the import/export operation is running. The code is there, but disabled, in case there's a solution to this issue; I searched and couldn't find one.