Faster - Always working to set the federated search speed record.
Better - Open source: extensible, extendable, understandable.
Easier - Two clicks: one click to find, one click to get.
Download - Get it now, free!

LibraryFind news and notes

A couple of interesting news items and notes for the LibraryFind community.

  1. I am currently working with our first non-academic institution, set to go live before the first of the year. The Oregon State Library will be the first library to move to the 0.9 branch of the codebase (which means I probably need to finish my optimizations and close the branch). I'm excited about this for a number of reasons. First, LibraryFind was initially funded by a grant provided by the State Library, so it's nice to see that investment coming back to them. Second, it will give us a chance to test a number of assumptions (which I believe hold in an academic environment) against a non-academic audience that is primarily interested in policy questions.
  2. As mentioned above, the LibraryFind 0.9 branch should wrap up prior to the first of the year. In the meantime, I will be posting a point release to the 0.8.5.x branch that incorporates some of the optimizations that will show up in 0.9, as well as one CONSTANT that was left out of the environment.rb.example file. That will likely happen Monday.

--TR

LibraryFind 0.8.5.2 available

Reposted from [http://oregonstate.edu/~reeset/blog/archives/565]

LibraryFind 0.8.5.2 has officially been tagged and posted to the libraryfind.org website. You can get the tarball here: http://libraryfind.org/release-0.8.5.2.tar.gz. This release has admittedly been a long time coming. What's held it up? Primarily, work that we were doing for people interested in using LibraryFind outside of the public view. Of course, the result of some of that work has been integrated into the 0.8.5.2 build, which I think will make the overall application better and more reliable.

The other thing that slowed the release of 0.8.5.2 was the parallel development of 0.9.0. The 0.9.0 branch represents a different direction in the UI: it will be much more responsive, allowing users to stop a query at any point and view the results retrieved so far, see per-target query and search status, and so on. Because 0.9.0 represented, in many ways, a redesign of the UI framework, making it all work together took more time as well. Fortunately, at this point, the 0.9.0 test branch is also feature complete, so the turnaround between the 0.8.5.2 and 0.9.0 builds should be a short one.

--TR

The blog is back!

July 23rd, 2008

As you may have noticed, we have not been posting recently to the LibraryFind blog. Basically, we ended up breaking our non-standard blogging solution a while back, and took way too much time getting WordPress installed and our archived content moved over. However, we are now up and running on WordPress, which should allow us to concentrate on writing the blog rather than maintaining it. :-)

A couple of quick notes - we are getting ready to release the 0.8.5 version of LibraryFind, so check back soon for news of that release. Also, if folks out there are interested in helping us improve our documentation (both technical and user-side), please drop us a note!

LibraryFind 0.8.2 Released

September 18th, 2007

After an amazingly long absence of posting, I’m happy to write that we’ve just released the 0.8.2 version of LibraryFind! There are many improvements in the 0.8 branch. These include: 

* Improved out-of-the-box user interface
* Reworked HTML/CSS architecture to support easier design customization
* New generic graphic design for the out-of-the-box UI
* Many, many bugfixes

A current list of closed tickets for the 0.8 branch can be found at https://trac.library.oregonstate.edu/projects/libraryfind/query?status=closed&milestone=0.8.


And now that 0.8.2 is ready, believe it or not, 0.8.3 is just around the corner. We are setting up testing of the 0.8.3 branch now, and hope to have it out within the next month.

Overcoming the lag

June 14th, 2007

One of the most frustrating aspects of federating out a search process is the lag time in receiving results back from the various query targets (especially with, but not limited to, Z39.50 queries). Ideally, at some point there will be an infrastructure service that harvests from all of our content providers and gives us the ability to work with locally (or near-locally) indexed data instead of having to bootstrap via federated querying.

Until that time, however, we need to be pragmatic in order to provide the best user experience possible. Up through LF 0.8 (yes, 0.8 is not quite released yet, but will be very shortly), our approach to the client/server interaction has been to hold an HTTP request open until we can return the full set of search results to the user. Since we post-process the results to improve relevancy, we need to wait until all of the query targets have responded before we can give the user anything. This, in turn, creates open HTTP processes that last way too long - sometimes up to 30 seconds.
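
To make the problem concrete, here is a rough sketch of that blocking pattern, assuming a Rails-style controller. The class and method names (Collection.selected, target.search, Ranker.rank) are illustrative placeholders, not the actual LibraryFind code:

    # A sketch of the "hold the request open" approach described above.
    # All class and method names here are illustrative placeholders,
    # not the actual LibraryFind code.
    class SearchController < ApplicationController
      def results
        targets = Collection.selected(params[:collections])

        # Fire one worker thread per remote target (Z39.50 host, XML gateway, etc.).
        threads = targets.map do |target|
          Thread.new { target.search(params[:q]) }
        end

        # Block until every target has answered (or timed out). With slow
        # Z39.50 hosts, this is where the request can sit for up to ~30 seconds.
        raw_results = threads.map { |t| t.value }.flatten

        # Relevancy post-processing needs the full set, so it cannot start
        # until the slowest target has returned.
        @results = Ranker.rank(raw_results, params[:q])

        render :action => 'results'
      end
    end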

So, how to improve on this? In truth, we have no control over the response time of the remote query targets - we cannot make them quicker. Our only approach with these targets is a longer-term effort to work with the providers to allow us better access to their servers or their data (sometimes through an XML gateway instead of Z39.50; optimally through harvesting via OAI-PMH). Since we cannot improve the efficiency of the queries we farm out to remote targets, we need to focus on how to minimize their effect on our users. One method we currently employ is search result caching: we use a tiered caching system so that a user who enters a search some other user has previously performed gets their results quite quickly. By caching the results of the first instance of the search, we do not need to go out and query the remote targets again.
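
As an illustration only (the real tiered cache in LibraryFind may well be organized differently), a result cache of this sort boils down to keying stored result sets on the query and the set of targets, and only going out over the network on a miss:

    # Illustrative sketch of search-result caching; not the actual
    # LibraryFind implementation, which is tiered and more involved.
    class ResultCache
      def initialize
        @store = {}   # first tier: a simple in-memory hash
      end

      # Cache hits require the same terms, so normalize lightly
      # (case and extra whitespace) before building the key.
      def key_for(query, target_ids)
        [query.strip.downcase.squeeze(' '), target_ids.sort.join(',')].join('|')
      end

      # Returns cached results when present; otherwise runs the supplied
      # block (the expensive federated search) and stores its results.
      def fetch(query, target_ids)
        key = key_for(query, target_ids)
        @store[key] ||= yield
      end
    end

    # Usage: the remote targets are queried only when the key is new.
    #   cache.fetch(params[:q], collection_ids) do
    #     run_federated_search(params[:q], collection_ids)
    #   end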

Caching works best within user sessions, when a user is more likely to replicate a search precisely. It is less effective across user sessions, because users need to utilize the same precise search terms for it to work. So, caching is no panacea when it comes to reducing the lag, though it does improve the situation.

One of our big changes in the 0.9 branch will be moving away from the persistent HTTP request and instead implementing a 'constant ping' approach. Not only will this improve efficiency on the LF server end (holding HTTP requests open is extremely inefficient, especially with Rails), but it will give us greater flexibility in using the user interface to minimize the impact of slow target responses. We will be able to better inform the user of the status of their search, and we can even let the user stop a search mid-stream and look at the results that are currently present. It also opens the door to different UI approaches - for instance, we could more easily devise an A9-style UI, if desired.
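
A minimal sketch of what that 'constant ping' interaction could look like on the server side, assuming a background search job and a polling client; the SearchJob model and these action names are assumptions, not the shipped 0.9 API:

    # Sketch of a polling ("constant ping") interface; SearchJob and the
    # action names are assumptions, not the actual 0.9 code.
    class QueryController < ApplicationController
      # Kick off the federated search in the background and return an id
      # immediately, so no HTTP request is held open while targets respond.
      def start
        job = SearchJob.launch(params[:q], params[:collections])
        render :json => { :job_id => job.id }
      end

      # The client pings this action every second or two; partial results
      # and per-target status can be displayed as soon as they arrive.
      def status
        job = SearchJob.find(params[:job_id])
        render :json => { :done    => job.done?,
                          :targets => job.target_statuses,
                          :results => job.results_so_far }
      end

      # Because the client drives the interaction, the user can stop a
      # search mid-stream and keep whatever results are already present.
      def stop
        SearchJob.find(params[:job_id]).cancel
        head :ok
      end
    end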

None of the above is a silver-bullet solution for dealing with the lag. Having data available for local or near-local indexing and access is still going to be more effective and efficient than federating a search out across various provider targets, but the infrastructure to support such a scenario isn't in place yet. Until then, a combination of bootstrap approaches will continue to improve the experience LF provides users.