June 11th, 2007
This is an update to my original post from April on my Rails benchmarking. See this URL for the original.
I have uploaded a more complete PDF of my results from the benchmarks, along with some pretty graphs and stuff.
We had 7 different combinations of web server, connector and application server, including:
Lighttpd 1.4 + SCGI + rails_scgi_cluster
Apache 2.0 + FastCGI + Rails FCGI
Lighttpd 1.4 + FastCGI + Rails FCGI
We had 2 separate test rounds:
Test Round 3 consists of executing 500 non-cached queries against LibraryFind. Because the queries will not hit the cache, each individual query takes much longer to complete, so we get an idea of how well each particular webserver/appserver combination handles concurrent sessions.
Test Round 4 consists of executing 500,000 cached queries against LibraryFind. Because the queries are cached, the back-end remote OAI queries are eliminated, so we are testing just the speed of the actual webserver/appserver/database. Because only the webserver/appserver combinations are changing, we can eliminate the database as a variable. The database is otherwise idle during these tests.
Non-Cached Test Duration: (Test Round 3)
Non-Cached Transaction Rate: (Test Round 3)
As you can see from the above graph, of the 7 variations the Apache 2.2/mod_proxy_balancer/mongrel_cluster instance on the far left completed fastest and had the highest number of transactions per second, beating its closest competitor, Apache/FastCGI, by nearly 25%.
Cached Test Duration: (Test Round 4)
Cached Transaction Rate: (Test Round 4)
Again, with Test Round 4 it is clear that our Apache 2.2/mod_proxy_balancer/mongrel_cluster combination beats out the competition.
People have also asked me for more information on the actual setups of Apache, etc., which I will post soon as well.
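In the meantime, here is a minimal sketch of what an Apache 2.2 mod_proxy_balancer front end for a mongrel cluster typically looks like. The ports, hostnames and member count here are hypothetical, not the actual OSU configuration:

```apache
# Hypothetical example -- not the actual OSU configuration.
# Requires mod_proxy, mod_proxy_http and mod_proxy_balancer.
<Proxy balancer://mongrel_cluster>
    BalancerMember http://127.0.0.1:8000
    BalancerMember http://127.0.0.1:8001
    BalancerMember http://127.0.0.1:8002
</Proxy>

ProxyPass / balancer://mongrel_cluster/
ProxyPassReverse / balancer://mongrel_cluster/
```

Each BalancerMember corresponds to one running mongrel instance; mod_proxy_balancer spreads incoming requests across them.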
Lastly, Terry has added support for memcached to replace the SQL session store that seems to be broken with multiple mongrel instances, so I will try to do some comparison benchmarks to see how much memcached speeds things up.
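For the curious, a Rails 1.2-era memcached session store is typically enabled with something like the sketch below in environment.rb. Option names vary across Rails versions, so treat this as an assumption rather than LF's actual configuration:

```ruby
# environment.rb -- hypothetical sketch; exact option names vary by Rails version
config.action_controller.session_store = :mem_cache_store

# Point the store at a local memcached daemon (requires the memcache-client gem);
# the namespace below is made up for the example.
ActionController::Base.session_options[:cache] =
  MemCache.new('127.0.0.1:11211', :namespace => 'lf_session')
```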
April 30th, 2007
BleakHouse is a Rails plugin for finding memory leaks. It tracks ObjectSpace for your entire app, and produces charts of references by controller, by action, and by object class.
April 19th, 2007
From the Ruby Inside blog comes this reference to a cleverly named Rails plugin called The MOle. From the MOle plugin website:
The MOle allows you to precisely analyze how your customers are interacting with your rails application. Instead of sitting on the console and watching your controller actions and db queries fly by, you can easily leverage the MOle and let it do this work for you by trapping events of interest. This plugin allows you to figure out if your latest application features are a success or a bust. You will be able to trap certain user interactions and record them for your next iterations. This is not yet another page hit or heat map type plugin, within a few steps you will be able to monitor your users interactions LIVE and assess your application usability from the comfort of your own machine…
Check out the video screencast to see it in action.
April 3rd, 2007
So when we released LF 0.7 I was pretty pleased that we were able to include our own OpenURL resolver. The marriage of metasearch and OpenURL is one of those no-brainer concepts, but in order for it to work really well, I wanted OpenURL resolution to be directly a part of LF. For OSU and anyone without an OpenURL resolver, this will work great. However, for those that have taken the time to purchase and set up their own tools (like SFX), having OpenURL resolution within the application represented a problem. Soo…
Thanks to Ross Singer, who provided some documentation and the location of the GA Tech SFX server to test on, I’ve been able to add SFX support to LF. This means that by setting a config value in the environment.rb file, one can configure LF to work with SFX or the local LF OpenURL repository.
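The actual setting isn't shown here, so this is a hypothetical illustration of the kind of environment.rb toggle involved (these key names are made up, not LF's real ones):

```ruby
# environment.rb -- hypothetical key names, for illustration only
RESOLVER_TYPE = 'sfx'    # or 'local' to use LF's built-in OpenURL repository
RESOLVER_URL  = 'http://sfx.example.edu/sfx_local'
```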
Of course, the next question might be: will we support other OpenURL resolvers? The answer to that question is easy: sure, if they provide an open API for querying. SFX makes it easy to retrieve an XML representation of the request results, making the service easy to use and parse. Of course, at present, this means that certain vendor tools are off limits. Innovative Interfaces’ OpenURL software, for example, would be out-of-bounds because it lacks such API support. So if the vendor of your OpenURL tool doesn’t currently support an API that returns XML, talk to them. Get them to support it. As soon as they do, give us a holler, because support can then be added in just a couple of lines of code.
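To illustrate the parsing side, here is a sketch in Ruby using the standard-library REXML. The XML below is a simplified stand-in for what an SFX-style resolver can return; real SFX responses use a richer schema, and the element names here are assumptions for the example:

```ruby
require 'rexml/document'

# Illustrative only: a simplified stand-in for the kind of XML an SFX-style
# resolver can return. Real responses have a richer schema.
SAMPLE_RESPONSE = <<XML
<ctx_obj_set>
  <ctx_obj>
    <target>
      <target_name>EBSCOHOST_ACADEMIC_SEARCH</target_name>
      <target_url>http://example.edu/full-text</target_url>
      <service_type>getFullTxt</service_type>
    </target>
  </ctx_obj>
</ctx_obj_set>
XML

# Pull each target's name, URL and service type out of the response.
def parse_targets(xml)
  doc = REXML::Document.new(xml)
  targets = []
  doc.elements.each('//target') do |t|
    targets << {
      :name => t.elements['target_name'].text,
      :url  => t.elements['target_url'].text,
      :type => t.elements['service_type'].text
    }
  end
  targets
end
```

Once the targets are in plain Ruby hashes like this, wiring them into a results page really is just a couple of lines of code.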
April 2nd, 2007
After fighting with concurrency issues with the OSU Libraries implementation of LibraryFind, I decided to do some investigation into the best way to implement the webserver / application server. Ruby suffers from a severe lack of threadiness. That’s my word, don’t steal it!
After scouring the Internet… the whole Internet… I found quite a bit of discussion about how this particular combination was great, but this other combination was even better and this other combination was horrible. *gasp*
So, being the perfectionist that I am (my wife would disagree), I decided to put them to the test, just like the MythBusters.
I put together a couple of different test cases. I call these “Round 3” and “Round 4”. Where did rounds 1 and 2 go? I don’t know.
Test Round 3 consists of executing 500 non-cached queries against the LibraryFind searcherer, /record/retrieve, using 30 different search terms in quasi-random order. I was hoping to get together a list of 500 different search terms, but my fingers got tired after 30, so I stuck with that. It seemed to do the trick, though! To test concurrency I ran 5 concurrent transactions. The server I was testing on is a dual-processor, dual-core Opteron (from Sun Microsystems. If I say their name enough, maybe they’ll give me free stuff… ok, this was a Sun Fire X4200!) so much more than 5 concurrent searches (4 really) is just testing how fast your kernel’s SMP or NUMA code is.
Test Round 4 consists of executing 500,000 cached queries against the same searcherer, /record/retrieve, this time using the exact same search term, over and over… and over. Again, I ran 5 concurrent transactions to put the system under as much load as it could reasonably tolerate without getting annoyed with me. Not a true load test, but a good idea of how well the system can retrieve records from cache and spit them back to the user.
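The harness itself isn't shown in this post, but a Round 3-style workload could be generated with a minimal Ruby sketch like this. The search terms are invented (the actual 30 aren't listed), and a fixed seed stands in for the “quasi-random order”:

```ruby
# Sketch of a Round 3-style workload: 500 queries drawn quasi-randomly from a
# small term list. These terms are made up; the post's actual 30 aren't shown.
TERMS = ['salmon', 'oregon history', 'metadata harvesting',
         'ruby on rails', 'oceanography', 'open source']

def build_workload(terms, total, seed)
  srand(seed)  # fixed seed makes the "quasi-random" order repeatable across runs
  (1..total).map { terms[rand(terms.length)] }
end

# Turn each term into a request path against the LibraryFind search action.
def to_paths(queries)
  queries.map { |q| "/record/retrieve?query=#{q.gsub(' ', '+')}" }
end

paths = to_paths(build_workload(TERMS, 500, 1))
```

A driver would then hand these paths to 5 worker threads (or to a load tool like siege) to generate the concurrent transactions.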
The results were somewhat, but not entirely, surprising. Of course, using plain CGI worked reasonably well, but was slow. It was so slow, in fact, that I ended up killing the Apache/CGI test after it had completed only about 8,600 of its 500,000 transactions, because it had already taken longer than Apache/mongrel had for the FULL 500,000… yes, Apache/CGI took more time to complete less than 2% of the test than Apache/mongrel took for all of it.
Between web servers, Apache 2.2.4 in its worker MPM mode seemed to perform slightly better than Lighttpd 1.4.13 when proxying to mongrel. The opposite was true for Apache 2.0.59/FastCGI (I couldn’t get FastCGI to run against 2.2.4) and Lighttpd 1.4.13/FastCGI. Lighttpd seems to perform better when using FastCGI. After the horrible showing for Apache/CGI I didn’t even try Lighttpd/CGI. I also could not get Apache to work well with scgi_rails.
All in all, I found the combination of Apache 2.2.4 using mod_proxy_balancer to talk to 10 instances of mongrel 1.0.1 (using mongrel_cluster to manage the mongrel instances) to be the fastest, most efficient combination. I also like mongrel_cluster’s ability to manage the independent mongrel instances.
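For reference, managing a cluster like that with mongrel_cluster comes down to a short YAML file. The paths and ports below are hypothetical, not the actual OSU setup:

```yaml
# config/mongrel_cluster.yml -- hypothetical paths and ports
cwd: /var/www/libraryfind
environment: production
address: 127.0.0.1
port: "8000"        # first instance; the rest count up from here
servers: 10         # ten mongrel instances, ports 8000-8009
pid_file: tmp/pids/mongrel.pid
```

With that in place, `mongrel_rails cluster::start` and `cluster::stop` manage all ten instances together, which is the management convenience mentioned above.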
Below are some links to a summary of my findings, and some graphs of the test durations and transaction rates.