I've been a fan of the acts_as_ferret plugin for a while and have had great success using it while developing Rails applications under WEBrick. It's a cinch to use and highly configurable. However, once I deployed applications that used acts_as_ferret to production, things started to come apart: acts_as_ferret simply didn't work in a multi-process (read: fastcgi) or multi-server environment.
The problem is due to concurrency issues between processes. Each fastcgi process assumes it is the only one using the index files and does not respect other processes' actions. Under heavy load, with concurrent read/write access, file locking breaks down and the application begins to throw errors (including some nasty segfaults).
The problem gets worse when there are not only multiple processes on a server but multiple servers as well. In this configuration, keeping the ferret index files separate on each server isn't an option, as that would lead to out-of-sync indices. The most obvious solution is to use a centralized location for these files and link each server to it. But that is the same situation as above, only this time there are more processes!
So, is acts_as_ferret a 'development-only' plugin that's not ready for production? Fortunately not!
The Solution
There is now a DRb Server implementation for acts_as_ferret. From the authors:
"In production environments most often multiple processes are responsible for serving client requests. Sometimes these processes are even spread across several physical machines.
Just like the database, the Ferret index of an application is a unique resource that has to be shared among all servers. To achieve this, acts_as_ferret comes with a built in DRb server that acts as the central hub for all indexing and searching in your application."
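To make the "central hub" idea concrete, here is a minimal toy sketch of the pattern using plain Ruby DRb. This is not acts_as_ferret's actual server code, just an illustration of the architecture: one process owns the index, and every client reads and writes through it, so only one process ever touches the index files.

```ruby
require 'drb'

# Toy stand-in for a search index: one object, owned by a single process.
class ToyIndex
  def initialize
    @docs = {}
  end

  def add(id, text)
    @docs[id] = text
  end

  def search(term)
    @docs.keys.select { |id| @docs[id].include?(term) }
  end
end

# In production the server would run as its own process (acts_as_ferret
# ships a script for this); here it is started in-process for brevity.
DRb.start_service(nil, ToyIndex.new)

# Any number of client processes would connect to the same URI.
client = DRbObject.new_with_uri(DRb.uri)
client.add(1, 'ruby ferret indexing')
client.add(2, 'java solr engine')
puts client.search('ferret').inspect   # prints [1]
```

Because every add and search is a method call on the one server-side object, the concurrency problems described above disappear: the index has a single writer, just like the database has a single authoritative store.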
Perfect! Now acts_as_ferret is ready for production environments. The only question is, "How does it scale?"
I recently ran across another 'acts_as_' plugin for Rails, called acts_as_solr, that has many of the same features (and some that acts_as_ferret doesn't), but its index server is implemented in Java using the apache-solr engine. I began to wonder which one was faster, and thus would scale better. Time for a benchmark!
The Benchmark
System
- CPU: Intel(R) Pentium(R) 4 CPU 3.00GHz (using non-smp kernel)
- RAM: 2GB
- OS: Kubuntu Edgy (latest updates as of 3/14/07)
- Ruby: ruby 1.8.4 (2005-12-24) [i686-linux]
- Rails: 1.1.6
- MySQL: mysql Ver 14.7 Distrib 4.1.15, for pc-linux-gnu (i486) using readline 5.1
- ferret gem: 0.11.3
- acts_as_ferret: svn://projects.jkraemer.net/acts_as_ferret/trunk/plugin/acts_as_ferret (as of 3/14/07)
- acts_as_solr: http://opensvn.csie.org/acts_as_solr/trunk (as of 3/13/07)
- apache-solr: apache-solr-1.1.0-incubating
Hard Drive information ('hdparm -I /dev/hda')
ATA device, with non-removable media
Model Number: WDC WD800BB-22HEA1
Serial Number: WD-WMAJ51461221
Firmware Revision: 14.03G14
Standards:
Supported: 6 5 4
Likely used: 6
Configuration:
Logical max current
cylinders 16383 16383
heads 16 16
sectors/track 63 63
--
CHS current addressable sectors: 16514064
LBA user addressable sectors: 156301488
device size with M = 1024*1024: 76319 MBytes
device size with M = 1000*1000: 80026 MBytes (80 GB)
Capabilities:
LBA, IORDY(can be disabled)
Standby timer values: spec'd by Standard, with device specific minimum
R/W multiple sector transfer: Max = 16 Current = 0
Recommended acoustic management value: 128, current value: 254
DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 *udma5
Cycle time: min=120ns recommended=120ns
PIO: pio0 pio1 pio2 pio3 pio4
Cycle time: no flow control=120ns IORDY flow control=120ns
Commands/features:
Enabled Supported:
* SMART feature set
* Power Management feature set
* Write cache
* Look-ahead
* Host Protected Area feature set
* WRITE_BUFFER command
* READ_BUFFER command
* DOWNLOAD_MICROCODE
SET_MAX security extension
Automatic Acoustic Management feature set
* Device Configuration Overlay feature set
* Mandatory FLUSH_CACHE
* SMART error logging
* SMART self-test
HW reset results:
CBLID- above Vih
Device num = 0 determined by CSEL
Network
Everything was done on a single machine (see assumptions as to why).
Benchmark Routine
This routine simply loops a specified number of times, evaluates a given routine, and outputs basic statistical information. It catches all exceptions thrown (caused by passing invalid characters as part of the query) and treats those timing instances as noise (not included in the success totals).
def benchmark_routine(times, routine)
  success_total_time = 0.0
  error_total_time = 0.0
  successes = 0
  errors = 0
  high = nil
  low = nil
  times.times do
    start_time = Time.now
    begin
      eval routine
      time = Time.now - start_time
      high = time if high.nil? or time > high
      low = time if low.nil? or time < low
      success_total_time += time
      successes += 1
    rescue Exception
      error_total_time += Time.now - start_time
      errors += 1
    end
  end
  return {:successes => successes,
          :success_total_time => success_total_time,
          :success_high => high,
          :success_low => low,
          :success_average => success_total_time / successes,
          :errors => errors,
          :error_total_time => error_total_time}
end
ActiveRecord Class
Note: only one of the 'acts_as' declarations was uncommented at a time.
class Article < ActiveRecord::Base
  # acts_as_solr :fields => [ :url, :title, :description ]
  # acts_as_ferret :fields => [ :url, :title, :description ], :remote => true
end
Database
The table I used contained 414 rows of news articles pulled from various news feeds (Yahoo, MSNBC, etc.). The columns indexed were the url of the full article, the title of the article, and a synopsis of the article content. The article table is as follows:
CREATE TABLE `articles` (
  `id` int(11) NOT NULL auto_increment,
  `url` text NOT NULL,
  `title` varchar(255) NOT NULL default '',
  `description` text NOT NULL,
  `article_date` datetime default NULL,
  `created_at` datetime default NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Assumptions
All routines were performed on the same local machine, so these tests do not account for any latency that may occur in an environment distributed across different machines on a network. However, since both plugins open sockets to connect to their respective servers, any network latency should affect them roughly equally and thus should not change their relative performance.
I am assuming that 414 rows of aggregated news article summaries from several different sites provide a large enough pool of information that search terms pulled from these articles can be considered random.
Routines
There were four different routines run for each plugin: random search with no background updates, cached search with no background updates, random search with continuous background updates, and cached search with continuous background updates. Each routine/plugin combination was benchmarked at 10, 100, 1000, and 10000 queries.
Random search with No Background Updates:
The routines I used for random searching are as follows:
acts_as_solr
Article.find_by_solr(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])
or
acts_as_ferret
Article.find_by_contents(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])
Article.find(:first, :order => 'rand()').description.split(' ')[rand()] is used as the search term, pulling a random word from a random article's description to provide random queries. The idea here is to measure non-cached query performance. I am not accounting for the time taken by the query that fetches a random word from the database.
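One detail worth noting if you reproduce this: Ruby's Array#[] truncates a Float index, and a bare rand() returns a Float in [0, 1), so words[rand()] always resolves to index 0. A helper that samples uniformly passes the word count to rand (this is a hypothetical helper for illustration, not code from either plugin):

```ruby
# Pick a uniformly random word from a description string.
# rand(n) returns an Integer in 0...n, which is a valid array index;
# a bare rand() would be a Float below 1.0 and truncate to index 0.
def random_word(description)
  words = description.split(' ')
  words[rand(words.size)]
end

puts random_word('the quick brown ferret')
```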
Cached search with No Background Updates:
As many of the feeds used to generate the articles table were of a technical nature, I chose the word 'computer' to maximize the number of matches. The routine I used for cached searching is as follows:
acts_as_solr
Article.find_by_solr('computer')
or
acts_as_ferret
Article.find_by_contents('computer')
Random Search with Continuous Background Updates:
Using two different 'script/console' sessions, one continuously selected a random article and saved it (thus updating the indices), while the other continuously ran the random query described above.
Cached search with Continuous Background Updates:
Using two different 'script/console' sessions, one continuously selected a random article and saved it (thus updating the indices), while the other continuously ran the cached query described above.
Results
Here is the output from two script/console sessions (one for acts_as_solr and the other for acts_as_ferret):
acts_as_solr Random Query Test (with no background updates):
>> Article.rebuild_solr_index
=> true
>> benchmark_routine(10, "Article.find_by_solr(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.082648, :error_total_time=>0.013225, :success_low=>0.009316, :successes=>9, :success_average=>0.0225318888888889, :errors=>1, :success_total_time=>0.202787}
>> benchmark_routine(100, "Article.find_by_solr(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.061882, :error_total_time=>0.0, :success_low=>0.005442, :successes=>100, :success_average=>0.01048266, :errors=>0, :success_total_time=>1.048266}
>> benchmark_routine(1000, "Article.find_by_solr(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.061038, :error_total_time=>0.129032, :success_low=>0.00526, :successes=>990, :success_average=>0.0103782050505051, :errors=>10, :success_total_time=>10.274423}
>> benchmark_routine(10000, "Article.find_by_solr(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.10479, :error_total_time=>1.821459, :success_low=>0.005218, :successes=>9865, :success_average=>0.0105283624936645, :errors=>135, :success_total_time=>103.862296}
acts_as_ferret Random Query Test (with no background updates):
>> Article.rebuild_index
=> {}
>> benchmark_routine(10, "Article.find_by_contents(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.07014, :error_total_time=>0.0, :success_low=>0.007163, :successes=>10, :success_average=>0.014553, :errors=>0, :success_total_time=>0.14553}
>> benchmark_routine(100, "Article.find_by_contents(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.053758, :error_total_time=>0.0076, :success_low=>0.006882, :successes=>99, :success_average=>0.00910739393939394, :errors=>1, :success_total_time=>0.901632}
>> benchmark_routine(1000, "Article.find_by_contents(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>1.15512, :error_total_time=>0.193326, :success_low=>0.006969, :successes=>975, :success_average=>0.0103111794871795, :errors=>25, :success_total_time=>10.0534}
>> benchmark_routine(10000, "Article.find_by_contents(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.07639, :error_total_time=>1.869886, :success_low=>0.006948, :successes=>9770, :success_average=>0.00901357379733882, :errors=>230, :success_total_time=>88.0626160000002}
acts_as_solr Cached Query Test (with no background updates):
>> Article.rebuild_solr_index
=> true
>> benchmark_routine(10, "Article.find_by_solr('computer')")
=> {:success_high=>0.043805, :error_total_time=>0.0, :success_low=>0.003642, :successes=>10, :success_average=>0.0081675, :errors=>0, :success_total_time=>0.081675}
>> benchmark_routine(100, "Article.find_by_solr('computer')")
=> {:success_high=>0.050668, :error_total_time=>0.0, :success_low=>0.003121, :successes=>100, :success_average=>0.00472935, :errors=>0, :success_total_time=>0.472935}
>> benchmark_routine(1000, "Article.find_by_solr('computer')")
=> {:success_high=>0.074675, :error_total_time=>0.0, :success_low=>0.003167, :successes=>1000, :success_average=>0.00468432899999999, :errors=>0, :success_total_time=>4.68432899999999}
>> benchmark_routine(10000, "Article.find_by_solr('computer')")
=> {:success_high=>0.145258, :error_total_time=>0.0, :success_low=>0.003079, :successes=>10000, :success_average=>0.0047008854, :errors=>0, :success_total_time=>47.008854}
acts_as_ferret Cached Query Test (with no background updates):
>> Article.rebuild_index
=> {}
>> benchmark_routine(10, "Article.find_by_contents('computer')")
=> {:success_high=>0.055753, :error_total_time=>0.0, :success_low=>0.003231, :successes=>10, :success_average=>0.0095941, :errors=>0, :success_total_time=>0.095941}
>> benchmark_routine(100, "Article.find_by_contents('computer')")
=> {:success_high=>0.048506, :error_total_time=>0.0, :success_low=>0.003188, :successes=>100, :success_average=>0.00418007, :errors=>0, :success_total_time=>0.418007}
>> benchmark_routine(1000, "Article.find_by_contents('computer')")
=> {:success_high=>0.05537, :error_total_time=>0.0, :success_low=>0.003131, :successes=>1000, :success_average=>0.00417725, :errors=>0, :success_total_time=>4.17725}
>> benchmark_routine(10000, "Article.find_by_contents('computer')")
=> {:success_high=>0.13851, :error_total_time=>0.0, :success_low=>0.003122, :successes=>10000, :success_average=>0.0042286392, :errors=>0, :success_total_time=>42.286392}
acts_as_solr Random Query Test (with background updates):
Console 1:
>> while true
>> Article.find(:first, :order => "rand()").save
>> end
Console 2:
>> Article.rebuild_solr_index
=> true
>> benchmark_routine(10, "Article.find_by_solr(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.163176, :error_total_time=>0.0, :success_low=>0.005903, :successes=>10, :success_average=>0.035533, :errors=>0, :success_total_time=>0.35533}
>> benchmark_routine(100, "Article.find_by_solr(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.328657, :error_total_time=>0.03657, :success_low=>0.005748, :successes=>99, :success_average=>0.0402173939393939, :errors=>1, :success_total_time=>3.981522}
>> benchmark_routine(1000, "Article.find_by_solr(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.395172, :error_total_time=>0.528478, :success_low=>0.005612, :successes=>989, :success_average=>0.0381039544994945, :errors=>11, :success_total_time=>37.6848110000001}
>> benchmark_routine(10000, "Article.find_by_solr(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_high=>0.428329, :error_total_time=>5.812703, :success_low=>0.004934, :successes=>9852, :success_average=>0.0349491181485993, :errors=>148, :success_total_time=>344.318712}
acts_as_solr Cached Query Test (with background updates):
Console 1:
>> while true
>> Article.find(:first, :order => "rand()").save
>> end
Console 2:
>> Article.rebuild_solr_index
=> true
>> benchmark_routine(10, "Article.find_by_solr('computer')")
=> {:success_high=>0.109399, :error_total_time=>0.0, :success_low=>0.01582, :successes=>10, :success_average=>0.0307117, :errors=>0, :success_total_time=>0.307117}
>> benchmark_routine(100, "Article.find_by_solr('computer')")
=> {:success_high=>0.311115, :error_total_time=>0.0, :success_low=>0.004612, :successes=>100, :success_average=>0.02918882, :errors=>0, :success_total_time=>2.918882}
>> benchmark_routine(1000, "Article.find_by_solr('computer')")
=> {:success_high=>0.347518, :error_total_time=>0.0, :success_low=>0.003539, :successes=>1000, :success_average=>0.030654657, :errors=>0, :success_total_time=>30.654657}
>> benchmark_routine(10000, "Article.find_by_solr('computer')")
=> {:success_high=>0.919183, :error_total_time=>0.0, :success_low=>0.003729, :successes=>10000, :success_average=>0.0301908975999999, :errors=>0, :success_total_time=>301.908975999999}
acts_as_ferret Random Query Test (with background updates):
Console 1:
>> while true
>> Article.find(:first, :order => "rand()").save
>> end
Console 2:
>> Article.rebuild_index
=> {}
>> benchmark_routine(10, "Article.find_by_contents(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_total_time=>0.368735, :success_high=>0.194123, :error_total_time=>0.0, :success_low=>0.007293, :successes=>10, :success_average=>0.0368735, :errors=>0}
>> benchmark_routine(100, "Article.find_by_contents(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_total_time=>1.922619, :success_high=>0.164076, :error_total_time=>0.00814, :success_low=>0.006691, :successes=>99, :success_average=>0.0194203939393939, :errors=>1}
>> benchmark_routine(1000, "Article.find_by_contents(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_total_time=>20.138275, :success_high=>0.159272, :error_total_time=>0.396986, :success_low=>0.006772, :successes=>977, :success_average=>0.0206123592630502, :errors=>23}
>> benchmark_routine(10000, "Article.find_by_contents(Article.find(:first, :order => 'rand()').description.split(' ')[rand()])")
=> {:success_total_time=>202.389328, :success_high=>1.209112, :error_total_time=>3.954261, :success_low=>0.006643, :successes=>9755, :success_average=>0.0207472401845207, :errors=>245}
acts_as_ferret Cached Query Test (with background updates):
Console 1:
>> while true
>> Article.find(:first, :order => "rand()").save
>> end
Console 2:
>> Article.rebuild_index
=> {}
>> benchmark_routine(10, "Article.find_by_contents('computer')")
=> {:success_high=>0.118141, :error_total_time=>0.0, :success_low=>0.007559, :successes=>10, :success_average=>0.0348693, :errors=>0, :success_total_time=>0.348693}
>> benchmark_routine(100, "Article.find_by_contents('computer')")
=> {:success_high=>0.081537, :error_total_time=>0.0, :success_low=>0.004364, :successes=>100, :success_average=>0.01803433, :errors=>0, :success_total_time=>1.803433}
>> benchmark_routine(1000, "Article.find_by_contents('computer')")
=> {:success_high=>0.152265, :error_total_time=>0.0, :success_low=>0.003591, :successes=>1000, :success_average=>0.0346570519999999, :errors=>0, :success_total_time=>34.6570519999999}
>> benchmark_routine(10000, "Article.find_by_contents('computer')")
=> {:success_high=>0.257654, :error_total_time=>0.0, :success_low=>0.006658, :successes=>10000, :success_average=>0.0427653468, :errors=>0, :success_total_time=>427.653468}
Conclusion
The results were surprisingly close, with the largest margin of difference being approximately 0.01 seconds, and acts_as_ferret performing faster in most test cases. I honestly would have figured the Java implementation would be faster, given all the negative press out there about Ruby's performance in benchmarks.
Now, to be fair to the solr server, it does appear to have many features that the acts_as_ferret DRb server does not, and thus could be doing a lot more than just building the index files. I didn't look into this further, though it would make a good follow-up to see how much (if at all) these extra features affect these results, and whether they can be configured to improve performance.
Here is a breakdown of the results for the different tests:

Test                     Winner           Avg. Margin (s)
random (no updates)      acts_as_ferret   0.002734
random (with updates)    acts_as_ferret   0.012787
cached (no updates)      acts_as_ferret   0.000026
cached (with updates)    acts_as_solr     0.002395
3 comments:
Do you feel that the speed of the storage system affects the performance outcome? What storage were you using on your reference platform?
I'm not sure how these tests would differ on other hardware. The disk I was using was an ATA Western Digital WD800BB-22HEA1 with firmware revision 14.03G14.
I've updated this post and placed the results of 'hdparm -I /dev/hda' which has more information about the hard drive.
As you mentioned, Solr is more than just an indexing system, and it has many features that high-performance websites require. For example, you can lay out your index files across different file systems, or use in-memory indexes to search and index at the same time. So I would say: if performance is an issue for you, use Solr.
Regards
Behrang Javaherian
http://www.beyondng.com