Sunday, August 17, 2008

Q/A: What Thread-safe Rails Means

There's been a little bit of buzz about David Heinemeier Hansson's announcement that Josh Peek has joined Rails core and is about to wrap up his GSoC project making Rails thread-safe. To be honest, there probably hasn't been enough buzz, and there have been several misunderstandings about what it means for Rails users in general.

So I figured I'd do a really short Q/A about what effect Rails thread-safety will have on the Rails world, and especially the JRuby world. Naturally some of my opinions are reflected here, but most of this should be factually correct. I trust you will offer corrections in the comments.

Q: What does it mean to make Rails thread-safe?

A: I'm sure Josh or Michael Koziarski, his GSoC mentor, can explain in more detail what the work involved, but basically it means removing the single coarse-grained lock around every incoming request and replacing it with finer-grained locks around only those resources that need to be shared across threads. So for example, data structures within the logging subsystem have either been modified so they are not shared across threads, or locked appropriately to make sure two threads don't interfere with each other or render those data structures invalid or corrupt. Instead of a single database connection for a given Rails instance, there will be a pool of connections, allowing N database connections to be used by the M requests executing concurrently. It also means allowing requests to potentially execute without consuming a connection, so the number of live, active connections usually will be lower than the number of requests you can handle concurrently.
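
To make that concrete, here's a minimal Ruby sketch of the two models. This is my own illustration, not Josh's actual code or the real Rails internals; `handle`, `TinyConnectionPool`, and the connection strings are hypothetical stand-ins.

```ruby
require 'thread'

# A toy "request handler" standing in for real request processing.
def handle(request)
  "processed #{request}"
end

# Old model: one big lock around dispatch, so only one request runs at a time.
COARSE_LOCK = Mutex.new
def dispatch_with_coarse_lock(request)
  COARSE_LOCK.synchronize { handle(request) }
end

# New model: only shared resources are protected, for example a small pool of
# database connections that request threads check out and return.
class TinyConnectionPool
  def initialize(size)
    @available = Queue.new                              # Queue is itself thread-safe
    size.times { |i| @available << "connection-#{i}" }  # stand-ins for real connections
  end

  def with_connection
    conn = @available.pop                               # blocks if all are checked out
    yield conn
  ensure
    @available << conn if conn
  end
end

puts dispatch_with_coarse_lock("a request under the big lock")

pool = TinyConnectionPool.new(5)
pool.with_connection { |conn| puts handle("a request on #{conn}") }
```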

Q: Why is this important? Don't we have true concurrency already with Rails' shared-nothing architecture and multiple processes?

A: Yes, processes and shared-nothing do give us full concurrency, at the cost of having multiple processes to manage. For many applications, this is "good enough" concurrency. But there's a downside to requiring as many processes as concurrent requests: inefficient use of shared resources. In a typical Mongrel setup, handling 10 concurrent requests means you have to have 10 copies of Rails loaded, 10 copies of your application loaded, 10 in-memory data caches, 10 database connections...everything has to be scaled in lock step for every additional request you want to handle concurrently. Multiply the N copies of everything by M different applications, and you're eating many, many times more memory than you should.

Of course there are partial solutions to this that don't require thread safety. Since much of the loaded code and some of the data may be the same across all instances, deployment solutions like Passenger from Phusion can use forking and memory-model improvements in Phusion's Ruby Enterprise Edition to allow all instances to share the portion of memory that's the same. So you reduce the memory load by about the amount of code and data in memory that each instance can safely hold in common, which would usually include Rails itself, your static application code, and to some extent the other libraries loaded by Rails and your app. But you still pay the duplication cost for database connections, application code, and in-memory data that are loaded or created after startup. And you still have "no better" concurrency than with the coarse-grained lock, since Ruby Enterprise Edition is just as green-threaded as normal Ruby.
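
For reference, the forking idea looks roughly like this. This is a generic sketch of preload-then-fork, not Passenger's implementation; the worker count and the preloaded constant are made up.

```ruby
# Simplified sketch of fork-based, copy-on-write preloading (not Passenger's
# actual code). Everything loaded before the fork is shared with the child
# workers; connections and data created after the fork are per-process again,
# which is the duplication cost described above.

puts "loading framework and application code once in the parent..."
PRELOADED_CODE = Array.new(10_000) { |i| "pretend this is loaded code ##{i}" }

worker_pids = 4.times.map do |n|
  fork do
    # Each worker must still open its own database connection and build its
    # own runtime caches, even though PRELOADED_CODE is shared.
    puts "worker #{n} (pid #{Process.pid}) serving requests"
    sleep 1
  end
end

worker_pids.each { |pid| Process.wait(pid) }
```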

Q: So for green-threaded implementations like Ruby, Ruby EE, and Rubinius, native threading offers no benefit?

A: That's not quite true. Thread-safe Rails will mean that an individual instance, even with green threads, can handle multiple requests at the same time. By "at the same time" I don't mean concurrently...green threads will never allow two requests to actually run concurrently or to utilize multiple cores. What I mean is that if a given request ends up blocking on IO, which happens in almost all requests (due to REST hits, DB hits, filesystem hits and so on), Ruby will now have the option of scheduling another request to execute. Put another way, removing the coarse-grained lock will at least improve concurrency up to the "best" that green-threaded implementations can do, which isn't too bad.
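
Here's a tiny illustration of the effect. It's my own sketch with arbitrary timings, and it assumes the blocking call actually yields to Ruby's thread scheduler, which holds for plain Ruby IO and sleep but not for some C extensions.

```ruby
require 'thread'

# Two "requests" that mostly wait on IO. With the coarse-grained lock the
# second could not start until the first finished; without it, the green
# thread scheduler can run one while the other is blocked, so total wall-clock
# time is close to one wait rather than two.

def fake_request(name)
  Thread.new do
    sleep 0.5                          # stands in for a DB/REST/filesystem wait
    puts "#{name} finished"
  end
end

start = Time.now
[fake_request("request A"), fake_request("request B")].each { |t| t.join }
puts "elapsed: #{Time.now - start} seconds"   # roughly 0.5, not 1.0
```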

The practical implication of this is that rather than having to run a Rails instance for every request you want to handle at the same time, you will only have to run a certain constant number of instances for each core in your system. Some people use N + 1 or 2N + 1 as their metric to map from cores (N) to the number of instances you would need to effectively utilize those cores. And this means that you'd probably never need more than a couple Rails instances on a one-core system. Of course you'll need to try it yourself and see what metric works best for your app, but ultimately even on green-threaded implementations you should be able to reduce the number of instances you need.
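
Spelled out as a quick back-of-the-envelope calculation (just the rule-of-thumb arithmetic above, nothing app-specific):

```ruby
# The N + 1 and 2N + 1 rules of thumb, where N is the number of cores.
# Which rule (if either) fits your app is something you have to measure.
def instances_needed(cores, rule)
  rule == :n_plus_1 ? cores + 1 : 2 * cores + 1
end

(1..4).each do |cores|
  low  = instances_needed(cores, :n_plus_1)
  high = instances_needed(cores, :two_n_plus_1)
  puts "#{cores} core(s): roughly #{low} to #{high} instances"
end
```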

Q: Ok, what about native-threaded implementations like JRuby?

A: On JRuby, the situation improves much more than on the green-threaded implementations. Because JRuby implements Ruby threads as native kernel-level threads, a Rails application would only need one instance to handle all concurrent requests across all cores. And by one instance, I mean "nearly one instance" since there might be specific cases where a given application bottlenecks on some shared resource, and you might want to have two or three to reduce that bottleneck. In general, though, I expect those cases will be extremely rare, and most would be JRuby or Rails bugs we should fix.
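
If you want to see the difference for yourself, a sketch like this (arbitrary CPU-bound workload, nothing JRuby-specific in the code) finishes in roughly a quarter of the time on JRuby on a four-core box, while on a green-threaded implementation it takes about as long as running the work serially:

```ruby
# On JRuby, Ruby threads map to native threads, so CPU-bound work in several
# threads really does run in parallel across cores within one process. On a
# green-threaded implementation the same code runs, but only one thread makes
# progress at a time.

def busy_work
  (1..2_000_000).inject(0) { |sum, i| sum + i }
end

start = Time.now
threads = 4.times.map { Thread.new { busy_work } }
threads.each { |t| t.join }
puts "4 CPU-bound threads finished in #{Time.now - start} seconds in one process"
```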

This means what it sounds like: Rails deployments on JRuby will use 1/Nth the amount of memory they use now, where N is the number of thread-unsafe Rails instances currently required to handle concurrent requests. Even compared to green-threaded implementations running thread-safe Rails, it will likely use 1/Mth the memory, where M is the number of cores, since it can parallelize happily across cores with only "one" instance.

Q: Isn't that a huge deal?

A: Yes, that's a huge deal. I know existing JRuby on Rails users are going to be absolutely thrilled about it. And hopefully more folks will consider using JRuby on Rails in production as a result.

And it doesn't end at resource utilization in JRuby's case. With a single Rails instance, JRuby will be able to "warm up" much more quickly, since code we compile and optimize at runtime will immediately be applicable to all incoming requests. The "throttling" we've had to do for some optimizations (to reduce overall memory consumption) may no longer even be needed. Existing JDBC connection pooling support will be more reliable and more efficient, even allowing connection sharing from application to application as well as across instances. And it will put Rails on JRuby on par with frameworks that have (probably) always been thread-safe, like Merb, Grails, and all the Java-based frameworks.
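
To make the pooling piece concrete, here's roughly what sizing the pool looks like with Rails 2.2's connection spec. The adapter name, credentials, and numbers are made-up examples, not a recommendation; double-check the keys against your own setup.

```ruby
# Illustrative only: with Rails 2.2's connection pool, the pool size comes
# from the connection spec (the :pool key).
ActiveRecord::Base.establish_connection(
  :adapter  => "jdbcmysql",          # e.g. activerecord-jdbcmysql-adapter on JRuby
  :database => "myapp_production",
  :username => "myapp",
  :password => "secret",
  :pool     => 10                    # connections shared among request threads
)
```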

Naturally, I'm thrilled. :)

13 comments:

  1. Great summary, thanks for writing it up!

  2. IronRuby will reap the same benefits as JRuby. Awesome write up!

  3. Concurrency is having more than one operation active (but possibly all suspended) at a time. Parallelism is having more than one operation making forward progress at the same time. Don't confuse the two.

    Because Ruby is green-threaded, and a thread may be switched out when a request is blocked on IO, the requests are in fact able to run concurrently. It's just that they're not running in parallel.

  4. That's the best thing I ever heard!!!

  5. Unfortunately, this:

    "If a given request ends up blocking on IO ... due to DB hits ... Ruby will now have the option of scheduling another request to execute."

    is not correct, at least not for database client libraries like MySQL/Ruby, which call blocking C APIs and block the whole Ruby interpreter.

    We need a lot more work on non-blocking client libraries before Rails will be able to get much from thread-level concurrency IMO.

  6. @Matth

    Technically true, but there's something you're not considering: Ruby O/RMs spend close to half their time in the interpreter. So just because you're inside AR doesn't mean you're necessarily in blocking client driver IO.

    DataMapper sees quite a big boost from multiple threads because of this very issue.

    It's also true that the boost isn't anywhere near what it might be with native threads or asynchronous drivers, though.

  7. With the help of non-blocking DB drivers for MRI [1] [2], this could be great news, since even MRI will be able to benefit [albeit 1.9, but hey, we do what we can].
    -=R
    [1] http://oldmoe.blogspot.com/2008/07/faster-io-for-ruby-with-postgres.html
    [2] http://github.com/tqbf/asymy/tree/master

  8. If you want a real thread-safe solution in Ruby, look at Merb + DataMapper. They've been doing that for a while now.

    -Matt

  9. A blog idea for you: so how would we be able to run JRuby on Rails as one multithreaded instance?

    Does it mean the dispatcher needs to be adjusted for this to happen or is there something that already works for this?

  10. @pope: just download Rails 2.2 when it comes out and run it on JRuby. It should work out of the box, I think, and across all cores.

  11. What about file upload? Is Rails 2.2 capable of handling multiple file uploads without tying up the Mongrel processes?

  12. You bet that we are thrilled!

    Great write-up, Charlie.
