Wednesday, February 27, 2008

University of Tokyo and JRuby Team to Collaborate on MVM

Finally the full announcement is available. We're pretty excited about this one...
The University of Tokyo and Sun Microsystems Commence Joint Research Projects on High Performance Computing and Web-based Programming Languages

Improving Efficiency and Simplicity of Fortress, Ruby and JRuby Languages is Current Focus for Unique Academic-Corporate Collaborative Model

TOKYO and SANTA CLARA, Calif. -- February 27, 2008 -- The University of Tokyo and Sun Microsystems, Inc. (Nasdaq: JAVA) today announced two joint research projects that will focus on High-Performance Computing (HPC) and Web-based programming languages.

...

The two research topics are:
  • Development of a library based on skeletal parallel programming in Fortress
  • Implementation of a multiple virtual machine (MVM) environment on Ruby and JRuby

Some of you knew about this from Tim Bray's rather cryptic announcement at RubyConf. The basic idea here is that we on the JRuby team and folks in Professor Ikuo Takeuchi's group at the University of Tokyo will be cooperating to come up with an MVM specification and implementations for the Ruby 1.9 and JRuby codebases. Tom and I have had brief discussions with Professor Koichi Sasada (ko1, creator of YARV) about this already, and there are mailing lists and wikis already running. I've also made sure that Evan Phoenix, of Rubinius, is included in the discussions. Like JRuby, Rubinius already has most of the MVM features implemented, but the eventual form they'll take and the API they'll present are still up in the air.

This should be an exciting collaboration between Sun and U Tokyo, as well as between the three most functional next-generation Ruby implementations. I'll certainly keep you posted on all MVM developments as they come up, and I welcome any comments or suggestions you might have.

Tuesday, February 26, 2008

JRuby in Google Summer of Code 2008

Greetings!

Google's Summer of Code for 2008 is starting up again, and we're looking for folks to submit proposals. The JRuby community, Sun, or I (or someone else) will sign up as a mentoring organization, so start thinking about or discussing possible proposals. Here are a few ideas to get you started (and hopefully, there are many other ideas out there not on this list):
  • Writing a whole suite of specs for current Java integration behavior...and then expanding that suite to include behavior we want to add. This work would go hand-in-hand with a rework of Java Integration we're likely to start soon.
  • Collect and help round out all the profiling/debugging/etc tools for JRuby and get them to final releasable states. There are several projects in the works, but most of them have stalled, and folks need better debugging support. This project could also include simply working with existing IDEs (NetBeans, etc) to figure out how to get them to debug compiled Ruby code correctly (currently they won't step into .rb files).
  • Continue work on an interface-compatible RMagick port. There's already RMagickJR, which has had a lot of work put into it, but nobody's had time to continue it. A working RMagick gem would ease migration for lots of folks using Ruby.
  • Putting together a definitive set of fine- and coarse-grained benchmarks for Rails. On most benchmarks JRuby has been faster than Ruby 1.8.6...and yet higher Rails performance has been elusive. We need better benchmarks and better visibility into core Rails. Bonus work: help nail down what's slower about JRuby.
  • Survey all existing JRuby extensions and put together an official public API based on core JRuby methods they're using. This would help us reduce the hassle of migrating extensions across JRuby versions.
Please, anyone else who has ideas, feel free to post them here or on the mailing list for discussion. And if you have a proposal, go ahead and mail the JRuby dev list directly.

Update: Martin Probst added this idea in the comments:
Another idea might be to "fix" RDoc, whatever that means. That's not really JRuby centric, but still a very worthwhile task, I think.

A documentation system that allows easy doc writing (Wiki-alike) and provides a better view on the actual functionality you can find in a certain instance would be really helpful. Plus a decent search feature.
And that makes me think of another idea:
  • Add RDoc comments to all the JRuby versions of Ruby core methods, and get RI/RDocs generating as part of JRuby distribution builds. Then we could ship RI and have it work correctly. Ola Bini has already started some of the work, creating an RDoc annotation we can add to our Ruby-bound methods.
Update 2: sgwong in the comments suggested implementing the win32ole library for JRuby. This would also be an excellent contribution, since there are already libraries like Jacob to take some of the pain out of it, and it would be great to have it working on JRuby. And again, this makes me think of a few additional options:
  • Implement a full, compatible set of Apple Cocoa bindings for JRuby. You could use JNA, and I believe there are already Cocoa bindings for Java you could reuse as well, but I'm not familiar with them.
  • Complete implementation of the DL library (Ruby's stdlib for programmatically loading and calling native dynamic libraries) and/or Rubinius's FFI (same thing, with a somewhat tidier interface); a rough sketch of what DL usage looks like is below. Here too there's lots of help: I've already partially implemented DL using JNA, and it wouldn't be hard to finish it and/or implement Rubinius's FFI. And implementing Rubinius's FFI would have the added benefit of allowing JRuby to share some of Rubinius's library wrappers.
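
For a feel of what's being asked for here, this is roughly what using the DL stdlib looks like on MRI 1.8 today (a sketch from memory; the "libc.so.6" path is Linux-specific and would differ on other platforms):

require 'dl/import'

module LibC
  extend DL::Importable
  dlload "libc.so.6"            # load the native library by name
  extern "int getpid()"         # declare the C function we want to call
end

puts LibC.getpid                # calls straight into libc, no C extension needed

A JRuby DL port would need to provide the same surface on top of JNA; Rubinius's FFI exposes a similar idea with a tidier declaration style.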

Sunday, February 24, 2008

Ruby's Thread#raise, Thread#kill, timeout.rb, and net/protocol.rb libraries are broken

(NOTE: Due to an issue migrating my blog from one Google account to another, this URL now points at a *copy* of the original post. If you would like to add further comments, please do so there.)

I'm taking a break from some bug fixing to bring you this public service announcement:

Ruby's Thread#raise, Thread#kill, and the timeout.rb standard library based on them are inherently broken and should not be used for any purpose. And by extension, net/protocol.rb and all the net/* libraries that use timeout.rb are also currently broken (but they can be fixed).

I will explain, starting with timeout.rb. You see, timeout.rb allows you to specify that a given block of code should only run for a certain amount of time. If it runs longer, an error is raised. If it completes before the timeout, all is well.
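
For reference, typical usage looks something like this (a minimal sketch; the two-second limit and the sleep are just stand-ins for real work):

require 'timeout'

begin
  Timeout.timeout(2) do
    sleep 5   # pretend this is a slow network call or computation
  end
rescue Timeout::Error
  puts "gave up after two seconds"
end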

Sounds innocuous enough, right? Well, it's not. Here's the code:
def timeout(sec, exception=Error)
  return yield if sec == nil or sec.zero?
  raise ThreadError, "timeout within critical session" if Thread.critical
  begin
    x = Thread.current
    y = Thread.start {
      sleep sec
      x.raise exception, "execution expired" if x.alive?
    }
    yield sec
    #    return true
  ensure
    y.kill if y and y.alive?
  end
end

So you call timeout with a number of seconds, an optional exception type to raise, and the code to execute. A new thread is spun up (for every invocation, I might add) and set to sleep for the specified number of seconds, while the main/calling thread yields to your block of code.

If the code completes before the timeout thread wakes up, all is well. The timeout thread is killed and the result of the provided block of code is returned.

If the timeout thread wakes up before the block has completed, it tells the main/calling thread to raise a new instance of the specified exception type.

All this is reasonable if we assume that Thread#kill and Thread#raise are safe. Unfortunately, they're provably unsafe.

Here's a reduced example:
main = Thread.current
timer = Thread.new { sleep 5; main.raise }
begin
  lock(some_resource)
  do_some_work
ensure
  timer.kill
  unlock_some_resource
end

Here we have a simple timeout case. A new thread is spun up to wait for five seconds. A resource is acquired (in this case a lock) and some work is performed. When the work has completed, the timer is killed and the resource is unlocked.

The problem, however, is that you can't guarantee when the timer might fire.

In general, with multithreaded applications, you have to assume that cross-thread events can happen at any point in the program. So we can start by listing a number of places where the timer's raise call might actually fire in the main part of the code, between "begin" and "ensure".
  1. It could fire before the lock is acquired
  2. It could fire while the lock is being acquired (potentially corrupting whatever resource is being locked, but we'll ignore that for the moment)
  3. It could fire after the lock is acquired but before the work has started
  4. It could fire while the work is happening (presumably the desired effect of the timeout, but it also suffers from potential data corruption issues)
  5. It could fire immediately after the work completes but before entering the ensure block
Other than the data corruption issues (which are very real concerns) none of these is particularly dangerous. We could even assume that the lock is safe and the work being done with the resource is perfectly synchronized and impossible to corrupt. Whatever. The bad news is what happens in the ensure block.

If we assume we've gotten through the main body of code without incident, we now enter the ensure. The main thread is about to kill the timeout thread, when BAM, the raise call fires. Now we're in a bit of a predicament. We're already outside the protected body, so the remaining code in the ensure is going to fail. What's worse, we're about to leave a resource locked that may never get unlocked, so even if we can gracefully handle the timeout error somewhere else, we're in trouble.

What if we move the timer kill inside the protected body, to ensure we kill the timer before proceeding to the lock release?
main = Thread.current
timer = Thread.new { sleep 5; main.raise }
begin
  lock(some_resource)
  do_some_work
  timer.kill
ensure
  unlock_some_resource
end

Now we have to deal with the flip side of the coin: if the work we're performing raises an exception, we won't kill the timer thread, and all hell breaks loose. Specifically, after our lock has been released and we've bubbled the exception somewhere up into the call stack, BAM, the raise call fires. Now it's anybody's guess what we've screwed up in our system. And the same situation applies to any thread you might want to call Thread#raise or Thread#kill against: you can't make any guarantees about what damage you'll do.

There's a good FAQ in the Java SE platform docs entitled Why Are Thread.stop, Thread.suspend, Thread.resume and Runtime.runFinalizersOnExit Deprecated? (http://java.sun.com/j2se/1.5.0/docs/guide/misc/threadPrimitiveDeprecation.html), which covers this phenomenon in more detail. You see, in its early days Java had all these same operations: killing a thread with Thread.stop(), causing a thread to raise an arbitrary exception with Thread.stop(Throwable), and a few others for suspending and resuming threads. But they were a mistake, and in any current Java SE platform implementation these methods no longer function.

It is provably unsafe to be able to kill a target thread or cause it to raise an exception arbitrarily.

So what about net/protocol.rb? Here's the relevant code, used to fill the read buffer from the protocol's socket:
def rbuf_fill
  timeout(@read_timeout) {
    @rbuf << @io.sysread(8196)
  }
end
This is from JRuby's copy; MRI performs a read of 1024 bytes at a time (spinning up a thread for each) and Rubinius has both a size modification and a timeout.rb modification to use a single timeout thread. But the problems with timeout remain; specifically, if you have any code that uses net/protocol (like net/http, which I'm guessing a few of you rely on), you have a chance that a timeout error might be raised in the middle of your code. You are all rescuing Timeout::Error when you use net/http to read from a URL, right? (A minimal example of what that looks like follows the list of suggestions below.) What, you aren't? Well, you'd better go add that to every piece of code you have that calls net/http, and while you're at it add it to every other library in the world that uses net/http. And then you can move on to the other net/* protocols and third-party libraries that use timeout.rb. Here's a quick list to get you started: http://www.google.com/codesearch?as_q=require+%28%27%7C%22%29timeout%28.rb%29*%28%27%7C%22%29&btnG=Search+Code&hl=en&as_lang=ruby&as_license_restrict=i&as_license=&as_package=&as_filename=&as_case=

Ok, so you don't want to do all that. What are your options? Here are a few suggestions to help you on your way:
  1. Although you don't have to take my word for it, eventually you're going to have to accept the truth. Thread#kill, Thread#raise, timeout.rb, and net/protocol.rb all suffer from these problems. net/protocol.rb could be fixed to use nonblocking IO (select), as could, I suspect, most of the other libraries, but there is no safe way to use Thread#kill or Thread#raise. Let me repeat that: there is no safe way to use Thread#kill or Thread#raise.
  2. Start lobbying the various Ruby implementations to eliminate Thread#kill and Thread#raise (and while you're at it, eliminate Thread#critical= as well, since it's probably the single largest thing preventing Ruby from being both concurrently-threaded and high-performance).
  3. Start lobbying the library and application maintainers using Thread#kill, Thread#raise, and timeout.rb to stop.
  4. Stop using them yourself.
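
For completeness, here's roughly what defensively rescuing Timeout::Error around a net/http call looks like (a minimal sketch; the URL is just an example). Note that this only papers over the problem described above, it doesn't fix it:

require 'net/http'
require 'uri'
require 'timeout'

begin
  response = Net::HTTP.get_response(URI.parse("http://example.com/"))
  puts response.code
rescue Timeout::Error
  # The read timed out somewhere down in net/protocol.rb.
  puts "request timed out"
end
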
Now I want this post to be productive, so I'll give a brief overview of how to avoid using these seductively powerful and inherently unsafe features:
  • If you want to be able to kill a thread, write its code such that it periodically checks whether it should terminate. That allows the thread to safely clean up resources and shut itself down (there's a sketch of this pattern after this list).
  • If you need to time out an operation, you're going to have to find a different way to do it. With IO, it's pretty easy: just look up IO#select and learn to love it. With arbitrary code and libraries, you may be able to successfully lobby the authors to add timeout options, or you may be able to hook into them yourself. If you can't do either of those...well, you're SOL. Welcome to threads. I hope others will post suggestions in the comments.
  • If you think you can ignore this, think again. Eventually you're going to get bitten in the ass, be it from a long-running transaction that gets corrupted due to a timeout error or a filesystem operation that wipes out some critical file. You're not going to escape this one, so we should start trying to fix it now.
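
As an illustration of the first two suggestions, here's a minimal sketch of a worker that can be asked to stop and that uses IO#select to avoid blocking forever. The class and method names are mine, and the one-second poll interval and 8196-byte read are arbitrary:

class Worker
  def initialize
    @stop = false
  end

  # Called from another thread instead of Thread#kill.
  def stop
    @stop = true
  end

  def run(io)
    until @stop
      # Wait up to one second for data; IO.select returns nil on timeout,
      # so the loop gets a regular chance to notice the stop flag.
      ready = IO.select([io], nil, nil, 1)
      next if ready.nil?
      begin
        chunk = io.sysread(8196)
      rescue EOFError
        break
      end
      handle(chunk)
    end
    # We only reach this point by the thread's own choice, so any cleanup
    # here runs with the thread in a known, consistent state.
  end

  def handle(chunk)
    puts "read #{chunk.length} bytes"
  end
end

The consuming code would then call the worker's stop method and Thread#join rather than Thread#kill.
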
I'm hoping this will start a discussion eventually leading to these features being deprecated and removed from use. I believe Ruby's viability in an increasingly parallel computing world depends on getting threading and concurrency right, and Thread#raise, Thread#kill, and the timeout.rb library need to go away.

Thursday, February 21, 2008

FOSDEM

It's 5am in Brussels and I'm awake. That can only mean one thing. Time to blog!

This weekend I'm presenting JRuby at FOSDEM, the "Free and Open source Software Developers' European Meeting." I was invited to talk, and who could pass up an invitation to Belgium?

I've been trying to shake things up with my recent JRuby talks. At Lang.NET, I obviously dug into technical details a lot more because it was a crowd that could stomach such things. At acts_as_conference, I threw out the old JRuby on Rails demo and focused only on things that make JRuby on Rails different (better) than classic Ruby on Rails development, such as improved performance, easier deployment, and access to a world of Java libraries. And FOSDEM will include all new content as well: I'm spending Friday putting together a talk that discusses JRuby capabilities and status while simultaneously illustrating the impact community developers have had on the project. After all, it's an OSS conference, so I'll continue my recent trend and try to present something directly on-topic.

For those of you unable to attend, it turns out there will be a live video stream of some of the talks, including the whole Programming Languages track. I don't think I've ever been livestreamed before.

Saturday, February 16, 2008

JRuby RC2 Released; What's Next?

Today, Tom got the JRuby 1.1 RC2 release out. It's an astounding collection of performance improvements and compatibility fixes. Here's Tom's JRuby 1.1 RC2 announcement.

Let's recap a little bit:
  • There have been additional performance improvements in RC2 over RC1. Long story short, performance on most trivial numeric benchmarks approaches or exceeds Ruby 1.9's, while most others are on par. So in general, we have started using Ruby 1.9 performance as our baseline.
  • Unusual for an RC cycle are the 260 bugs we fixed since RC1. Yes, we're bending the rules of RCs a bit, but if we can fix that many bugs in just over a month, who can really fault us for doing so? Nobody can question that JRuby is more compatible with Ruby 1.8.6 than any other Ruby implementation available.
  • We've also included a number of memory use improvements that should reduce the overall memory usage of a JRuby instance and, perhaps more importantly, reduce the memory cost of JIT-compiled code (so-called "Perm Gen" space used by Ruby code that's been compiled to JVM bytecode). In quick measurements, an application running N instances of JRuby used roughly 1/Nth as much perm gen space as before.
So there are really two big messages that come along with this release. The first is that JRuby is rapidly becoming the Ruby implementation of choice if you want to make a large application as fast and as scalable as possible. The thousands of hours of time we've poured into compatibility, stability, and performance are really paying off. The second message comes from me to you:

If you haven't looked at JRuby, now's the time to do so.

All the raw materials are falling into place. JRuby is ready to be used and abused, and we're ready to take it to the next level for whatever kinds of applications you want to throw at it. And not only are we ready, it's our top priority for JRuby. We want to make JRuby work for you.

And that brings me to the "What's Next" portion of this post. Where do we go from here?

We've got a window for additional improvements and fixes before 1.1 final. That much is certain, and we want to fill that time with everything necessary to make JRuby 1.1 a success. So we need your help...find bugs, file reports, let us know about performance bottlenecks. And if you're able, help us field those reports by fixing issues and contributing patches.

But we also have a unique opportunity here. JRuby offers features not found in any other Ruby implementation, features we've only begun to utilize:

Inside a single JVM process, JRuby can be used to host any number of applications, scaling them to any number of concurrent users.

This one applies equally well to Rails and to other web frameworks rapidly gaining popularity like Merb. And new projects like the GlassFish gem are leading the way to simple, scalable, no-hassle hosting for Ruby web applications. But we're pretty resource-limited on the JRuby project. We've got two full-time committers and a handful of part-timers and after-hours contributors pouring their free time into helping out. For JRuby web app hosting to improve and meet your requirements, we're going to need your use cases, your experience, and your input. We're attempting to build the most scalable, best-performing Ruby web platform with JRuby, but we're doing it in true OSS style. No secrets, no hidden agendas. This is going to be your web platform, or our efforts are wasted. So what should it look like?

The GlassFish gem is the first step. It leverages some of the best features of the Java platform: high-speed asynchronous IO, native threading, built-in management and monitoring, and application namespace isolation (yes, even classloaders have a good side) to make Ruby web applications a push-button affair to deploy and scale. With one command, your app is production-ready. No mongrel packs to manage, no cluster of apps to monitor, and no WAR file relics to slow you down. "glassfish_rails myapp" and you're done; it's true one-step deployment. Unfortunately right now it only supports Rails. We want to make it not only a rock-solid and high-performance Rails server, but also a general-purpose "mod_ruby" for all Ruby web-development purposes. It's the right platform at the right time, and we're ready to take it to the next level. But we need you to try it out and let us know what it needs.

JRuby's performance regularly exceeds Ruby 1.8.6, and in many cases has started to exceed Ruby 1.9.

At this point I'm convinced JRuby will be able to claim the title of "fastest Ruby implementation", for some definition of "Ruby". And if we're not there yet, we will be soon. With most benchmarks meeting or exceeding Ruby 1.8.6 and many approaching or exceeding Ruby 1.9, we're feeling pretty good. What I've learned is that performance is important, but it's not the driving concern for most Ruby developers, especially those building web applications that are usually bounded by unrelated bottlenecks in IO or database access. But that's only what I've been able to gather from conversations with a few of you. Is there more we need to do? Should we keep going on performance, or are there other areas we should focus on? Do you have cases where JRuby's performance doesn't meet your needs?

JRuby's performance future is largely an open question right now. A stock JRuby install performs really well, "better enough" that many folks are using it for performance-sensitive apps already. We know there are bottlenecks, but we're solving them as they come up, and we're on the downward slope. Outside of bottlenecks, do we have room to grow? You bet we do. I've got prototype and "experimental" features already implemented and yet to be explored that will improve JRuby's performance even more. Of course there are always tradeoffs. You might have to accept a larger memory footprint, or you may have to turn off some edge-case features of Ruby that incur automatic performance handicaps. And some of the wildest improvements will depend on dynamic invocation support (hopefully in JDK 7) and a host of other features (almost certain to be available in the OpenJDK "Multi-language VM" subproject). But where performance improvements are needed, they're going to happen, and if I have any say they're going to happen in such a way that all other JVM languages can benefit as well. I'm looking to you folks to help us prioritize performance and point us in the right direction for your needs.

JRuby makes the JVM and the Java platform really shine, with excellent language performance and a "friendlier" face on top of all those libraries and all that JVM magic.

I think this is an area we've only just started to realize. Because JRuby is hosted on the JVM, we have access to the best JIT technology, the best GC technology, and the best collection of libraries ever combined into a single platform. Say what you will about Java. You may love the language or you may hate it...but it's becoming more and more obvious that the JVM is about a lot more than Java. JRuby is, to my knowledge, the only case of a non-JVM language being ported to the JVM and used for real-world, production deployments of a framework never designed with the JVM in mind. We've taken an unspecified single-implementation language (Matz's Ruby) and its key application framework (Rails) and delivered a hosting and deployment option that at least parallels the original and in many ways exceeds it. And this is only the first such case...the Jython guys now have Django working, and it's only a matter of time before Jython becomes a real-world usable platform for Python web development. And there's already work being done to make Merb--a framework inspired by Rails but without many of Rails' warts--run perfectly in JRuby. And it's all open source, from the JVM up. This is your future to create.

I think the next phase for JRuby will bring tighter, cleaner integration with the rest of the Java platform. Already you can call any Java-based library as though it were just another piece of Ruby code. But it's not as seamless as we'd like. Some issues, like choosing the right target method or coercing types appropriately, are issues all dynamic languages face when calling into statically typed code. Groovy has various features we're likely to copy, features like explicit casting and coercion of types and limited static type declaration at the edges. Frameworks like the DLR from Microsoft have a similar approach, since they've been designed to make new languages "first class" from day one. We will work to find ways to solve these sorts of problems for all JVM languages at the same time we solve them for JRuby. But there's also a lot that needs to come from Ruby users. What can we do to make the JVM and all those libraries work for you?
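
To give a flavor of what that integration looks like today, here's a minimal sketch (from memory, so treat the details as illustrative rather than as a reference):

require 'java'

# Java classes can be instantiated and called like ordinary Ruby objects.
list = java.util.ArrayList.new
list.add "hello from"
list.add "the JVM"
list.each { |item| puts item }   # Java collections support Ruby-style iteration

# Static methods work too.
puts java.lang.System.getProperty("java.version")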

I guess there's a simple bottom line to all this. JRuby is an OSS project driven mostly by community contributors (5 out of 8 committers working in their free time and hundreds of others contributing patches and bug reports), based on an OSS platform (not only OpenJDK, but a culture of Free software and open source that permeates the entire Java ecosystem), hosting OSS frameworks and libraries (Rails, Merb, and the host of other apps in the Ruby world). All this is meaningless without you and your applications. We're ready to pour our effort into making this platform work for you. Are you ready to help us?