Monday, November 19, 2012

Refining Ruby

What does the following code do?
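
(A sketch of the example; the exact names are my reconstruction, and Baz's superclass Quux will appear later in this post.)

    class Baz < Quux
      def do_something(str1, str2)
        str1.upcase + str2.upcase
      end
    end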

If you answered "it upcases two strings and adds them together, returning the result" you might be wrong because of a new Ruby feature called "refinements".

Let's start with the problem refinements are supposed to solve: monkey-patching.

Monkey-patching

In Ruby, all classes are mutable. Indeed, when you define a new class, you're really just creating an empty class and filling it with methods. The ability to mutate classes at runtime has been used (or abused) by many libraries and frameworks to decorate Ruby's core classes with additional (or replacement) behavior. For example, you might add a "camelize" method to String that knows how to convert under_score_names to camelCaseNames. This is lovingly called "monkey-patching" by the Ruby community.
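
For example, a classic monkey-patch of the camelize sort just mentioned:

    class String
      def camelize
        split("_").map(&:capitalize).join  # patches every String, everywhere
      end
    end

    "foo_bar".camelize  # => "FooBar"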

Monkey-patching can be very useful, and many patterns in Ruby are built around the ability to modify classes. It can also cause problems if a library patches code in a way the user does not expect (or want), or if two libraries try to apply conflicting patches. Sometimes, you simply don't want patches to apply globally, and this is where refinements come in.

Localizing Monkeypatches

Refinements have been discussed as a feature for several years, sometimes under the name "selector namespaces". In essence, refinements are intended to allow monkey-patching only within certain limited scopes, like within a library that wants to use altered or enhanced versions of core Ruby types without affecting code outside the library. A prime example is the ActiveSupport library that forms part of the core of Rails.

ActiveSupport provides a number of extensions (patches) to the core Ruby classes like String#pluralize, Range#overlaps?, and Array#second. Some of these extensions are intended for use by Ruby developers, as conveniences that improve the readability or conciseness of code. Others exist mostly to support Rails itself. In both cases, it would be nice if we could prevent those extensions from leaking out of ActiveSupport into code that does not want or need them.

Refinements

In short, refinements provide a way to make class modifications that are only seen from within certain scopes. In the following example, I add a "camelize" method to the String class that's only seen from code within the Foo class.
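
Here's a sketch of that example (the module name and the camelize body are my reconstruction; note that the pre-2.0 prototype allowed "using" inside a class body):

    module CamelCase
      refine String do
        def camelize
          split("_").map(&:capitalize).join
        end
      end
    end

    class Foo
      using CamelCase

      def camelize_string(str)
        str.camelize
      end
    end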



With the Foo class refined, we can see that the "camelize" method is indeed available within the "camelize_string" method but not outside of the Foo class.
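
Something like this (sketched):

    Foo.new.camelize_string("foo_bar")  # => "FooBar"
    "foo_bar".camelize                  # => NoMethodError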



On the surface, this seems like exactly what we want. Unfortunately, there's a lot more complexity here than meets the eye.

Ruby Method Dispatch

In order to do a method call in Ruby, a runtime simply looks at the target object's class hierarchy, searches for the method from bottom to top, and upon finding it performs the call. A smart runtime will cache the method to avoid performing this search every time, but in general the mechanics of looking up a method body are rather simple.

In an implementation like JRuby, we might cache the method at what's called the "call site"—the point in Ruby code where a method call is actually performed. In order to know that the method is valid for future calls, we perform two checks at the call site: that the incoming object is of the same type as for previous calls; and that the type's hierarchy has not been mutated since the method was cached.

Up to now, method dispatch in Ruby has depended solely on the type of the target object. The calling context has not been important to the method lookup process, other than to confirm that visibility restrictions are enforced (primarily for protected methods, since private methods are rejected for non–self calls). That simplicity has allowed Ruby implementations to optimize method calls and Ruby programmers to understand code by simply determining the target object and methods available on it.

Refinements change everything.

Refinements Basics

Let's revisit the camelize example again.
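
Here's the sketch from above once more:

    module CamelCase
      refine String do
        def camelize
          split("_").map(&:capitalize).join
        end
      end
    end

    class Foo
      using CamelCase

      def camelize_string(str)
        str.camelize
      end
    end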



The visible manifestation of refinements comes via the "refine" and "using" methods.

The "refine" method takes a class or module (the String class, in this case) and a block. Within the block, methods defined (camelize) are added to what might be called a patch set (a la monkey-patching) that can be applied to specific scopes in the future. The methods are not actually added to the refined class (String) except in a "virtual" sense when a body of code activates the refinement via the "using" method.

The "using" method takes a refinement-containing module and applies it to the current scope. Methods within that scope should see the refined version of the class, while methods outside that scope do not.

Where things get a little weird is in defining exactly what that scope should be and in implementing refined method lookup in such a way that does not negatively impact the performance of unrefined method lookup. In the current implementation of refinements, a "using" call affects all of the following scopes related to where it is called:
  • The direct scope, such as the top-level of a script, the body of a class, or the body of a method or block
  • Classes down-hierarchy from a refined class or module body
  • Bodies of code run via eval forms that change the "self" of the code, such as module_eval
It's worth emphasizing at this point that refinements can affect code far away from the original "using" call site. It goes without saying that refined method calls must now be aware of both the target type and the calling scope, but what of unrefined calls?

Dynamic Scoping of Method Lookup

Refinements (in their current form) basically cause method lookup to be dynamically scoped. In order to properly do a refined call, we need to know what refinements are active for the context in which the call is occurring and the type of the object we're calling against. The latter is simple, obviously, but determining the former turns out to be rather tricky.

Locally-applied refinements

In the simple case, where a "using" call appears alongside the methods we want to affect, the immediate calling scope contains everything we need. Calls in that scope (or in child scopes like method bodies) would perform method lookup based on the target class, a method name, and the hierarchy of scopes that surrounds them. The key for method lookup expands from a simple name to a name plus a call context.

Hierarchically-applied refinements

Refinements applied to a class must also affect subclasses, so even when we don't have a "using" call present we still may need to do refined dispatch. The following example illustrates this with a subclass of Foo (building off the previous example).
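
(A sketch, building on the Foo class above:)

    class Bar < Foo
      def camelize_strings(strs)
        strs.map { |s| s.camelize }
      end
    end

    Bar.new.camelize_strings(["foo_bar", "baz_quux"])  # => ["FooBar", "BazQuux"]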



Here, the camelize method is used within a "map" call, showing that refinements used by the Foo class apply to Bar, its method definitions, and any subscopes like blocks within those methods. It should be apparent now why my first example might not do what you expect. Here's my first example again, this time with the Quux class visible.
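
(Again a sketch, with the BadRefinement contents assumed from the description below:)

    module BadRefinement
      refine String do
        def upcase
          reverse
        end
      end
    end

    class Quux
      using BadRefinement
    end

    class Baz < Quux
      def do_something(str1, str2)
        str1.upcase + str2.upcase
      end
    end

    Baz.new.do_something("foo", "bar")  # => "oofrab", not "FOOBAR"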



The Quux class uses refinements from the BadRefinement module, effectively changing String#upcase to actually do String#reverse. By looking at the Baz class alone you can't tell what's supposed to happen, even if you are certain that str1 and str2 are always going to be String. Refinements have effectively localized the changes applied by the BadRefinement module, but they've also made the code more difficult to understand; the programmer (or the reader of the code) must know everything about the calling hierarchy to reason about method calls and expected results.

Dynamically-applied refinements

One of the key features of refinements is to allow block-based DSLs (domain-specific languages) to decorate various types of objects without affecting code outside the DSL. For example, an RSpec spec.
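
Something like this familiar shape (a sketch):

    describe MyClass do
      it "should be awesome" do
        MyClass.new.should be_awesome
      end
    end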



There are several calls here that we'd like to refine.
  • The "describe" method is called at the top of the script against the "toplevel" object (essentially a singleton Object instance). We'd like to apply a refinement at this level so "describe" does not have to be defined on Object itself.
  • The "it" method is called within the block passed to "describe". We'd like whatever self object is live inside that block to have an "it" method without modifying self's type directly.
  • The "should" method is called against an instance of MyClass, presumably a user-created class that does not define such a method. We would like to refine MyClass to have the "should" method only within the context of the block we pass to "it".
  • Finally, the "be_awesome" method—which RSpec translates into a call to MyClass#awesome?—should be available on the self object active in the "it" block without actually adding be_awesome to self's type.
In order to do this without having a "using" present in the spec file itself, we need to be able to dynamically apply refinements to code that might otherwise not be refined. The current implementation does this via Module#module_eval (or its argument-receiving brother, Module#module_exec).

A block of code passed to "module_eval" or "instance_eval" will see its self object changed from that of the original surrounding scope (the self at block creation time) to the target class or module. This is frequently used in Ruby to run a block of code as if it were within the body of the target class, so that method definitions affect the "module_eval" target rather than the code surrounding the block.
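
A quick demonstration of that rebinding:

    class Target; end

    Target.module_eval do
      def hello; "hi"; end  # self is Target here, so this defines Target#hello
    end

    Target.new.hello  # => "hi"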

We can leverage this behavior to apply refinements to any block of code in the system. Because refined calls must look at the hierarchy of classes in the surrounding scope, every call in every block in every piece of code can potentially become refined in the future, if the block is passed via module_eval to a refined hierarchy. The following simple case might not do what you expect, even if the String class has not been modified directly.
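
(A sketch:)

    def add_strings(str_ary)
      str_ary.inject { |sum, str| sum + str }
    end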



Because the "+" method is called within a block, all bets are off. The str_ary passed in might not be a simple Array; it could be any user class that implements the "inject" method. If that implementation chooses, it can force the incoming block of code to be refined. Here's a longer version with such an implementation visible.
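
Here's a sketch of what such a sneaky implementation might look like under the scoping rules described above (all names here are mine):

    module BadPlus
      refine String do
        def +(other)
          "BOO!"
        end
      end
    end

    class SneakyList
      using BadPlus

      def initialize(*items)
        @items = items
      end

      def inject(&block)
        result = @items.first
        @items[1..-1].each do |item|
          # module_exec rebinds the block's self to SneakyList,
          # dragging the BadPlus refinement along with it
          result = self.class.module_exec(result, item, &block)
        end
        result
      end
    end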



Suddenly, what looks like a simple addition of two strings produces a distinctly different result.
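
(Continuing the sketch:)

    add_strings(SneakyList.new("foo", "bar"))  # => "BOO!" rather than "foobar"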



Now that you know how refinements work, let's discuss the problems they create.

Implementation Challenges

Because I know that most users don't care if a new, useful feature makes my life as a Ruby implementer harder, I'm not going to spend a great deal of time here. My concerns revolve around the complexities of knowing when to do a refined call and how to discover those refinements.

Current Ruby implementations are all built around method dispatch depending solely on the target object's type, and much of the caching and optimization we do depends on that. With refinements in play, we must also search and guard against types in the caller's context, which makes lookup much more complicated. Ideally we'd be able to limit this complexity to only refined calls, but because "using" can affect code far away from where it is called, we often have no way to know whether a given call might be refined in the future. This is especially pronounced in the "module_eval" case, where code that isn't even in the same class hierarchy as a refinement must still observe it.

There are numerous ways to address the implementation challenges.

Eliminate the "module_eval" Feature

At present, nobody knows of an easy way to implement the "module_eval" aspect of refinements. The current implementation in MRI does it in a brute-force way, flushing the global method cache on every execution and generating a new, refined, anonymous module for every call. Obviously this is not a feasible direction to go; block dispatch will happen very frequently at runtime, and we can't allow refined blocks to destroy performance for code elsewhere in the system.

The basic problem here is that in order for "module_eval" to work, every block in the system must be treated as a refined body of code all the time. That means that calls inside blocks throughout the system need to search and guard against the calling context even if no refinements are ever applied to them. The end result is that those calls suffer complexity and performance hits across the board.

At the moment, I do not see (nor does anyone else see) an efficient way to handle the "module_eval" case. It should be removed.

Localize the "using" Call

No new Ruby feature should cause across-the-board performance hits; one solution is for refinements to be recognized at parse time. This makes it easy to keep existing calls the way they are and only impose refinement complexity upon method calls that are actually refined.

The simplest way to do this is also the most limiting and the most cumbersome: force "using" to only apply to the immediate scope. This would require every body of code to "using" a refinement if method calls in that body should be refined. Here are a couple of our previous examples with this modification.
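
(Sketched under the proposed strictly local "using"; this is not how any released Ruby behaves:)

    class Foo
      def camelize_string(str)
        using CamelCase
        str.camelize
      end
    end

    class Bar < Foo
      def camelize_strings(strs)
        using CamelCase
        strs.map do |s|
          using CamelCase   # blocks are their own scope, so they repeat it too
          s.camelize
        end
      end
    end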



This is obviously pretty ugly, but it makes implementation much simpler. In every scope where we see a "using" call, we simply force all future calls to honor refinements. Calls appearing outside "using" scopes do not get refined and perform calls as normal.

We can improve this by making "using" apply to child scopes as well. This still provides the same parse-time "pseudo-keyword" benefit without the repetition.
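
(Sketched under the proposed child-scope rule:)

    class Foo
      using CamelCase   # applies to this class body and all child scopes

      def camelize_string(str)
        str.camelize
      end
    end

    class Bar < Foo
      using CamelCase   # once per class body rather than once per method and block

      def camelize_strings(strs)
        strs.map { |s| s.camelize }
      end
    end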



Even better would be to officially make "using" a keyword and have it open a refined scope; that results in a clear delineation between refined and unrefined code. I show two forms of this below; the first opens a scope like "class" or "module", and the second uses a "do...end" block form.
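
(Hypothetical syntax; neither form parses in any Ruby today:)

    # scope-opening form, like "class" or "module"
    using CamelCase
      class Foo
        def camelize_string(str)
          str.camelize
        end
      end
    end

    # block form
    using CamelCase do
      class Foo
        def camelize_string(str)
          str.camelize
        end
      end
    end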



It would be fair to say that requiring more explicit scoping of "using" would address my concern about knowing when to do a refined call. It does not, however, address the issues of locating active refinements at call time.

Locating Refinements

In each of the above examples, we still must pass some state from the calling context through to the method dispatch logic. Ideally we'd only need to pass in the calling object, which is already passed through for visibility checking. This works for refined class hierarchies, but it does not work for the RSpec case, since the calling object in some cases is just the top-level Object instance (and remember we don't want to decorate Object).

It turns out that there's already a feature in Ruby that follows lexical scoping: constant lookup. When Ruby code accesses a constant, the runtime must first search all enclosing scopes for a definition of that constant. Failing that, the runtime will walk the self object's class hierarchy. This is similar to what we want for the simplified version of refinements.

If we assume we've localized refinements to only calls within "using" scopes, then at parse time we can emit something like a RefinedCall for every method call in the code. A RefinedCall would be special in that it uses both the containing scope and the target class to look up a target method. The lookup process would proceed as follows:
  1. Search the call's context for refinements, walking lexical scopes only
  2. If refinements are found, search for the target method
  3. If a refined method is found, use it for the call
  4. Otherwise, proceed with normal lookup against the target object's class
Because the parser has already isolated refinement logic to specific calls, the only change needed is to pass the caller's context through to method dispatch.
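
As a toy model of that four-step lookup (every name here is mine, not MRI's):

    LexicalScope = Struct.new(:refinements, :parent)
    # refinements maps a refined class to a table of method name => implementation

    def refined_call(scope, target, name, *args)
      s = scope
      while s                                    # 1. walk lexical scopes only
        table = s.refinements[target.class]
        if table && table[name]                  # 2-3. use a refined method if found
          return table[name].call(target, *args)
        end
        s = s.parent
      end
      target.public_send(name, *args)            # 4. fall back on normal lookup
    end

    refined   = LexicalScope.new({ String => { upcase: ->(s) { s.reverse } } }, nil)
    unrefined = LexicalScope.new({}, nil)
    refined_call(refined,   "hello", :upcase)  # => "olleh"
    refined_call(unrefined, "hello", :upcase)  # => "HELLO"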

Usability Concerns

There are indeed flavors of refinements that can be implemented reasonably efficiently, or at least implemented in such a way that unrefined code will not pay a price. I believe this is a requirement of any new feature: do no harm. But harm can come in a different form if a new feature makes Ruby code harder to reason about. I have some concerns here.

Let's go back to our "module_eval" case.
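
(The innocent-looking version from before:)

    def add_strings(str_ary)
      str_ary.inject { |sum, str| sum + str }
    end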



Because there's no "using" anywhere in the code, and we're not extending some other class, most folks will assume we're simply concatenating strings here. After all, why would I expect my "+" call to do something else? Why should my "+" call ever do something else here?

Ruby has many features that might be considered a little "magical". In most cases, they're only magic because the programmer doesn't have a good understanding of how they work. Constant lookup, for example, is actually rather simple...but if you don't know it searches both lexical and hierarchical contexts, you may be confused about where values are coming from.

The "module_eval" behavior of refinements simply goes too far. It forces every Ruby programmer to second-guess every block of code they pass into someone else's library or someone else's method call. The guarantees of standard method dispatch no longer apply; you need to know if the method you're calling will change what calls your code makes. You need to understand the internal details of the target method. That's a terrible, terrible thing to do to Rubyists.

The same goes for refinements that are active down a class hierarchy. You can no longer extend a class and know that methods you call actually do what you expect. Instead, you have to know whether your parent classes or their ancestors refine some call you intend to make. I would argue this is considerably worse than directly monkey-patching some class, since at least in that case every piece of code has a uniform view.

The problems are compounded over time, too. As libraries you use change, you need to again review them to see if refinements are in play. You need to understand all those refinements just to be able to reason about your own code. And you need to hope and pray two libraries you're using don't define different refinements, causing one half of your application to behave one way and the other half of your application to behave another way.

I believe the current implementation of refinements introduces more complexity than it solves, mostly due to the lack of a strict lexical "using". Rubyists should be able to look at a piece of code and know what it does based solely on the types of objects it calls. Refinements make that impossible.

Update: Josh Ballanco points out another usability problem: "using" only affects method bodies defined temporally after it is called. For example, the following code only refines the "bar" method, not the "foo" method.
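
(Sketched with the camelize refinement from earlier:)

    class Foo
      def foo
        "foo_bar".camelize   # NoMethodError; defined before "using" took effect
      end

      using CamelCase

      def bar
        "bar_baz".camelize   # => "BarBaz"
      end
    end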


This may simply be an artifact of the current implementation, or it may be specified behavior; it's hard to tell since there's no specification of any kind other than the implementation and a handful of tests. In any case, it's yet another confusing aspect, since it means the order in which code is loaded can actually change which refinements are active.

tl;dr

My point here is not to beat down refinements. I agree there are cases where they'd be very useful, especially given the sort of monkey-patching I've seen in the wild. But the current implementation overreaches; it provides several features of questionable value, while simultaneously making both performance and understandability harder to achieve. Hopefully we'll be able to work with Matz and ruby-core to come up with a more reasonable, limited version of refinements...or else convince them not to include refinements in Ruby 2.0.

Monday, October 15, 2012

So You Want To Optimize Ruby

I was recently asked for a list of "hard problems" a Ruby implementation really needs to solve before reporting benchmark numbers. You know...the sort of problems that might invalidate early perf numbers because they impact how you optimize Ruby. This post is a rework of my response...I hope you find it informative!

Fixnum to Bignum promotion

In Ruby, Fixnum math can promote to Bignum when the result is out of Fixnum's range. On implementations that use tagged pointers to represent Fixnum (MRI, Rubinius, MacRuby), the Fixnum range is somewhat less than the native machine word (32/64 bits), since tag bits are reserved. On JRuby, Fixnum is always a straight 64-bit signed value.

This promotion is a performance concern for a couple reasons:
  • Every math operation that returns a new Fixnum must be range-checked. This slows all Fixnum operations.
  • It is difficult (if not impossible) to predict whether a Fixnum math operation will return a Fixnum or a Bignum. Since Bignum is always represented as a full object (not a primitive or a tagged pointer) this impacts optimizing Fixnum math call sites.
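
For example, on 64-bit MRI 1.9, where Fixnum tops out at 2**62 - 1, promotion looks like this:

    n = 2 ** 61
    n.class        # => Fixnum
    (n * 4).class  # => Bignum (the multiply silently overflowed Fixnum's range)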

Floating-point performance

A similar concern is the performance of floating-point values. Most of the native implementations have tagged values for Fixnum, but only one I know of (MacRuby) uses tagged values for Float. This can skew expectations because an implementation may perform very well on integer math and considerably worse on floating-point math due to the objects created (and collected). JRuby uses objects for both Fixnum and Float, so performance is roughly equivalent (and slower than I'd like).

Closures

Any language that supports closures ("blocks" in Ruby) has to deal with efficiently accessing frame-local data from calls down-stack. In Java, both anonymous inner classes and the upcoming lambda feature treat frame-local values (local variables, basically) as immutable...so their values can simply be copied into the closure object or carried along in some other way. In Ruby, local variables are always mutable, so an eventual activation of a closure body needs to be able to write into its containing frame. If a runtime does not support arbitrary frame access (as is the case on the JVM) it may have to allocate a separate data structure to represent those frame locals...and that impacts performance.
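
A tiny illustration of why the frame must remain writable from a closure:

    def count_twice
      count = 0
      inc = lambda { count += 1 }  # the block writes the enclosing frame's local
      inc.call
      inc.call
      count                        # => 2; the frame observed both writes
    end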

Bindings and eval

The eval methods in Ruby can usually accept an optional binding under which to run. This means any call to binding must return a fully-functional execution environment, and in JRuby this means both eval and binding force a full deoptimization of the surrounding method body.

There's an even more unpleasant aspect to this, however: every block can be used as a binding too.

All blocks can be turned into Proc and used as bindings, which means every block in the system has to have full access to values in the containing call frame. Most implementers hate this feature, since it means that optimizing call frames in the presence of blocks is much more difficult. Because they can be used as a binding, that of course means literally all frame data must be accessible: local variables; frame-local $ variables like $~; constants lookup environment; method visibility; and so on.
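
For example, an innocent-looking Proc hands back its entire defining frame:

    def capture
      secret = 42
      proc { }              # an apparently boring block...
    end

    b = capture.binding     # ...still carries the frame it was created in
    eval("secret", b)       # => 42
    eval("secret = 1", b)   # and it's writable, too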

callcc and Continuation

JRuby doesn't implement callcc since the JVM doesn't support continuations, but any implementation hoping to optimize Ruby will have to take a stance here. Continuations obviously make optimization more difficult since you can branch into and out of execution contexts in rather unusual ways.

Fiber implementation

In JRuby, each Fiber runs on its own thread (though we pool the native thread to reduce Fiber spin-up costs). Other than that they operate pretty much like closures.

A Ruby implementer needs to decide whether it will use C-style native stack juggling (which makes optimizations like frame elimination trickier to implement) or give Fibers their own stacks in which to execute independently.
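
Whichever strategy is chosen, these are the suspend/resume semantics that must be supported:

    f = Fiber.new do
      x = Fiber.yield(1)  # suspends the fiber mid-method...
      x + 1
    end
    f.resume      # => 1
    f.resume(41)  # => 42; ...and resumes it with its stack intact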

Thread/frame/etc local $globals

Thread globals are easy, obviously. All(?) host systems already have some representation of thread-local values. The tricky ones are explicit frame globals like $~ and $_ and implicit frame-local values like visibility, etc.

In the case of $~ and $_, the challenge is not in representing accesses of them directly but in handling implicit reads and writes of them that cross call boundaries. For example, calling [] on a String and passing a Regexp will cause the caller's frame-local $~ (and related values) to be updated to the MatchData for the pattern match that happens inside []. There are a number of core Ruby methods like this that can reach back into the caller's frame and read or write these values. This obviously makes reducing or eliminating call frames very tricky.
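
Here's that String#[] behavior in action:

    def find_word
      "hello world"[/w\w+/]  # String#[] performs a regexp match internally...
      $~[0]                  # ...and has written MatchData into *this* frame's $~
    end
    find_word  # => "world"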

In JRuby, we track all core methods that read or write these values, and if we see those methods called in a body of code (the names, mind you...this is a static inspection), we will stand up a call frame for that body. This is not ideal. We would like to move these values into a separate stack that's lazily allocated only when actually needed, since methods that cross frames like String#[] force other methods like Array#[] to deoptimize too.

C extension support

If a given Ruby implementation is likely to fit into the "native" side of Ruby implementations (as opposed to implementations like JRuby or IronRuby that target an existing managed runtime), it will need to have a C extension story.

Ruby's C extension API is easier to support than some languages' native APIs (e.g. no reference-counting as in Python) but it still very much impacts how a runtime optimizes. Because the API needs to return forever-valid object references, implementations that don't give out pointers will have to maintain a handle table. The API includes a number of macros that provide access to object internals; they'll need to be simulated or explicitly unsupported. And the API makes no guarantees about concurrency and provides few primitives for controlling concurrent execution, so most implementations will need to lock around native downcalls.

An alternative for a new Ruby implementation is to expect extensions to be written in the host runtime's native language (Java or other JVM languages for JRuby; C# or other .NET languages for IronRuby, etc). However this imposes a burden on folks implementing language extensions, since they'll have to support yet another language to cover all Ruby implementations.

Ultimately, though, the unfortunate fact for most "native" impls is that regardless of how fast you can run Ruby code, the choke point is often going to be the C API emulation, since it will require a lot of handle-juggling and indirection compared to MRI. So without supporting the C API, there's a very large part of the story missing...a part of the story that accesses frame locals, closure bodies, bindings, and so on.

Of course if you can run Ruby code as fast as C, maybe it won't matter. :) Users can just implement their extensions in Ruby. JRuby is starting to approach that kind of performance for non-numeric, non-closure cases, but that sort of perf is not yet widespread enough to bank on.

Ruby 1.9 encoding support

Any benchmark that touches anything relating to binary text data must have encoding support, or you're really fudging the numbers. Encoding touches damn near everything, and can add a significant amount of overhead to String-manipulating benchmarks.

Garbage collection and object allocation

It's easy for a new impl to show good performance on benchmarks that do no allocation (or little allocation) and require no GC, like raw numerics (fib, tak, etc). MacRuby and Rubinius, for example, really shine here. But many impls have drastically different performance when an algorithm starts allocating objects. Very few applications are doing pure integer numeric algorithms, so object allocation and GC performance are an absolutely critical part of the performance story.

Concurrency / Parallelism

If you intend to be an impl that supports parallel thread execution, you're going to have to deal with various issues before publishing numbers. For example, threads can #kill or #raise each other, which in a truly parallel runtime requires periodic safepoints/pings to know whether a cross-thread event has fired. If you're not handling those safepoints, you're not telling the whole story, since they impact execution.

There's also the thread-safety of runtime structures to be considered. As an example, Rubinius until recently had a hard lock around a data structure responsible for invalidating call sites, which meant that its simple inline cache could see a severe performance degradation at polymorphic call sites (they've since added polymorphic caching to ameliorate this case). The thread-safety of a Ruby implementation's core runtime structures can drastically impact even straight-line, non-concurrent performance.

Of course, for an impl that doesn't support parallel execution (which would put it in the somewhat more limited realm of MRI), you can get away with GIL scheduling tricks. You just won't have a very good in-process scaling story.

Tracing/debugging

All current impls support tracing or debugging APIs, though some (like JRuby) require you to enable support for them via command-line or compile-time flags. A Ruby implementation needs to have an answer for this, since the runtime-level hooks required will have an impact...and may require users to opt-in.

ObjectSpace

ObjectSpace#each_object needs to be addressed before talking about performance. In JRuby, supporting each_object over arbitrary types was a major performance issue, since we had to track all objects in a separate data structure in case they were needed. We ultimately decided each_object would only work with Class and Module, since those were the major practical use cases (and tracking Class/Module hierarchies is far easier than tracking all objects in the system).

Depending on how a Ruby implementation tracks in-memory objects (and depending on the level of accuracy expected from ObjectSpace#each_object) this can impact how allocation logic and GC are optimized.

Method invalidation

Several implementations can see severe global effects due to methods like Object#extend blowing all global caches (or at least several caches), so you need to be able to support #extend in a reasonable way before talking about performance. Singleton objects also have a similar effect, since they alter the character of method caches by introducing new anonymous types at any time (and sometimes, in rapid succession).
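
For example, a single #extend mints a brand-new anonymous type:

    module Greeter
      def greet; "hi"; end
    end

    obj = Object.new
    obj.extend(Greeter)   # obj now has its own singleton class
    obj.singleton_class   # => a new anonymous type for method caches to contend with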

In JRuby, singleton and #extend effects are limited to the call sites that see them. I also have an experimental branch that's smarter about type identity, so simple anonymous types (that have only had modules included or extended into them) will not damage caches at all. Hopefully we'll land that in a future release.

Constant lookup and invalidation

I believe all implementations have implemented constant cache invalidation as a global invalidation, though there are other more complicated ways to do it. The main challenge is the fact that constant lookup is tied to both lexical scope and class hierarchy, so invalidating individual constant lookup sites is usually infeasible. Constant lookup is also rather tricky and must be implemented correctly before talking about the performance of any benchmark that references constants.

Rails

Finally, regardless of how awesome a new Ruby implementation claims to be, most users will simply ask "but does it run Rails?" You can substitute your favorite framework or library, if you like...the bottom line is that an awesome Ruby implementation that doesn't run any Ruby applications is basically useless. Beware of crowing about your victory over Ruby performance before you can run code people actually care about.

Wednesday, September 26, 2012

Explanation of Warnings From MRI's Test Suite

JRuby has, for some time now, run the same test suite as MRI (C Ruby, Matz's Ruby). Because not all tests pass, we use minitest-excludes to mask out the failures, and over time we unmask stuff as we fix it.

However, there are a number of warnings we get from the suite that are nonfatal and unmaskable. I thought I'd show them to you and tell their stories.

JRuby 1.9 mode only supports the `psych` YAML engine; ignoring `syck`

When we started implementing support for the new "psych" YAML engine that Aaron Patterson created (atop libyaml) for Ruby 1.9, we decided that we would not support the broken "syck" engine anymore. The libyaml version is strictly YAML spec compliant, and this is our contribution to ridding the world of "syck"'s broken YAML forever.

GC.stress= does nothing on JRuby

JRuby does not have direct control over the JVM's GC, and so we can't implement things like GC.stress=, which MRI uses to put the GC into "stress" mode (GCing much more frequently to better test GC stability and behavior). There are flags for the JVM to do this sort of testing, but since we don't really need to test the JVM's GC for correctness and stability, we have not exposed those flags directly.

This flag is used in a number of MRI tests to force GC to happen more often and/or to actually test GC behaviors.

SAFE levels are not supported in JRuby

JRuby does not support standard Ruby's security model, "safe levels", because we believe safe levels are a flawed, too-coarse mechanism. On JRuby, you can use standard Java security policies.

We have debated mapping the various Ruby safe levels to equivalent sets of Java security permissions, but have never gotten around to it.

GC.enable does nothing on JRuby / GC.disable does nothing on JRuby

There's no standard API on the JVM to disable the garbage collector completely, so GC.enable and GC.disable do nothing in JRuby.

It's also interesting to note that while you can request a GC run from the JVM by calling System.gc, JRuby also stubs out Ruby's GC.start. We opted to do this because GC.start is used in some Ruby libraries as a band-aid around Ruby's sometimes-slow GC, but the same call on JRuby is both unnecessary (because GC overhead is rarely a problem) and a major performance hit (because it triggers a full GC over the entire heap).

Sunday, September 16, 2012

An experiment in static compilation of Ruby: FASTRUBY!

While at GoGaRuCo this weekend, I finally made good on an experiment I had been thinking about for a while: a static compiler for Ruby. I thought I'd share it with you good people today.

First we have a simple Ruby script with a class in it:
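
(The original script isn't reproduced here; given the _lt_ and _plus_ mangling and the recursion described below, it was something along these lines:)

    class Hello
      def fib(n)
        if n < 2
          n
        else
          fib(n - 1) + fib(n - 2)
        end
      end
    end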



We compile it with fastruby, and it produces two .java source files: Hello.java and RObject.java.

Hello.java implements the same methods the Ruby class defines in the script, and makes the same calls (with some mangling for method names that are invalid in Java, like "+" becoming _plus_ and "<" becoming _lt_).



RObject.java implements stubs for all method names seen in the script. As a result, all dynamic calls can just be virtual invocations against RObject. Classes that implement one of the methods will just work and the call is direct. Classes that don't implement the called method will raise an error.



RKernel comes with fastruby, and provides Kernel-level methods like "puts", plus methods for coercing to Java types like toBoolean and toString. It also caches some built-in singleton values like nil.




And there's a few other classes for this script to work. It should be easy to see how we could fill them out to do everything the equivalent Ruby classes do.




I don't have any support for a "main" method yet, so I wrote a little runner script to test it.


And away we go!


This is about 30% faster than JRuby with invokedynamic. It is not doing any bounds-checking (for rolling over to Bignum), but it is also not caching 1...256 Fixnum objects like JRuby does, nor caching them in any calls along the way (note that it creates three new RFixnums for every recursion that JRuby would not recreate). I call that pretty good.

Obviously because this is designed to compile the whole system at once, we could also emit optimized versions of methods that look like they're doing math. That is yet to come, if I continue this little experiment at all.

There's also some fun possibilities here. By specifying Java types, the compiler could add normal Java methods. Implementing interfaces could be done directly. And Android applications built with this tool would be entirely statically optimizable, only shipping the small amount of code they actually call and having a very minimal runtime.

Pretty neat?

Tuesday, September 4, 2012

Avoiding Hash Lookups in a Ruby Implementation

I had an interesting realization tonight: I'm terrified of hash tables. Specifically, my work on JRuby (and even more directly, my work optimizing JRuby) has made me terrified to ever consider using a hash table in the hot path of any program or piece of code if there's any possibility of eliminating it. And what I've learned over the years is that the vast majority of execution-related hash tables (as opposed to data-related tables, whose contents really are dynamically sourced) are totally unnecessary.

Some background might be interesting here.

Hashes are a Language Designer's First Tool

Anyone who's ever designed a simple language knows that pretty much everything you do is trivial to implement as a hash table. Dynamically-expanding tables of functions or methods? Hash table! Variables? Hash table! Globals? Hash table!

In fact, some languages never graduate beyond this phase and remain essentially gobs and gobs of hash tables even in fairly recent implementations. I won't name your favorite language here, but I will name one of mine: Ruby.

Ruby: A Study in Hashes All Over the Freaking Place

As with many dynamic languages, early (for some definition of "early") implementations of Ruby used hash tables all over the place. Let's just take a brief tour through the many places hash tables are used in Ruby 1.8.7.

(Author's note: 1.8.7 is now, by most measures, the "old" Ruby implementation, having been largely supplanted by the 1.9 series which boasts a "real" VM and optimizations to avoid most hot-path hash lookup.)

In Ruby (1.8.7), all of the following are (usually) implemented using hash lookups (and of these, many are hash lookups nearly every time, without any caching constructs):
  • Method Lookup: Ruby's class hierarchy is essentially a tree of hash tables that contain, among other things, methods. Searching for a method involves searching the target object's class. If that fails, you must search the parent class, and so on. In the absence of any sort of caching, this can mean you search all the way up to the root of the hierarchy (Object or Kernel, depending what you consider root) to find the method you need to invoke. This is also known as "slow".
  • Instance Variables: In Ruby, you do not declare ahead of time what variables a given class's object instances will contain. Instead, instance variables are allocated as they're assigned, like a hash table. And in fact, most Ruby implementations still use a hash table for variables under some circumstances, even though most of these variables can be statically determined ahead of time or dynamically determined (to static ends) at runtime.
  • Constants: Ruby's constants are actually "mostly" constant. They're a bit more like "const" in C, assignable once and never assignable again. Except that they are assignable again through various mechanisms. In any case, constants are also not declared ahead of time and are not purely a hierarchically-structured construct (they are both lexically and hierarchically scoped), and as a result the simplest implementation is a hash table (or chains of hash tables), once again.
  • Global Variables: Globals are frequently implemented as a top-level hash table even in modern, optimized language. They're also evil and you shouldn't use them, so most implementations don't even bother making them anything other than a hash table.
  • Local Variables: Oh yes, Ruby has not been immune to the greatest evil of all: purely hash table-based local variables. A "pure" version of Python would have to do the same, although in practice no implementations really support that (and yes, you can manipulate the execution frame to gain "hash like" behavior for Python locals, but you must surrender your Good Programmer's Card if you do). In Ruby's defense, however, hash tables were only ever used for closure scopes (blocks, etc), and no modern implementations of Ruby use hash tables for locals in any way.
There are other cases (like class variables) that are less interesting than these, but this list serves to show how easy it is for a language implementer to fall into the "everything's a hash, dude!" hole, only to find they have an incredibly flexible and totally useless language. Ruby is not such a language, and almost all of these cases can be optimized into largely static, predictable code paths with nary a hash calculation or lookup to be found.

How? I'm glad you asked.

JRuby: The Quest For Fewer Hashes

If I were to sum up the past 6 years I've spent optimizing JRuby (and learning how to optimize dynamic languages) it would be with the following phrase: Get Rid Of Hash Lookups.

When I tweeted about this realization yesterday, I got a few replies back about better hashing algorithms (e.g. "perfect" hashes) and a few replies from puzzled folks ("what's wrong with hashes?"), which made me realize that it's not always apparent how unnecessary most (execution-related) hash lookups really are (and from now on, when I talk about unnecessary or optimizable hash lookups, I'm talking about execution-related hash lookups; you data folks can get off my back right now).

So perhaps we should talk a little about why hashes are bad in the first place.

What's Wrong With a Little Hash, Bro?

The most obvious problem with using hash tables is the mind-crunching frustration of finding THE PERFECT HASH ALGORITHM. Every year there's a new way to calculate String hashes, for example, that's [ better | faster | securer | awesomer ] than all precedents. JRuby, along with many other languages, actually released a security fix last year to patch the great hash collision DoS exploit so many folks made a big deal about (while us language implementers just sighed and said "maybe you don't actually want a hash table here, kids"). Now, the implementation we put in place has again been "exploited" and we're told we need to move to cryptographic hashing. Srsly? How about we just give you a crypto-awesome-mersenne-randomized hash impl you can use for all your outward-facing hash tables and you can leave us the hell alone?

But I digress.

Obviously the cost of calculating hash codes is the first sin of a hash table. The second sin is deciding how, based on that hash code, you will distribute buckets. Too many buckets and you're wasting space. Too few and you're more likely to have a collision. Ahh, the intricate dance of space and time plagues us forever.

Ok, so let's say we've got some absolutely smashing hash algorithm and foresight enough to balance our buckets so well we make Lady Justice shed a tear. We're still screwed, my friends, because we've almost certainly defeated the prediction and optimization capabilities of our VM or our M, and we've permanently signed over performance in exchange for ease of implementation.

It is conceivable that a really good machine can learn our hash algorithm really well, but in the case of string hashing we still have to walk some memory to give us reasonable assurance of unique hash codes. So there's performance sin #1 violated: never read from memory.

Even if we ignore the cost of calculating a hash code, which at worst requires reading some object data from memory and at best requires reading a cached hash code from elsewhere in memory, we have to contend with how the buckets are implemented. Most hash tables implement the buckets as either of the typical list forms: an array (contiguous memory locations in a big chunk, so each element must be dereferenced...O(1) complexity) or a linked list (one entry chaining to the next through some sort of memory dereference, leading to O(N) complexity for searching collided entries).

Assuming we're using simple arrays, we're still making life hard for the machine since it has to see through at least one and possibly several mostly-opaque memory references. By the time we've got the data we're after, we've done a bunch of memory-driven calculations to find a chain of memory dereferences. And you wanted this to be fast?

Get Rid Of The Hash

Early attempts (of mine and others) to optimize JRuby centered around making hashing as cheap as possible. We made sure our tables only accepted interned strings, so we could guarantee they'd already calculated and cached their hash values. We used the "programmer's hash", switch statements, to localize hash lookups closer to the code performing them, rather than trying to balance buckets. We explored complicated implementations of hierarchical hash tables that "saw through" to parents, so we could represent hierarchical method table relationships in (close to) O(1) complexity.

But we were missing the point. The problem was in our representing any of these language features as hash tables to begin with. And so we started working toward the implementation that has made JRuby actually become the fastest Ruby implementation: eliminate all hash lookups from hot execution paths.

How? Oh right, that's what we were talking about. I'll tell you.

Method Tables

I mentioned earlier that in Ruby, each class contains a method table (a hash table from method name to a piece of code that it binds) and method lookup proceeds up the class hierarchy. What I didn't tell you is that both the method tables and the hierarchy are mutable at runtime.

Hear that sound? It's the static-language fanatics' heads exploding. Or maybe the "everything must be mutable always forever or you are a very bad monkey" fanatics. Whatever.

Ruby is what it is, and the ability to mix in new method tables and patch existing method tables at runtime is part of what makes it attractive. Indeed, it's a huge part of what made frameworks like Rails possible, and also a huge reason why other more static (or more reasonable, depending on how you look at it) languages have had such difficulty replicating Rails' success.

Mine is not to reason why. Mine is but to do and die. I have to make it fast.

Proceeding from the naive implementation, there are certain truths we can hold at various times during execution:
  • Most method table and hierarchy manipulation will happen early in execution. This was true when I started working on JRuby and it's largely true now, in no small part due to the fact that optimizing method tables and hierarchies that are wildly different all the time is really, really hard (so no implementer does it, so no user should do it). Before you say it: even prototype-based languages like JavaScript that appear to have no fixed structure do indeed settle into a finite set of predictable, optimizable "shapes" which VMs like V8 can take advantage of.
  • When changes do happen, they only affect a limited set of observers. Specifically, only call sites (the places where you actually make calls in code) need to know about the changes, and even they only need to know about them if they've already made some decision based on the old structure.
So we can assume method hierarchy structure is mostly static, and when it isn't there's only a limited set of cases where we care. How can we exploit that?

First, we implement what's called an "inline cache" at the call sites. In other words, every place where Ruby code makes a method call, we keep a slot in memory for the most recent method we looked up. In another quirk of fate, it turns out most calls are "monomorphic" ("one shape") so caching more than one is usually not beneficial.

When we revisit the cache, we need to know we've still got the right method. Obviously it would be stupid to do a full search of the target object's class hierarchy all over again, so what we want is to simply be able to examine the type of the object and know we're ok to use the same method. In JRuby, this is (usually) done by assigning a unique serial number to every class in the system, and caching that serial number along with the method at the call site.

Oh, but wait...how do we know if the class or its ancestors have been modified?

A simple implementation would be to keep a single global serial number that gets spun every time any method table or class hierarchy anywhere in the system is modified. If we assume that those changes eventually stop, this is good enough; the system stabilizes, the global serial number never changes, and all our cached methods are safely tucked away for the machine to branch-predict and optimize to death. This is how Ruby 1.9.3 optimizes inline caches (and I believe Ruby 2.0 works the same way).

Unfortunately, our perfect world isn't quite so perfect. Methods do get defined at runtime, especially in Ruby where people often create one-off "singleton methods" that only redefine a couple methods for very localized use. We don't want such changes to blow all inline caches everywhere, do we?

Let's split up the serial number by method name. That way, if you are only redefining the "foobar" method on your singletons, only inline caches for "foobar" calls will be impacted. Much better! This is how Rubinius implements cache invalidation.

Unfortunately again, it turns out that the methods people override on singletons are very often common methods like "hash" or "to_s" or "inspect", which means that a purely name-based invalidator still causes a large number of call sites to fail. Bummer.

In JRuby, we went through the above mechanisms and several others, finally settling on one that allows us to only ever invalidate the call sites that actually called a given method against a given type. And it's actually pretty simple: we spin the serial numbers on the individual classes, rather than in any global location.

Every Ruby class has one parent and zero or more children. The parent connection is obviously a hard link, since at various points during execution we need to be able to walk up the class hierarchy. In JRuby, we also add a weak link from parents to children, updated whenever the hierarchy changes. This allows changes anywhere in a class hierarchy to cascade down to all children, localizing changes to just that subhierarchy rather than inflicting damage upon more global scopes.

Essentially, by actively invalidating down-hierarchy classes' serial numbers, we automatically know that matching serial numbers at call sites mean the cached method is 100% ok to use. We have reduced O(N) hierarchically-oriented hash table lookups to a single identity check. Victory!
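
Here's a toy model of the scheme (JRuby's real implementation uses weak references and considerably more care; all names here are mine):

    class RClass
      attr_reader :serial, :parent, :children

      def initialize(parent = nil)
        @serial = 0
        @methods = {}
        @parent = parent
        @children = []
        parent.children << self if parent   # weak in JRuby; strong here for brevity
      end

      def define_method(name, impl)
        @methods[name] = impl
        invalidate!
      end

      def lookup(name)
        @methods[name] || (@parent && @parent.lookup(name))
      end

      def invalidate!
        @serial += 1
        @children.each(&:invalidate!)       # cascade down-hierarchy only
      end
    end

    class CallSite
      def initialize(name)
        @name = name
      end

      def call(klass, *args)
        unless klass.equal?(@cached_class) && klass.serial == @cached_serial
          @cached_method = klass.lookup(@name)   # slow path: full hierarchy search
          @cached_class, @cached_serial = klass, klass.serial
        end
        @cached_method.call(*args)
      end
    end

Redefining a method on a parent bumps serials down its whole subtree, so any call site that cached against a descendant re-checks on its next call; sibling hierarchies are untouched.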

Instance Variables

Optimizing method lookups actually turned out to be the easiest trick we had to pull. Instance variables defied optimization for a good while. Oddly enough, most Ruby implementations stumbled on a reasonably simple mechanism at the same time.

Ruby instance variables can be thought of as C++ or Java fields that only come into existence at runtime, when code actually starts using them. And where C++ and Java fields can be optimized right into the object's structure, Ruby instance variables have typically been implemented as a hash table that can grow and adapt to a running program as it runs.

Using a hash table for instance variables has some obvious issues:
  • The aforementioned performance costs of using hashes
  • Space concerns; a collection of buckets already consumes space for some sort of table, and too many buckets means you are using way more space per object than you want
At first you might think this problem can be tackled exactly the same way as method lookup, but you'd be wrong. What do we cache at the call site? It's not code we need to keep close to the point of use, it's the steps necessary to reach a point in a given object where a value is stored (ok, that could be considered code...just bear with me for a minute).

There are, however, truths we can exploit in this case as well.
  • A given class of objects will generally reference a small, finite number of variable names during the lifetime of a given program.
  • If a variable is accessed once, it is very likely to be accessed again.
  • The set of variables used by a particular class of objects is largely unique to that class of objects.
  • The majority of the variables ever to be accessed can be determined by inspecting the code contained in that class and its superclasses.
This gives us a lot to work with. Since we can localize the set of variables to a given class, that means we can store something at the class level. How about the actual layout of the values in object instances of that class?

This is how most current implementations of Ruby actually work.

In JRuby, as instance variables are first assigned, we bump a counter on the class that indicates an offset into an instance variable table associated with instances of that class. Eventually, all variables have been encountered and that table and that counter stop changing. Future instances of those objects, then, know exactly how large the table needs to be and which variables are located where.

Invalidation of a given instance variable "call site" is then once again a simple class identity check. If we have the same class in hand, we know the offset into the object is guaranteed to be the same, and therefore we can go straight in without doing any hash lookup whatsoever.
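
A toy sketch of the counter-and-table idea (not JRuby's actual code):

    class Shape
      def initialize
        @offsets = {}   # consulted only when an access site misses its cache
      end

      def offset_of(name)
        @offsets[name] ||= @offsets.size  # first assignment of a new name grows the table
      end
    end

    shape = Shape.new
    shape.offset_of(:@a)  # => 0
    shape.offset_of(:@b)  # => 1
    shape.offset_of(:@a)  # => 0; stable from here on, so call sites can cache it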

Rubinius does things a little differently here. Instead of tracking the offsets at runtime, the Rubinius VM will examine all code associated with a class and use that to make a guess about how many variables will be needed. It sets up a table on the class ahead of time for those statically-determined names, and allocates exactly as much space for the object's header + those variables in memory (as opposed to JRuby, where the object and its table are two separate objects). This allows Rubinius to pack those known variables into a tighter space without hopping through the extra dereference JRuby has, and in many cases, this can translate to faster access.

However, both cases have their failures. In JRuby's version, we pay the cost of a second object (an array of values) and a pointer dereference to reach it, even if we can cache the offset 100% successfully at the call site. This translates to larger memory footprints and somewhat slower access times. In Rubinius, variables that are dynamically allocated fall back on a simple hash table, so dynamically-generated (or dynamically-mutated) classes may end up accessing some values in a much slower way than others.

The quest for perfect Ruby instance variable tables continues, but at least we have the tools to almost completely eliminate hashes right now.

Constants

The last case I'm going to cover in depth is that of "constant" values in Ruby.

Constants are, as I mentioned earlier, stored on classes in another hash table. If that were their only means of access, they would be uninteresting; we could use exactly the same mechanism for caching them as we do for methods, since they'd follow the same structure and behavior (other than being somewhat more static than method tables). Unfortunately, that's not the case; constants are located based on both lexical and hierarchical searches.

In Ruby, if you define a class or module, all constants lexically contained in that type's enclosing scopes are also visible within the type. This makes it possible to define new lexically-scoped aliases for values that might otherwise be difficult to retrieve without walking a class hierarchy or requiring a parent/child relationship to make those aliases visible. It also defeats nearly all reasonable mechanisms for eliminating hash lookups.

When you access a constant in Ruby, the implementation must first search all lexically-enclosing scopes. Each scope has a type (class or module) associated, and we check that type (and not its parents) for the constant name in question. Failing that, we fall back on the current type's class hierarchy, searching all the way up to the root type. Obviously, this could be far more searching than even method lookup, and we want to eliminate it.
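
For example, both search paths in action:

    module Outer
      TIMEOUT = 10

      class Worker
        def timeout
          TIMEOUT   # found lexically in Outer; no hierarchy search needed
        end
      end
    end

    class Base
      LIMIT = 5
    end

    class Child < Base
      def limit
        LIMIT       # not lexically visible; found by walking up to Base
      end
    end

    Outer::Worker.new.timeout  # => 10
    Child.new.limit            # => 5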

If we had all the space in the world and no need to worry about dangling references, using our down-hierarchy method table invalidation would actually work very well here. We'd simply add another hierarchy for invalidation: lexical scopes. In practice, however, this is not feasible (or at least I have not found a way to make it feasible) since there are many times more lexical scopes in a given system than there are types, and a large number of those scopes are transient; we'd be tracking thousands or tens of thousands of parent/child relationships weakly all over the codebase. Even worse, invalidation due to constant updates or hierarchy changes would have to proceed both down the class hierarchy and throughout all lexically-enclosing scopes in the entire system. Ouch!

The current state of the art for Ruby implementations is basically our good old global serial number. Change a constant anywhere in Ruby 1.9.3, Rubinius, or JRuby, and you have just caused all constant access sites to invalidate (or they'll invalidate next time they're encountered). Now this sounds bad, perhaps because I told you it was bad above for method caching. But remember that the majority of Ruby programmers advise and practice the art of keeping constants...constant. Most of the big-name Ruby folks would call it a bug if your code is continually assigning or reassigning constants at runtime; there are other structures you could be using that are better suited to mutation, they might say. And in general, most modern Ruby libraries and frameworks do keep constants constant.

I'll admit we could do better here, especially if the world changed such that mutating constants was considered proper and advisable. But until that happens, we have again managed to eliminate hash lookups by caching values based on a (hopefully rarely modified) global serial number.

The Others

I did not go into the others because the solutions are either simple or not particularly interesting.

Local variables in any sane language (flame on!) are statically determinable at parse/compile time (rather than being dynamically scoped or determined at runtime). In JRuby, Ruby 1.9.3, and Rubinius, local variables are in all cases a simple tuple of offset into an execution frame and some depth at which to find the appropriate frame in the case of closures.

Global variables are largely discouraged, and usually only accessed at boot time to prepare more locally-defined values (e.g. configuration or environment variable access). In JRuby, we have experimented with mechanisms to cache global variable accessor logic in a way similar to instance variable accessors, but it turned out to be so rarely useful that we never shipped it.

Ruby also has another type of variable called a "class variable", which follows lookup rules almost identical to methods. We don't currently optimize these in JRuby, but it's on my to-do list.

Final Words

There are of course many other ways to avoid hash lookups, with probably the most robust and ambitious being code generation. Ruby developers, JIT compiler writers, and library authors have all used code generation to take what is a mostly-static lookup table and turn it into actually-static code. But you must be careful here to not fall into the trap of simply stuffing your hash logic into a switch table; you're still doing a calculation and some kind of indirection (memory dereference or code jump) to get to your target. Analyze the situation and figure out what immutable truths there are you can exploit, and you too can avoid the evils of hashes.