Monday, November 19, 2012

Refining Ruby

What does the following code do?
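
The snippet itself did not survive here; judging from the Quux example later in the post, it was along these lines. Quux's definition is deliberately not shown at this point in the post; an empty stand-in is used below so the sketch runs.

```ruby
class Quux; end  # stand-in; the post reveals Quux's real body later

class Baz < Quux
  def do_stuff(str1, str2)
    str1.upcase + str2.upcase
  end
end

Baz.new.do_stuff("Hello", "World")
```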

If you answered "it upcases two strings and adds them together, returning the result," you might be wrong, thanks to a new Ruby feature called "refinements".

Let's start with the problem refinements are supposed to solve: monkey-patching.


In Ruby, all classes are mutable. Indeed, when you define a new class, you're really just creating an empty class and filling it with methods. The ability to mutate classes at runtime has been used (or abused) by many libraries and frameworks to decorate Ruby's core classes with additional (or replacement) behavior. For example, you might add a "camelize" method to String that knows how to convert under_score_names to camelCaseNames. This is lovingly called "monkey-patching" by the Ruby community.

Monkey-patching can be very useful, and many patterns in Ruby are built around the ability to modify classes. It can also cause problems if a library patches code in a way the user does not expect (or want), or if two libraries try to apply conflicting patches. Sometimes, you simply don't want patches to apply globally, and this is where refinements come in.

Localizing Monkeypatches

Refinements have been discussed as a feature for several years, sometimes under the name "selector namespaces". In essence, refinements are intended to allow monkey-patching only within certain limited scopes, like within a library that wants to use altered or enhanced versions of core Ruby types without affecting code outside the library. This is the case within the ActiveSupport library that forms part of the core of Rails.

ActiveSupport provides a number of extensions (patches) to the core Ruby classes like String#pluralize, Range#overlaps?, and Array#second. Some of these extensions are intended for use by Ruby developers, as conveniences that improve the readability or conciseness of code. Others exist mostly to support Rails itself. In both cases, it would be nice if we could prevent those extensions from leaking out of ActiveSupport into code that does not want or need them.


In short, refinements provide a way to make class modifications that are only seen from within certain scopes. In the following example, I add a "camelize" method to the String class that's only seen from code within the Foo class.
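
The example is missing here; a sketch consistent with the description (the module name Camelize is my invention) might look like this:

```ruby
module Camelize
  refine String do
    def camelize
      split('_').map(&:capitalize).join
    end
  end
end

class Foo
  using Camelize

  def camelize_string(str)
    str.camelize
  end
end

Foo.new.camelize_string("under_score_name")  # => "UnderScoreName"

begin
  "under_score_name".camelize  # not visible outside the refined scope
rescue NoMethodError
end
```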

With the Foo class refined, we can see that the "camelize" method is indeed available within the "camelize_string" method but not outside of the Foo class.

On the surface, this seems like exactly what we want. Unfortunately, there's a lot more complexity here than meets the eye.

Ruby Method Dispatch

In order to do a method call in Ruby, a runtime simply looks at the target object's class hierarchy, searches for the method from bottom to top, and upon finding it performs the call. A smart runtime will cache the method to avoid performing this search every time, but in general the mechanics of looking up a method body are rather simple.

In an implementation like JRuby, we might cache the method at what's called the "call site"—the point in Ruby code where a method call is actually performed. In order to know that the method is valid for future calls, we perform two checks at the call site: that the incoming object is of the same type as for previous calls; and that the type's hierarchy has not been mutated since the method was cached.

Up to now, method dispatch in Ruby has depended solely on the type of the target object. The calling context has not been important to the method lookup process, other than to confirm that visibility restrictions are enforced (primarily for protected methods, since private methods are rejected for non-self calls). That simplicity has allowed Ruby implementations to optimize method calls and Ruby programmers to understand code by simply determining the target object and methods available on it.

Refinements change everything.

Refinements Basics

Let's revisit the camelize example again.

The visible manifestation of refinements comes via the "refine" and "using" methods.

The "refine" method takes a class or module (the String class, in this case) and a block. Within the block, methods defined (camelize) are added to what might be called a patch set (a la monkey-patching) that can be applied to specific scopes in the future. The methods are not actually added to the refined class (String) except in a "virtual" sense when a body of code activates the refinement via the "using" method.

The "using" method takes a refinement-containing module and applies it to the current scope. Methods within that scope should see the refined version of the class, while methods outside that scope do not.

Where things get a little weird is in defining exactly what that scope should be and in implementing refined method lookup in such a way that does not negatively impact the performance of unrefined method lookup. In the current implementation of refinements, a "using" call affects all of the following scopes related to where it is called:
  • The direct scope, such as the top-level of a script, the body of a class, or the body of a method or block
  • Classes down-hierarchy from a refined class or module body
  • Bodies of code run via eval forms that change the "self" of the code, such as module_eval
It's worth emphasizing at this point that refinements can affect code far away from the original "using" call site. It goes without saying that refined method calls must now be aware of both the target type and the calling scope, but what of unrefined calls?

Dynamic Scoping of Method Lookup

Refinements (in their current form) basically cause method lookup to be dynamically scoped. In order to properly do a refined call, we need to know what refinements are active for the context in which the call is occurring and the type of the object we're calling against. The latter is simple, obviously, but determining the former turns out to be rather tricky.

Locally-applied refinements

In the simple case, where a "using" call appears alongside the methods we want to affect, the immediate calling scope contains everything we need. Calls in that scope (or in child scopes like method bodies) would perform method lookup based on the target class, a method name, and the hierarchy of scopes that surrounds them. The key for method lookup expands from a simple name to a name plus a call context.

Hierarchically-applied refinements

Refinements applied to a class must also affect subclasses, so even when we don't have a "using" call present we still may need to do refined dispatch. The following example illustrates this with a subclass of Foo (building off the previous example).
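
The example did not survive extraction; a reconstruction (names carried over from the camelize sketch) might be as follows. Note the comments: this hierarchical behavior is what the preview implementation did, and it is not how refinements behave on current Ruby, which scopes them lexically.

```ruby
module Camelize
  refine String do
    def camelize
      split('_').map(&:capitalize).join
    end
  end
end

class Foo
  using Camelize
end

# Bar declares no "using" of its own; under the preview semantics the post
# describes, it inherits Foo's refinement, so the block passed to "map"
# sees String#camelize.
class Bar < Foo
  def camelize_strings(strs)
    strs.map { |s| s.camelize }
  end
end

# Preview behavior described in the post: ["FooBar", "BazQuux"].
# Current Ruby scopes refinements lexically, so this raises NoMethodError.
begin
  Bar.new.camelize_strings(%w[foo_bar baz_quux])
rescue NoMethodError
end
```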

Here, the camelize method is used within a "map" call, showing that refinements used by the Foo class apply to Bar, its method definitions, and any subscopes like blocks within those methods. It should be apparent now why my first example might not do what you expect. Here's my first example again, this time with the Quux class visible.
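
The snippet is missing; a reconstruction of what it must have contained (the BadRefinement name comes from the surrounding text) is below. Again, the surprising result is the preview behavior the post describes; current Ruby, with lexically scoped refinements, leaves Baz unaffected.

```ruby
module BadRefinement
  refine String do
    def upcase
      reverse   # deliberately misbehaving "upcase"
    end
  end
end

class Quux
  using BadRefinement
end

class Baz < Quux
  def do_stuff(str1, str2)
    str1.upcase + str2.upcase
  end
end

# Preview behavior described in the post: "olleHdlroW" -- upcase has
# effectively become reverse for code down-hierarchy from Quux.
# Current Ruby returns "HELLOWORLD".
Baz.new.do_stuff("Hello", "World")
```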

The Quux class uses refinements from the BadRefinement module, effectively changing String#upcase to actually do String#reverse. By looking at the Baz class alone you can't tell what's supposed to happen, even if you are certain that str1 and str2 are always going to be String. Refinements have effectively localized the changes applied by the BadRefinement module, but they've also made the code more difficult to understand; the programmer (or the reader of the code) must know everything about the calling hierarchy to reason about method calls and expected results.

Dynamically-applied refinements

One of the key features of refinements is to allow block-based DSLs (domain-specific languages) to decorate various types of objects without affecting code outside the DSL. For example, an RSpec spec.
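
The spec itself is missing here; the sketch below shows the kind of spec the post refers to, together with deliberately simplified stand-ins for the RSpec methods involved. Real RSpec is far more sophisticated; these monkey-patches on Object exist only to make the fragment self-contained and to show exactly what refinements would let a DSL avoid.

```ruby
# Simplified stand-ins for RSpec's DSL methods (not RSpec's real code).
class Object
  def describe(description, &block)
    instance_eval(&block)
  end

  def it(description, &block)
    instance_eval(&block)
  end

  def should(predicate)
    raise "expectation failed" unless send(predicate)
  end

  def method_missing(name, *args)
    # Translate be_awesome into :awesome?, as RSpec's matchers do.
    return :"#{name.to_s.sub('be_', '')}?" if name.to_s.start_with?("be_")
    super
  end
end

class MyClass
  def awesome?
    true
  end
end

describe "MyClass" do
  it "should be awesome" do
    MyClass.new.should be_awesome
  end
end
```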

There are several calls here that we'd like to refine.
  • The "describe" method is called at the top of the script against the "toplevel" object (essentially a singleton Object instance). We'd like to apply a refinement at this level so "describe" does not have to be defined on Object itself.
  • The "it" method is called within the block passed to "describe". We'd like whatever self object is live inside that block to have an "it" method without modifying self's type directly.
  • The "should" method is called against an instance of MyClass, presumably a user-created class that does not define such a method. We would like to refine MyClass to have the "should" method only within the context of the block we pass to "it".
  • Finally, the "be_awesome" method—which RSpec translates into a call to MyClass#awesome?—should be available on the self object active in the "it" block without actually adding be_awesome to self's type.
In order to do this without having a "using" present in the spec file itself, we need to be able to dynamically apply refinements to code that might otherwise not be refined. The current implementation does this via Module#module_eval (or its argument-receiving brother, Module#module_exec).

A block of code passed to "module_eval" or "instance_eval" will see its self object changed from that of the original surrounding scope (the self at block creation time) to the target class or module. This is frequently used in Ruby to run a block of code as if it were within the body of the target class, so that method definitions affect the "module_eval" target rather than the code surrounding the block.
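
A quick illustration of that self-switching (the Widget class name is hypothetical):

```ruby
class Widget; end

Widget.module_eval do
  # Inside this block, self is Widget, so this def adds an instance
  # method to Widget rather than to the scope surrounding the block.
  def price
    42
  end
end

Widget.new.price  # => 42
```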

We can leverage this behavior to apply refinements to any block of code in the system. Because refined calls must look at the hierarchy of classes in the surrounding scope, every call in every block in every piece of code can potentially become refined in the future, if the block is passed via module_eval to a refined hierarchy. The following simple case might not do what you expect, even if the String class has not been modified directly.
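
The "simple case" snippet is missing; based on the surrounding text it was something like this (names are my reconstruction):

```ruby
def add_strings(str_ary)
  str_ary.inject("") { |result, str| result + str }
end

# Looks like plain concatenation... unless the "+" inside the block
# gets refined by whatever implements "inject".
add_strings(%w[foo bar])  # => "foobar"
```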

Because the "+" method is called within a block, all bets are off. The str_ary passed in might not be a simple Array; it could be any user class that implements the "inject" method. If that implementation chooses, it can force the incoming block of code to be refined. Here's a longer version with such an implementation visible.
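
The "longer version" is also missing; the sketch below reconstructs the mechanism with invented names (StringNoise, SneakyArray). The comments note both the preview behavior the post describes and what released Ruby, which dropped refinement re-binding through module_exec, actually does.

```ruby
module StringNoise
  refine String do
    def +(other)
      "#{self} OR #{other}"   # deliberately surprising behavior
    end
  end
end

class SneakyArray
  using StringNoise

  def initialize(ary)
    @ary = ary
  end

  def inject(initial, &block)
    result = initial
    @ary.each do |item|
      # Under the preview semantics, running the block via module_exec in
      # this refined class would activate StringNoise inside the block.
      # Released Ruby scopes refinements lexically, so the block is
      # unaffected and behaves normally.
      result = self.class.module_exec(result, item, &block)
    end
    result
  end
end

def add_strings(str_ary)
  str_ary.inject("") { |result, str| result + str }
end

# Preview behavior described in the post: "foo OR bar"-style noise.
# Current Ruby: plain "foobar".
add_strings(SneakyArray.new(%w[foo bar]))
```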

Suddenly, what looks like a simple addition of two strings produces a distinctly different result.

Now that you know how refinements work, let's discuss the problems they create.

Implementation Challenges

Because I know that most users don't care if a new, useful feature makes my life as a Ruby implementer harder, I'm not going to spend a great deal of time here. My concerns revolve around the complexities of knowing when to do a refined call and how to discover those refinements.

Current Ruby implementations are all built around method dispatch depending solely on the target object's type, and much of the caching and optimization we do depends on that. With refinements in play, we must also search and guard against types in the caller's context, which makes lookup much more complicated. Ideally we'd be able to limit this complexity to only refined calls, but because "using" can affect code far away from where it is called, we often have no way to know whether a given call might be refined in the future. This is especially pronounced in the "module_eval" case, where code that isn't even in the same class hierarchy as a refinement must still observe it.

There are numerous ways to address the implementation challenges.

Eliminate the "module_eval" Feature

At present, nobody knows of an easy way to implement the "module_eval" aspect of refinements. The current implementation in MRI does it in a brute-force way, flushing the global method cache on every execution and generating a new, refined, anonymous module for every call. Obviously this is not a feasible direction to go; block dispatch will happen very frequently at runtime, and we can't allow refined blocks to destroy performance for code elsewhere in the system.

The basic problem here is that in order for "module_eval" to work, every block in the system must be treated as a refined body of code all the time. That means that calls inside blocks throughout the system need to search and guard against the calling context even if no refinements are ever applied to them. The end result is that those calls suffer complexity and performance hits across the board.

At the moment, I do not see (nor does anyone else see) an efficient way to handle the "module_eval" case. It should be removed.

Localize the "using" Call

No new Ruby feature should cause across-the-board performance hits; one solution is for refinements to be recognized at parse time. This makes it easy to keep existing calls the way they are and only impose refinement complexity upon method calls that are actually refined.

The simplest way to do this is also the most limiting and the most cumbersome: force "using" to only apply to the immediate scope. This would require every body of code to "using" a refinement if method calls in that body should be refined. Here are a couple of our previous examples with this modification.
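
The modified snippets are not shown; a sketch of what the repetition would look like (names carried over from the camelize example) follows. Note the repeated "using" in the subclass.

```ruby
module Camelize
  refine String do
    def camelize
      split('_').map(&:capitalize).join
    end
  end
end

class Foo
  using Camelize

  def camelize_string(str)
    str.camelize
  end
end

class Bar < Foo
  using Camelize   # must be repeated: a strictly local "using" is not inherited

  def camelize_strings(strs)
    strs.map { |s| s.camelize }
  end
end

Bar.new.camelize_strings(%w[foo_bar baz_quux])  # => ["FooBar", "BazQuux"]
```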

This is obviously pretty ugly, but it makes implementation much simpler. In every scope where we see a "using" call, we simply force all future calls to honor refinements. Calls appearing outside "using" scopes do not get refined and perform calls as normal.

We can improve this by making "using" apply to child scopes as well. This still provides the same parse-time "pseudo-keyword" benefit without the repetition.

Even better would be to officially make "using" a keyword and have it open a refined scope; that results in a clear delineation between refined and unrefined code. I show two forms of this below; the first opens a scope like "class" or "module", and the second uses a "do...end" block form.
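
The two forms shown in the post are reproduced here as a sketch. This is hypothetical syntax only; neither form parses in Ruby as it exists.

```
# Form 1: "using" opens a scope, like "class" or "module"
using Camelize
  def camelize_string(str)
    str.camelize
  end
end

# Form 2: "using" with a do...end block
using Camelize do
  def camelize_string(str)
    str.camelize
  end
end
```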

It would be fair to say that requiring more explicit scoping of "using" would address my concern about knowing when to do a refined call. It does not, however, address the issues of locating active refinements at call time.

Locating Refinements

In each of the above examples, we still must pass some state from the calling context through to the method dispatch logic. Ideally we'd only need to pass in the calling object, which is already passed through for visibility checking. This works for refined class hierarchies, but it does not work for the RSpec case, since the calling object in some cases is just the top-level Object instance (and remember we don't want to decorate Object).

It turns out that there's already a feature in Ruby that follows lexical scoping: constant lookup. When Ruby code accesses a constant, the runtime must first search all enclosing scopes for a definition of that constant. Failing that, the runtime will walk the self object's class hierarchy. This is similar to what we want for the simplified version of refinements.

If we assume we've localized refinements to only calls within "using" scopes, then at parse time we can emit something like a RefinedCall for every method call in the code. A RefinedCall would be special in that it uses both the containing scope and the target class to look up a target method. The lookup process would proceed as follows:
  1. Search the call's context for refinements, walking lexical scopes only
  2. If refinements are found, search for the target method
  3. If a refined method is found, use it for the call
  4. Otherwise, proceed with normal lookup against the target object's class
Because the parser has already isolated refinement logic to specific calls, the only change needed is to pass the caller's context through to method dispatch.
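
The lookup steps above can be sketched as a toy model in plain Ruby. The data structures here (lexical scopes carrying a refinement table of the shape `{class => {method_name => impl}}`) are entirely hypothetical, purely for illustration.

```ruby
# A lexical scope: a parent link plus a table of refinements.
Scope = Struct.new(:parent, :refinements)

def refined_lookup(scope, target_class, name)
  # 1. Search the call's context for refinements, walking lexical scopes only.
  s = scope
  while s
    impl = s.refinements.dig(target_class, name)
    # 2./3. If a refined method is found, use it for the call.
    return impl if impl
    s = s.parent
  end
  # 4. Otherwise, proceed with normal lookup against the target's class.
  target_class.instance_method(name)
end

outer = Scope.new(nil, {})
inner = Scope.new(outer, { String => { upcase: ->(s) { s.reverse } } })

refined_lookup(inner, String, :upcase)  # the refined implementation
refined_lookup(outer, String, :upcase)  # UnboundMethod for String#upcase
```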

Usability Concerns

There are indeed flavors of refinements that can be implemented reasonably efficiently, or at least implemented in such a way that unrefined code will not pay a price. I believe this is a requirement of any new feature: do no harm. But harm can come in a different form if a new feature makes Ruby code harder to reason about. I have some concerns here.

Let's go back to our "module_eval" case.
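
The snippet being referred to is missing; it was along these lines (a reconstruction):

```ruby
def add_strings(str_ary)
  str_ary.inject("") { |result, str| result + str }
end

add_strings(%w[foo bar])  # => "foobar"... or is it?
```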

Because there's no "using" anywhere in the code, and we're not extending some other class, most folks will assume we're simply concatenating strings here. After all, why would I expect my "+" call to do something else? Why should my "+" call ever do something else here?

Ruby has many features that might be considered a little "magical". In most cases, they're only magic because the programmer doesn't have a good understanding of how they work. Constant lookup, for example, is actually rather simple...but if you don't know it searches both lexical and hierarchical contexts, you may be confused where values are coming from.

The "module_eval" behavior of refinements simply goes too far. It forces every Ruby programmer to second-guess every block of code they pass into someone else's library or someone else's method call. The guarantees of standard method dispatch no longer apply; you need to know if the method you're calling will change what calls your code makes. You need to understand the internal details of the target method. That's a terrible, terrible thing to do to Rubyists.

The same goes for refinements that are active down a class hierarchy. You can no longer extend a class and know that methods you call actually do what you expect. Instead, you have to know whether your parent classes or their ancestors refine some call you intend to make. I would argue this is considerably worse than directly monkey-patching some class, since at least in that case every piece of code has a uniform view.

The problems are compounded over time, too. As libraries you use change, you need to again review them to see if refinements are in play. You need to understand all those refinements just to be able to reason about your own code. And you need to hope and pray two libraries you're using don't define different refinements, causing one half of your application to behave one way and the other half of your application to behave another way.

I believe the current implementation of refinements introduces more complexity than it solves, mostly due to the lack of a strict lexical "using". Rubyists should be able to look at a piece of code and know what it does based solely on the types of objects it calls. Refinements make that impossible.

Update: Josh Ballanco points out another usability problem: "using" only affects method bodies defined temporally after it is called. For example, the following code only refines the "bar" method, not the "foo" method.
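
The example did not survive here; a sketch of the described behavior follows (the module name Bang and class name Example are my inventions). Method bodies compiled before the "using" call see the plain method; bodies compiled after it see the refinement.

```ruby
module Bang
  refine String do
    def upcase
      "#{self}!!!"   # deliberately not a real upcase
    end
  end
end

class Example
  def foo(str)
    str.upcase   # defined before the "using" call: plain String#upcase
  end

  using Bang

  def bar(str)
    str.upcase   # defined after the "using" call: sees the refinement
  end
end

Example.new.foo("hi")  # => "HI"
Example.new.bar("hi")  # => "hi!!!"
```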

This may simply be an artifact of the current implementation, or it may be specified behavior; it's hard to tell since there's no specification of any kind other than the implementation and a handful of tests. In any case, it's yet another confusing aspect, since it means the order in which code is loaded can actually change which refinements are active.


My point here is not to beat down refinements. I agree there are cases where they'd be very useful, especially given the sort of monkey-patching I've seen in the wild. But the current implementation overreaches; it provides several features of questionable value, while simultaneously making both performance and understandability harder to achieve. Hopefully we'll be able to work with Matz and ruby-core to come up with a more reasonable, limited version of refinements...or else convince them not to include refinements in Ruby 2.0.


  1. The keyword is the most appealing to me and I love that idea... not only does it imply what is going on but it uses a "Ruby-ish" way of giving you a context.

    It would also give us a way to refine only certain methods on the eigenclass and the instance without affecting the entire class or module, which gives us even more power to scope the way we monkey patch. I don't know if that kind of chess move even entered into your head when thinking about it :P but I immediately thought of cases where I could refine certain internal methods with common, "global" internal methods and keep those scoped onto specific sets of methods like so: and even have private and protected fall down into the refined methods as well!

  2. By the way, adding a new syntactic `using X ... end` construct would suddenly make `using` a keyword, or otherwise you won't be able to keep Ruby's parser LALR(1), a feature matz desires to keep. (I believe that avoiding keywording `using` will require Ruby to use a GLR parser, which is considerably slower.)

    If you elect to add a keyword, though, then all previously valid code using that name becomes invalid. The ECMAScript committee has had a lot of pain with that in the transition to ES6/Harmony.

    1. I believe Charlie made a typo in that example. The second usage contains a do after the using. I would not think that would have any issues with regards to parsing then.

    2. Doh...I did not realize he was showing two separate syntaxes in one snippet. The parser could still allow 'using' as a special call form and stay LALR(1), but it would be weird to update the grammar like that just to make it a call node.

  3. Thank you for taking the time to write this all down!

    If I may offer my own tl;dr -- With refinements, if you have two different locations in your program where the *same* method is called on the *same* object with the *same* arguments and the object has the *same* ancestor hierarchy, you might get **different** results.

    To me, this seems like some sort of violation of a basic principle of OOP. Refinements introduce position-specific effects into your code (not scope-specific, literally *position*-specific).

    Which brings me to the one point you missed. Because of the way that refinements are currently implemented in 2.0 such that they only affect methods defined after "using" has been called, you can actually have the same code do two different things depending on the order in which source files were loaded.

    1. Yes, this is indeed true, and is a complexity I opted not to go into because I think it's mostly an implementation artifact. But it is definitely the case...because "using" only affects methods defined after it is called, the order in which files are loaded can actually change which refinements will be active. Add that as another terribly confusing aspect of the current implementation.

      I will add a note about it.

    2. Isn't it the same with monkey-patching?

      If I load two files and both of them override String#upcase, the order in which they are loaded will affect the actual implementation of upcase that will be called in the rest of my program.

    3. It's not quite the same. In the monkeypatching case, all calls after the final monkey patch will invoke the same monkey patched method. For refinements, after everything is loaded, different calls on the same object with the same method name can still invoke different methods.

  4. My real world problems with monkey patching have been a difficulty in upgrading libraries because application code relies on a dubiously altered infrastructure library. No localization would help.

    I've rarely, if ever, experienced a collision or unexpected patch.

    I'm not sure any performance degradation is worth a new facility that solves a problem that does not exist. I'm certainly not an expert at ruby implementations, but if the method dispatch process is made more complicated, wouldn't that make it more difficult to optimize?

    The idea for refinements is creative and I tip my hat to the people who have worked on developing the ideas. I'm not sure that adopting in 2.0 is the best way to experiment.

  5. Charles, for the case of the RSpec DSL, I'd just like to let you know that it can still be expressive without requiring monkey patches, by using a strategy similar to the one I use in OOJSpec. Here is an example of how it would look (notice that RSpec already supports the "expect" notation):

    RSpec.describe "Some behavior" do |s|
    s.example "some example" do
    s.expect( respond_to :stub

    It is even possible for "s." to go away in the example above without requiring any monkey patches.

    And it still remains pretty readable. I think monkey patches were abused, mainly because we can do that in Ruby. So, people coming from Java or C++ loved this feature and decided to use it just because they could, to demonstrate those awesome powerful features of their new language to their friends.

    Even though Ruby gives programmers great language features, that doesn't mean it encourages them to write monkey patches or abuse those features just because they can. They should be a last resort, in my opinion, and used cautiously.

    I'm not even worried about performance or the impact such features have on language implementers, like you. I understand the performance impact but even when we get computers a million times faster than we currently have I'll still think that such features should be avoided when possible.

    The reason is exactly the same argument people use to justify such features: readability. One might think that being able to write "describe" from the top-level object improves readability. I don't think so. I think "RSpec.describe" is a better fit because it helps me figure out where "describe" has been defined. Something like the following is also more readable to me than monkey patching Numeric:

    using TimeConversion do
    expires_at = 2.days.from_now

    I really hope the Ruby community will change their mind about code organization some day and write code that is less magic and much easier for others to follow and understand where each method is coming from.

    1. Thanks for this comment, Rodrigo!

      I agree with you...I have never understood the value of monkey-patching classes all over the place when you could simply have modules with those functions included into your hierarchy. Honestly, there's little justification for monkeypatching something like #camelize when you can simply include Camelize and call camelize(str) for the exact same effect.

      RSpec is a good example of how far you might go with monkey-patching for a DSL. It adds methods to the highest levels of the object hierarchy, making it difficult or impossible to use those method names yourself (or at least not without making them harder to test via RSpec). Honestly, I don't feel like refinements are a good idea because they'll make it even easier to monkey-patch with impunity, and different contexts will *frequently* have different patches active. Seems more confusing to me.

  6. Already posted this on ruby-lang but since nobody responded i'll repeat. The solution seems simple to me:

    a) change "Classes down-hierarchy from a refined class or module body" to just the module/namespace hierarchy. Instead of a directed graph you'll only have to traverse a linear ancestor chain. This should make cache invalidation much easier

    b) separate module_eval from the refinement context binding issue. Make lambdas explicitly bound to a module context and add a separate method to rebind them. Only when this separate method is called you would have to invalidate the caches.
    Usually these kinds of lambdas are only generated at startup and stored in class variables. So they only need to be rebound once.

    c) add a method to remove refinements downstream from a module

    d) make all anonymous modules (and by extension: anonymous classes) be refinement-free by default. i.e. they should not participate in refinement inheritance. this way they can be used as sandbox for DSLs and the like.

  7. Charles, thanks for taking the time to write this up.

    When I first heard about refinements after Ruby Conf 2010, I thought they sounded nifty, and was excited about the possibility of having them as a new language feature. However, I've heard @brixen argue against them on a few occasions and it really made me reconsider if they belong in ruby. Your blog post sheds additional light here, and I think I'd vote against their inclusion if I had any say.

    You bring up RSpec as an example of a DSL that could use refinements, but I work on RSpec and I honestly don't see us using them anytime soon, if ever. It's important for rubyists to be able to use RSpec to test their gems and to decide what versions of ruby they want to support without RSpec forcing them to use ruby 2.0 (or any particularly recent version), so it'd be a long time before we would consider using such a new feature of ruby (probably RSpec 4.0 at the earliest). On top of that, we've been finding solutions to having the nice RSpec DSL without monkey patching every object in the system with lots of methods like old RSpec versions used to do. Specifically:

    * In 2.11, we introduced the new `expect` syntax[1], as Rodrigo mentioned above. Relying on every object in the system responding to a particular message (i.e. `should`) in a uniform way, when RSpec does not own every object in the system, has led to some confusing problems at times. Refinements would not solve that...RSpec would still not own every object in the system and an individual object could still respond to `should` differently (e.g. by proxying it to another object, for example).
    * In 2.11, we changed the way `describe` is made available to the top level[2] so that it is not added to every object in the system. Refinements were not needed for this.

    I think monkey patching is a wonderful feature of ruby but it really needs to be handled with care. I've got a few guidelines for monkey patching that I follow, and that I really wish the ruby community as a whole would follow; they allow the benefits of monkey patching without the potential conflicts that come from rubyists monkey-patching willy-nilly:

    * Monkey patching in application (i.e. non-shared/non-gem) code is fine, but should be done with care. A monkey patch in your individual application won't be imported by anyone else. I tend to use domain-specific monkey patches in my apps -- for example, I recently added a `Date#quarter` method to an application I'm working on because we're doing a lot of timeframe stuff around calendar quarters.
    * In gems, if the main point of the gem is monkey patches (e.g. activesupport), that's fine. Users know what they're getting when they decide to use your gem.
    * However, if your gem's main point isn't to monkey patch something....then it shouldn't monkey patch anything, and it shouldn't use any gems that monkey patch something. Don't use activesupport in your gem just because it's convenient, because you force a bunch of non-obvious baggage on users of your gem.

    In short, it's not monkeypatching in general that's the problem so much as it's non-obvious, surprise monkey patches that you didn't realize were added to your system by using gem X.


  8. I have an idea for the solution to this. It is in two stages, so please don't stop reading when you see the first.

    1 (The raw idea)

    In abstract terms imagine it like this: Each time a refinement is made to a class, push the class on a stack. All calls to the class are made to the 'stack' version of the class, which will be the most in-scope version of the modified class. Each class has a stack. When you fall back a scope, pop the refinement off the stack and fall back to the previous version.

    2 (The refinement of this idea)

    The stack is actually a stack of indirections, each a set of pointers into the "change list". So for each stack item there is a pointer to a short list which contains the indexes for the altered items and the indexes to the base class items. This way it isn't the actual code that goes on the stack, but a lookup to the code.

    You are adding a minimum overhead of a lookup for each class method call, but this again could be cached as a set of flags holding the modification status of each class, leading to a single check for non-modified classes.

    That's my idea - maybe it's a bad one, but I just thought I would post it.

  9. Great article!

    I agree with you about the problems of that kind of refinements.

    But just before I knew about your post today, I had made an (ingenuous) implementation which works on 1.8.7 and 1.9.3. It is lexically scoped, but I used enable and disable instead of using. And the refinements are strictly limited to ranges in the file where they're enabled (no subclasses, no hidden behaviors). So I believe it won't have those evil cons that you discussed here. ;-)

    Here we go:

  10. I wish Sequel had refinements. It pollutes the Object namespace with useful shortcuts, but they're only useful when databasing (that's where I would want the patches).

    We can use Sequel without core extensions, but then we can't use those shortcuts anymore.

    Refinements would be perfect for this.

  11. If we can make temporary classes, methods, modules, constants, variables and namespaces.

    Given that we just declare them once and use them on an as-needed basis only.

    - We can achieve a namespace pollution-free environment.
    - Clean and less cluttered codebase.
    - Focused business logic code.
    - Optimized code, because we only use them when we need them.
    - Work on the problem at hand without too many distractions and concerns about trying to set up our classes and modules for our needs.
    - Segregation of what stays in the codebase (important, full-time objects) and what is temporary (for quick computations, and things required only by certain classes, methods, etc., for specific problems to solve; these are the on-contract-basis-only objects).