As you've probably heard by now, Oracle has decided to file suit against Google, claiming multiple counts of infringement against Java or JVM patents and copyrights they acquired when they assimilated Sun Microsystems this past year. Since I'm unlikely to keep my mouth shut about even trivial matters, something this big obviously requires at least a couple thousand words.
Who Am I?
Any post of this nature really requires an author to identify where they stand, so their unavoidable biases can be taken with the appropriate dosage of salt. Rather than having you dig through my past and learn who and what I am, I'll just lay it out here.
I am a Java developer. I've been a Java developer since 1996 or so, when I got my first University job writing stupid little applets in this new-fangled web language. That job expanded into a web development position, also using Java, and culminated with me joining a few senior developers for a 6-month shared development gig with IBM's then-nascent Pacific Development Center in Vancouver, BC. Since then I've had a string of Java-related jobs...some as a trenches developer, some as a lead, some as "architect", but all of them heavily wrapped up in this thing called Java. And I can't say that I've ever been particularly annoyed with Java as a language or a platform. Perhaps I haven't spent enough time on other runtimes, or perhaps I've got tunnel-vision after being a Java developer for so many years. But I'd like to think that I've become "seasoned" enough as a developer to realize no platform is perfect, and the manifold benefits of the JVM and the Java platform vastly outweigh the troublesome aspects.
I am an open-source developer. In the late 90s, I worked in earnest on my first open-source project: the LiteStep desktop replacement for Windows. At the time, the LiteStep project was a loosely-confederated glob of C code and amateur C hackers. Being a Windows user at the time, I was looking to improve my situation...specifically, I had worked for years on a small application called Hack-It that exposed aspects of the win32 API normally unavailable through standard Windows UI elements, and I was interested in taking that further. LiteStep was not my creation. It had many developers before me and many after, but my small contribution to the project was an almost complete rewrite in amateur-friendly C++ and a decoupling of the core LiteStep "kernel" from the various plugin mechanisms. I was also interviewed for a Wired article on the then-new domain of "skinning" computers, desktops, applications, and so on, though none of my quotes made it into the article. After LiteStep, I fell back into mostly anonymous corporate software development, all still using Java and many open-source technologies, but not much of a visible presence in the OSS world. Then, in 2004 while working as the lead "Java EE Architect" for a multi-million-dollar US government contract, I found JRuby.
I am a JRuby developer. Since 2004 (or really since late 2005, when I started helping out in earnest) I've been partially responsible for turning JRuby from an interesting novelty project into one of the top Ruby implementations. We've become well known as one of the best-performing – if not the best-performing – Ruby implementations, even faced with increasing competition from the young upstarts. We're also increasingly popular (and perhaps the easiest path) for bringing Ruby and its many paradigm-shifting libraries and frameworks (like Rails) to Java and JVM users around the world – without them having to change platforms or leave any of their legacy code behind. Part of my interest in JRuby has been to bring Ruby to the JVM, plain and simple. I like Ruby, I like the Ruby community, and on most days I like the cockiness and enthusiasm of those community members toward trying crazy new things. But another large part of my interest in JRuby is more sinister: I want to prove to naysayers what a great platform the JVM actually is, and perhaps make them think twice about knee-jerk biases they've carried and cultivated for so many years.
You'll notice I refer to JRuby not as "it" or "she" or "he", but as "we". "We've become well known...We're also increasingly popular..." That's not an accident. There's now over five years of my efforts in JRuby, and I consider it to be as much a part of me as I am a part of it. And so because of that, I have a much deeper, emotional investment in the platform upon which JRuby rests.
I am a Java developer. I am an open-source developer. I am a JRuby developer and a Ruby fan.
I am not a lawyer.
The Facts, According to Me
These are the facts as I see them. You're free to disagree with my interpretation of the world, and I encourage you to do so in the comments, on other forums, over email, or to my face (but buy me a beer first).
On Java
The Java platform is big. Really big. You just won't believe how vastly hugely mindbogglingly big it is. And by big, I mean it's everywhere.
There are three mainstream JVMs people know about: JRockit (BEA's first, and then Oracle's after it acquired BEA), Hotspot (which came to Sun through an acquisition and eventually became OpenJDK), and J9 (IBM's own JVM, fully-licensed and with all its shots). Upon those three JVMs lives a gigantic world. If you want the details, there are numerous studies and reports about the use of Java in all manner of business, from the hippest new startups (Twitter recently switched much of their stack to the JVM) to the oldest of the old financial concerns. It's the favored choice for government server applications, the strongest not-quite-completely-Free managed runtime for open-source libraries and applications, and now with Android it's rapidly becoming one of the strongest (if not the strongest) mobile OS platforms (even though Android isn't *really* Java, as I'll get into later). You may love or hate Java, but I guarantee it's part of your life in some way or another.
There are a few open-source implementations of Java. The most well-known is OpenJDK, the Hotspot JVM that Sun relicensed under the GPL and set Free into the world. There's also Apache Harmony, whose class libraries form part of Dalvik's (Android's VM) Java-compatibility layer. There's GNU Classpath, a GPL-based implementation of the Java class libraries used for the ahead-of-time Java compiler GCJ. There's JamVM, which leverages Classpath to provide a very light, very minimal, (and very simple) JVM implementation. And there's others of varying qualities and relevance like IKVM (Java for .NET), VMKit (a Java compiler atop LLVM), and so on. OpenJDK is certainly the big daddy, though, and its release as GPL guarantees we'll at least have a solid Java 6 implementation forever.
Java is not an entirely open platform, what with the now-obvious encumbrances of patents and copyrights (not to mention draconian policies toward Java's various specifications, which are often very slow to evolve due to the JCP quagmire). That's not a great state of affairs, and if nothing else you have to recognize that folks at Sun at least tried to release the platform from its shackles by creating OpenJDK. But the process of "freeing" Java has been pretty rocky; OpenJDK itself took years to gain acceptance from OSS purists, and the choice of the GPL has meant that folks afeared of the GPL's "viral" side still had to look for other options (which is a large part of why Apache Harmony was used as part of the basis for Android). Perhaps the biggest nail in the coffin is that Sun's Java test kit, the gold standard of whether an implementation is "compliant" or not, has never been released in open-source form, ultimately binding the hands of developers who wished to build a fully-compatible open-source Java.
Java is not an entirely closed platform, either. OpenJDK was a huge step in the direction of Freeing Java, and the Java community in general has a very strong OSS ethos. There's no piece of Java software that isn't at least partially based on open-source components, and most Java library, framework, or application developers either initially or eventually open-source some or all of their works. Open-source development and the Java platform go hand-in-hand, and without that relationship the platform would not be where it is today. Contrast that to other popular environments like Microsoft's .NET – which has been admirably Freed through open standards, but which has not yet become synonymous with or popular for OSS development – or Apple's various platforms – which aren't based on open-standards *or* open-source, but which have managed to become many OSS developers' environment of choice...for writing or consuming non-Apple open-source software. Among the corporation-controlled runtimes, the Java platform has more OSS in its blood than all others combined...many times more.
Java is not perfect, but it's pretty darn good. Every platform has its warts. The Java platform represents a decade and a half of tradeoffs, and it's impossible in that amount of time to make everyone happy all the time. One of the big contentious items is the addition in Java 5 of parametric polymorphism as a compile-time trick without also adding VM-level support for reifying per-type specializations as .NET can do. But ask most Java developers if they'd rather have nothing at all, and you'll get mixed responses. The sad, crippled version of generics in Java 5 doesn't do everything static-typing purists want, nor does it really extend to runtime at all (making reflective introspection almost impossible), but it does provide some nice surface-level sugar for Java developers. The same can be said of many Java "features" and tradeoffs. JavaEE became an abortively complicated jumble of mistakes (tradeoffs that went bad), but even upstarts that arguably made better decisions initially have themselves graduated into chaos (I believe the Spring framework has now grown even larger than the largest Java EE conglomerate, and Microsoft periodically reboots its blessed-framework-of-the-week, resulting in an even more disruptive environment than a slow-moving, bulky standard like JavaEE). Designing good software is hard. Designing good *big* software is exponentially harder. Designing good *big* software that pleases everyone is impossible.
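To see what "doesn't really extend to runtime" means in practice, here's a minimal sketch (the class and variable names are mine) showing that the type parameters are erased before the code ever runs:

// Type erasure in action: the generic type parameters exist only at compile time.
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> numbers = new ArrayList<Integer>();

        // Both lines print "class java.util.ArrayList"; the element type is gone
        // at runtime, which is why reflective introspection of generics is so limited.
        System.out.println(strings.getClass());
        System.out.println(numbers.getClass());
        System.out.println(strings.getClass() == numbers.getClass()); // true
    }
}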
Why People Hate Java
Java is second only to Microsoft's platforms for being both wildly successful and almost universally hated by the self-sure software elite. The reasons for this are manifold and complex.
First of all, back in the 90s Java started getting shoved down everyone's throat. Developers were increasingly told to investigate this new platform, since their managers and long-disconnected tech leads kept hearing how great it was from Sun Microsystems, then a big deal in server applications and hardware. So developers that were happily using other environments (many of which exist to this day) often found themselves forced to suck it up and become Java developers. Making matters worse, Java itself was designed to be a fairly limited language...or at least limited in how easily a developer could paint themselves into a corner. Many features those reluctant developers had become used to in other environments were explicitly rejected for Java on the grounds that they added too much complexity, too much confusion, and too little value to trenches developers. So people that were happily doing Perl or C++ or Smalltalk or what have you were suddenly forced into a little J-shaped box and forced to write all those same applications upon Java and the JVM at a time when both were still poorly-suited to those domains. Those folks have had a white-hot hate for anything relating to Java ever since, and many will stop at nothing to see the entire platform ejected into space.
Second, as mentioned quickly above, Java in the 90s was simply not that great a platform. It had most of the current warts (classpath issues, VM limitations, poor system-level integration, a very limited language) on top of the fact that it was slow (optimizing JVMs didn't come around until the 2000s), marketed for highly-visible, highly-fickle application domains like desktop and browser-based applications (everyone's cursed a Java app or applet at some point in their life), and still largely driven and controlled by a single company (at a time when many developers were trying to get out from under Microsoft's thumb). It wasn't until Java 1.2 that we started to get a large and diverse update to Java's core libraries. Java 1.3 was the first release to ship Hotspot, which started to get the performance monkey off our backs. Java 1.5 brought the first major changes to the Java language, all designed to aid developers in expressing what they meant in standard ways (like using type-safe enums instead of static final ints, or generics for compiler-level assurances of collection homogeneity). And Java 6, the last major version, made great strides in improving startup time, overall performance, and manageability of JVM processes. Java 7, should it ever ship, will bring new changes to the Java language like support for closures and other syntactic sugar, better system-level integration features as found in NIO.2, and the feather in the cap: VM-level support for function objects and non-standard invocation sequences via Method Handles and InvokeDynamic. But unless you've been a Java developer for the past decade, all you remember is the roaring 90s and the pain Java caused you as a developer or a user.
Third, the Java language and environment has stagnated. Given years of declining fortunes at Sun Microsystems, disagreement among JCP members about the direction the platform should go, and a year of uncertainty triggered by Sun's collapse and rescue at the hands of Oracle, it's surprising anything's managed to get done at all. Java 7 is now many years overdue; they were talking about it when I joined Sun in 2006, and hoped to have preview releases within a year. For both technical and political reasons, it's taken a long time to bring the platform to the next level, and as a result many of the truly excellent improvements have remained on the shelf (much to my dismay...we really could use them in JRuby). For fast-moving technology hipsters, that's as good as dying on the vine; you need to shift paradigms on a regular schedule or you're yesterday's news.
Update: At least one commenter also pointed out that it took a long time for Java to be "everywhere", and even today most users still need to download and install it at least once on any newly-installed OS. Notable exceptions include Mac OS X, which ships a cracker-jack Java 6 based on Hotspot, and some flavors of Linux that come with some sort of Java installed out of the box. But this was definitely a very real problem; developers were being pushed to write apps and applets in Java, and users were forced to download a multi-megabyte installer just to run them...at a time when downloading multi-megabyte software was often a very painful ordeal. That would put a bad taste in anyone's mouth.
It's because of these and similar reasons that folks like Google finally said "enough is enough," and opted to start doing their own things. On the JRuby project, we've routinely hacked around the limitations of the JVM, be they related to its piss-poor process management APIs, its cumbersome support for binding native libraries, or its stubborn reluctance to become the world's greatest dynamic language VM. I've thought on numerous occasions how awesome it would be to spin off a company that took OpenJDK and made it "right" for the kinds of development people want to do today (and I'd love to be a part of that company), but such ventures are both expensive and light on profitability. Nobody pays for platforms or runtimes...they pay for services around those platforms or runtimes, services which are often anathema to the developers of those platforms and runtimes. So it required someone "bigger" to make that happen...someone who could write off the costs of the platform by funding it in creative new ways. Someone with a massive existing investment in Java. Someone with deep pockets and an army of the best developers in the business who love nothing more than a challenge. Someone like Google.
Why Android?
(Note that a lot of this is based on what information I've managed to glean from various conversations. Clarifications or corrections are welcome.)
There's an incredibly successful mobile Java platform out there. One that boasts millions of devices from almost all the major manufacturers, in form factors ranging from crappy mid-00s clamshells to high-end smartphones. A platform with hundreds or thousands of games and applications and freely-available development tools. That platform is Java ME.
Java ME started out as an effort to bring Java back to its original roots: as a language and environment for writing embedded applications. The baseline ME profiles are pretty bare; I did some CLDC development years ago and had to implement my own buffered streams and various data structures just to get by. Even the biggest profiles are still fairly restricted, and I don't believe any of them have ever graduated beyond Java 1.3-level featuresets. So Sun did a great job of getting Java ME on devices, back when people cared about Sun...and then they let mobile Java stagnate to a terrible degree while they spent all their resources trying to get people to use Java EE and trying to get Java EE to suck less. So while resources were getting poured into EE, people started to form the same opinions of mobile Java they had formed about desktop and server Java years earlier.
At the same time, Java ME was one of the few Java-related technologies that brought in money. You see, in order for handset manufacturers to ship (and boast about) Java ME support, they had to license the technology from Sun. It wasn't a huge cash cow, but it was a cow nonetheless. Java ME actually made money for Sun. So in true Sun form, they loused it up terribly.
Fast forward to a few years ago. Google, recognizing that mobile devices finally were becoming the next great technology market, decided that leaving the mobile world in the hands of proprietary platforms was a bad idea. Java ME seemed like it could be an answer, but Sun was starting to get desperate for both revenue and relevance...and they'd started to back a completely new horse-that-would-be-cow called JavaFX, which they hoped to pimp as the next great development environment for in-browser and on-device apps alike. They weren't interested in making Java ME be what Google wanted it to be.
Google decided to take the hard route: they'd fund development of a new platform, building it entirely from open-source components, and leveraging two of the best platform technologies available: Linux, for the kernel, and Java, for the runtime environment. However there was a problem with Java: it was encumbered by all sorts of patents and copyrights and specifications and restrictions. Hell, even OpenJDK itself, the most complete and competitive OSS implementation of Java, could not be customized and shipped in binary-only form by hardware manufacturers and service providers due to it being GPL. So the answer was to build a new VM, use unencumbered versions of the core Java class libraries, and basically remake the world in a new, copyright and patent-free image. Android was born.
There's many parts to Android, several of which I'm not really qualified to talk about. But the application environment that runs atop the Dalvik VM needs some explanation.
First, there's the VM. Dalvik is *not* a JVM. It doesn't run JVM bytecode, and you can't ship JVM bytecode expecting it to work on Dalvik. You must recompile it to Dalvik's own bytecode using one of the provided translation tools. This is similar to how IKVM gets Java code to run on .NET: you're not actually running a JVM, you're transforming your code into a different form so it will run on someone else's VM. So it bears repeating, lest anyone get confused: Dalvik is not a JVM...it just plays one on TV.
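For the curious, the translation step looks roughly like this using the Android SDK's dx tool (the file names here are made up for illustration):

javac Hello.java                              # produces Hello.class (JVM bytecode)
dx --dex --output=classes.dex Hello.class     # produces classes.dex (Dalvik bytecode)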
Second, there's the core Java class libraries. Android supports a rough (but large) subset of the Java 1.5 class libraries. That subset is large enough that projects as complicated as JRuby can basically run unmodified on Android, with very few restrictions (a notable one is the fact that since we can't generate JVM bytecode, we can't reoptimize Ruby code at runtime right now). In order to do this without licensing Sun's class libraries (as most other mainstream Java runtimes like JRockit and J9 do), Google opted to go with the not-quite-complete-but-pretty-close Apache Harmony class libraries, which had for years been developed independent of Sun or OpenJDK but never really tested against the Java compatibility kits (and there's a long and storied history behind this situation).
So by building their own non-JVM VM and using translated versions of non-Sun, non-encumbered class libraries, Google hoped to avoid (or at least blunt) the possibility that their "unofficial", "unlicensed" mobile Java platform might face a legal test. In short, they hoped to build the open mobile Java platform developers wanted without the legal and financial encumbrances of Java ME.
At first, they seemed to be on a gravy train with biscuit wheels.
Splitting Up the Pie
Sun Microsystems was not amused. A little over a year ago, when several Sun developers started to take an eager interest in Android, we were all told to back off. It wasn't yet clear whether Android stood on solid legal ground, and Sun execs didn't want egg on their face if a bunch of their own employees turned out to be supporting a platform they'd eventually have to attack. Furthermore, it was an embarrassment to see Android drawing in the same developers Sun really, really wanted to look at JavaFX or PersonalJava or whatever the latest attempt to bring developers back might be. Android actually *was* a great platform that supported existing Java developers and libraries incredibly well (without actually being a Java environment), and for the first time there was a serious contender to "standard" Java that Sun had absolutely no control over.
To make matters worse, handset manufacturers started to sign on in droves to this new non-Java ME platform, which meant all that technology licensing revenue was reaching a dead end. Nobody (including me) wanted to do Java ME development anymore. Everyone wanted to do Android development.
Now we must say one thing to Sun's credit: they didn't do what Oracle is now attempting to do. As James Gosling blogged recently, patent litigation just wasn't in Sun's blood...even if there might have been legal ground to file suit. So while we Sun employees were still quietly discouraged from looking at or talking about Android, the rest of the world took Sun's silence as carte blanche to stuff Android into everything from phones to TVs, and mobile app developers started to think there might be hope for a real competitor to Apple's iPhone. Things might have proceeded in this way indefinitely, with Android continuing to grab market share (it recently passed iPhone in raw numbers with no slowing in sight) and mindshare (Android is far more approachable than almost any other mobile development environment, especially if you're one of the millions of developers who know Java.)
And then it all started to go wrong.
The Mantle of Java Passes to Oracle
If for nothing else, Jonathan Schwartz will be remembered as the man who broke open the Sun piñata, simultaneously releasing more open-source software than any company in history and killing Sun in the process. Either Jonathan had no "step 2" or the inertia of a company built on closed-source products was too great to overcome. In either case, by spring of 2009 Sun was hemorrhaging. Many reports claim that Jonathan had started shopping Sun around to possible buyers as early as 2008, but it wasn't until 2009 that the first candidates started lining up. Initially, it was IBM, hoping to gobble up its former competitor along with the IP, patents, and copyrights they carried. That deal ultimately went south when Sun refused to consider any deal that IBM wouldn't promise to carry to completion, even in the face of regulatory roadblocks sure to come up. Many of us breathed a sigh of relief; if there's any Java company even more firmly stuck in the old world than Sun, it's IBM...and we weren't looking forward to dealing with that.
Once that deal fell through, folks like me became resigned to the fact that Sun was nearing the end of its independent life. Years of platform negligence, management incompetence, and resting on laurels had dug a hole far too deep for anyone to climb out of. Would it be Cisco, who had recently started building up an interesting new portfolio of application server hardware and virtualization software? What about VMWare, who had recently gobbled up Springsource and seemed to be making all the right moves toward a large-scale virtualized "everything cloud"? Or perhaps Oracle, a long-time partner to Sun, whose software was either Java-based or widely deployed on Sun hardware and operating systems? Dear god, please don't let it be Oracle.
Don't get me wrong...Oracle's a highly successful company. They've managed to turn almost every acquisition into gold while coaxing profitability out of just about every one of their divisions. But Oracle's not a developer-oriented company (like Sun)...it's a profit-oriented company (unlike Sun, sadly), and you need to either feed the bottom line or feed others in the company that do. So when it turned out that Oracle would gobble up Sun, many of us OSS folks started to get a little nervous.
You see, many of us at Sun had been actively trying to change the perception of the platform from that of a corporate, enterprisey, closed world to that of a great VM with a great OSS ecosystem and an open-source reference implementation. Folks like Jonathan believed that by freeing Java we'd free the platform, and both the platform and the developer community would be better for it. We were half right...the OpenJDK genie is out of the bottle, and there's basically no way to put it back now (and for that, the world owes Sun a great debt). But only part of the platform was Freed...the patents and copyrights surrounding Hotspot and Java itself remained in place, carefully tucked away in the vault of a company that just didn't mount patent or copyright-driven legal attacks.
Oracle, now in control of those patents and copyrights, obviously has different plans.
The Suit
So now, after spending 4000 words of your time, we come to the meat of the article: the actual Oracle v Google suit. The full text is available in various places online, though the Software Patents Wiki has probably the best collection of related facts (though the wiki-driven discussions of the actual patents are woefully inaccurate).
The suit largely comes down to a patent-infringement battle. Oracle claims that by developing and distributing Android, Google is in violation of seven patents. There's also an amorphous copyright claim without much backing information ("Google probably stole something copyrighted so we'll list a bunch of stuff commonly stolen in that way"), so we'll skip that one today.
Before looking at the actual patents involved, I want to make one thing absolutely clear: Oracle has not already won this suit. Even after a couple days of analysis, nobody has any idea whether they *can* win such a suit, given that Google seems to have taken great pains to avoid legal entanglements when designing Android. So everybody needs to take a deep breath and let things progress as they should, and either trust that things will go the right direction or start doing your damndest to make sure they go the right direction.
With that said, let's take a peek at the patents, one by one. And as always, the "facts" here are based on my reading of the patents and my understanding of the related systems.
Protection Domains To Provide Security In A Computer System (6,125,447) and Controlling Access To A Resource (6,192,476)
The first two patents describe the Java Security Policy system, for controlling access to resources. One of the least-interesting but most-important aspects of the Java platform is its approach to security. Code under a specific classloader or thread can be forced to comply with a specific security policy by installing a security manager. These permissions control just about every aspect of the code's interaction with the JVM and with the host operating system: loading new code, reflectively accessing existing classes, accessing system-level resources like devices and filesystems, and so on. It's even easy for you to build up security policies of your own by checking for custom-named permissions and only granting them when appropriate. It's a pretty good system, and one of the reasons Java has a much stronger security track record than other runtimes that don't have pervasive security in mind from the beginning.
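To make that concrete, here's a minimal sketch using the standard SecurityManager API; the class name and the toy "no file deletes" policy are my own inventions, not anything from Android or the patents:

// A toy security policy: deny file deletion, allow everything else.
import java.io.File;
import java.io.FilePermission;
import java.security.Permission;

public class SandboxDemo {
    public static void main(String[] args) {
        System.setSecurityManager(new SecurityManager() {
            @Override
            public void checkPermission(Permission perm) {
                // Called for nearly every sensitive operation the code attempts.
                if (perm instanceof FilePermission && perm.getActions().contains("delete")) {
                    throw new SecurityException("delete not permitted: " + perm.getName());
                }
                // Everything else is allowed in this toy example.
            }
        });

        try {
            new File("/tmp/guarded").delete();   // triggers the checkPermission call above
        } catch (SecurityException e) {
            System.out.println("Blocked by policy: " + e.getMessage());
        }
    }
}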
In order to host applications written for the Java platform, and to sandbox them in a compatible way, Android necessarily had to support the same security mechanisms. The problem here is the same problem that plagues many patents: what boils down to a fairly simple and obvious way to solve a problem (associate pieces of code with sets of permissions, don't let that code do anything outside those permissions) becomes so far-reaching that almost any reasonable *implementation* of that idea would violate these patents. In this case the '447 and '476 patents do describe mechanisms for implementing Java security policies, but even that simple implementation is very vague and would be hard to avoid with even a clean-room implementation.
Now I do not know exactly how Android implements security policies, but it's probably pretty close to what's described in these patents...since just about every implementation of security policies would be pretty close to what's described.
Method And Apparatus For Preprocessing And Packaging Class Files (5,966,702)
This is basically the patent governing the "Pack200" compression format provided as part of the JDK and used to better-compress class file archives.
Update: Alex Blewitt has posted a discussion of the Pack200 specification. He says this patent isn't nearly as comprehensive as it sounds, but that it may touch upon how Pack200 works. His post is a more complete treatment of the details of the class file format and how Pack200 improves compression ratios for class archives. It also occurs to me now that this patent could be related to mobile/embedded Java too, where better compression would obviously yield enormous savings.
Java class files are filled with redundant data. For example, every class that contains code that calls PrintStream.println (as in System.out.println) contains the same "constant pool" entry identifying that method by name, a la "java/io/PrintStream.println:(Ljava/lang/String;)V". Every field lookup, class reference, literal string, or method invocation will have some sort of entry in the constant pool. Pack200 takes advantage of this fact by compressing all class files as a single unit, batching duplicate data into one place so that the actual unique class data boils down to just the unique class, method, and code structure.
The reason for having a separate compression format is that "zip" files, which include Java's "jar" files, are notoriously bad at compressing many small files with redundant data. Because one of the features of the "zip" format is that you can easily pull a single file out, each file has to be compressed independently; there can be no interdependencies between files and no global compression table. This is a large part of why compression formats like "tar.gz" do a better job of compressing many small files: tar turns many files into one file, and gzip or bzip2 compress that one large file as a single unit (conversely, this is why you can't easily get a single file out of a tarball).
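For reference, the JDK has shipped Pack200 as a small standard API since Java 5; a minimal sketch (the jar and output file names are placeholders):

// Pack an ordinary jar into the Pack200 format (usually gzipped afterward).
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.jar.JarFile;
import java.util.jar.Pack200;

public class PackDemo {
    public static void main(String[] args) throws IOException {
        OutputStream out = new FileOutputStream("app.pack");
        try {
            // All class files in the jar are compressed as a single unit,
            // merging the redundant constant pool data described above.
            Pack200.newPacker().pack(new JarFile("app.jar"), out);
        } finally {
            out.close();
        }
    }
}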
On Android, this is accomplished in a similar way by the "dex" tool, which in the process of translating JVM bytecode into Dalvik bytecode also localizes all duplicate class data in a single place. The general technique is standard data compression theory, so presumably the novelty lies in applying decades-old compression theory specifically to Java classfile structure.
If I've lost you at this point, we can summarize it this way: part of Oracle's suit lies in a patent for a better compression mechanism for archives containing many class files that takes advantage of redundant data in those files.
Are you laughing yet?
System And Method For Dynamic Preloading Of Classes Through Memory Space Cloning Of A Master Runtime System Process (7,426,720)
I'm not sure this patent ever saw the light of day in a mainstream JVM implementation. It describes a mechanism by which a master parent process could pre-load and pre-initialize code for a managed system, and then new processes that need to boot quickly would basically be memory-copied (plus copy-on-write friendly) "forks" of that master process, with the master maintaining overall control of those child processes through some sort of IPC.
Ignore for the moment the obvious prior art of "fork" itself as applied to pre-initializing application state for many children. Anyone who's ever used fork to initialize a heavy process or runtime to avoid the cost of reinitializing children has either violated this patent (if done since 2003) or has a compelling case for prior art (if done before 2003).
It's likely that this patent was formulated as an answer to the poor semantics of running many applications under the same JVM. Java servlets and later Java EE made it possible to consider deploying all of your company's applications in a single process, isolated by classloaders and security policies. What they never really addressed was the fact that code isn't the only thing you're sharing in this model; you're also sharing memory space, CPU time, and process resources like file descriptors. No amount of Java classloader or security trickery could make this a seamless multiapp environment, and so work like this patent hoped to find a lightweight way for all those child applications to actually live as their own processes.
On Android, this manifests in the fact that each application runs independently, and they (like most operating systems) fork off from either the kernel process or some master process.
In this case, Oracle's banking on being able to litigate with a patent for a very common application of "fork".
Method And Apparatus For Resolving Data References In Generated Code (RE38,104)
This patent, invented by James Gosling himself, basically describes a mechanism by which symbolic data references in code (e.g. Java field references) can be resolved dynamically at runtime into actual direct memory accesses, eliminating the symbolic lookup overhead. It's part of standard JIT optimization techniques, and there are a lot of references in this patent to many great JIT patents and papers of the past.
Here there may actually be merit, or as much merit as can be found in a software patent to begin with. The patent itself is tiny, as most of these patents are. The techniques seem obvious to me, but perhaps they're obvious because this patent helped make them standard. I'm not qualified to judge. What I can say is that I can't imagine a VM in existence that doesn't violate the spirit – if not the letter – of this patent as well. All systems with symbolic references will seek to eliminate the symbolic references in favor of direct access. The novelty of this patent may be in doing that translation on the fly...not even at a decidedly coarse-grained per-method level, but by rewriting code while the method is actually executing.
I would guess that this is a patent filed during the development of Java's earlier JIT technologies, before systems like Hotspot came along to do a much better large-scale, cross-method job of optimization. It doesn't seem like it would be hard to debunk the novelty of the patent, or at least show prior art that makes it irrelevant.
Update: I actually found a reference in the article Dalvik Optimization and Verification with dexopt to the technique described here (about 3/4 down the page, under "Optimization"):
"The Dalvik optimizer does the following: ... For instance field get/put, replace the field index with a byte offset. ..."
But Dalvik still does this only once, before running the code (actually, at install time); not *while* running the code as described in the patent.
Interpreting Functions Utilizing A Hybrid Of Virtual And Native Machine Instructions (6,910,205)
This patent, invented by Lars Bak of V8 fame, describes a mechanism for building a "mixed mode" VM that can execute interpreted code and compiled (presumably optimized) code in the same VM process, and flip between the two over time to produce better-optimized compiled code. This describes the basic underpinnings of VMs like Hotspot, which alternate between interpreting virtual machine code and executing real machine code even within the same thread of execution (and sometimes, even branching from virtual code to real code and back within the same method body). Any other VMs that are mixed mode would probably violate this patent, so its impact could reach much farther than Android. (In a sense, even JRuby might violate this patent, though our two mixed modes are both virtual instruction sets.)
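To make the "mixed mode" idea concrete, here's a toy Java sketch of the control flow such a VM uses. It's purely illustrative (the class, threshold, and stand-in methods are all invented); it is not how Hotspot or any real VM is actually implemented:

// Toy illustration of mixed-mode execution: interpret a method until it gets
// "hot", then hand it to a compiler and run the compiled version thereafter.
class MixedModeStub {
    interface CompiledCode { int run(int arg); }

    private static final int HOT_THRESHOLD = 10000;
    private int invocationCount = 0;
    private CompiledCode compiled = null;    // stays null until the method is hot

    int invoke(int arg) {
        if (compiled != null) {
            return compiled.run(arg);        // fast path: optimized/native code
        }
        if (++invocationCount >= HOT_THRESHOLD) {
            compiled = jitCompile();         // switch over once profiling says it's worth it
        }
        return interpret(arg);               // slow path: walk the bytecode
    }

    private int interpret(int arg) {         // stand-in for a bytecode interpreter
        return arg * 2;
    }

    private CompiledCode jitCompile() {      // stand-in for an optimizing compiler
        return new CompiledCode() {
            public int run(int arg) { return arg * 2; }
        };
    }
}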
Now you might think the other mainstream JVMs would violate this patent, but they don't. Neither JRockit nor J9 has an interpreter; they both go immediately to native code with various tiers of instrumentation to do the runtime profile data gathering. They iteratively regenerate native code with successively more and better optimizations. Lars' most recent VM, the V8 JavaScript VM at the heart of Chrome, also goes straight to native code.
Now here's where it gets weird: up until Froyo (Android 2.2), Dalvik did a once-only compilation to native code before anything started executing, which means by definition that it was not mixed-mode. And even in Froyo, I believe it still does its initial execution in native code form with instrumentation to allow subsequent compiles to do a better job. Dalvik does not have an interpreter; Dalvik does not interpret Dalvik bytecode.
Perhaps someone can explain how this patent even applies to Dalvik or Android?
Update: A couple commenters correct me here: Dalvik actually was 100% interpreted before Froyo, and is now a standard mixed-mode environment post-Froyo. So if this suit had been filed a year ago this patent might not have been applicable, but it probably is now.
Method And System for Performing Static Initialization (6,061,520)
Sigh. This patent appears to revolve completely around a mechanism by which the static initialization of arrays could be "play executed" in a preloader and then rewritten to do static initialization in one shot, or at least more efficiently than running dozens of class initializers that just construct arrays and populate them. Of all the patents, this is probably the narrowest, and the mechanisms described are again not very unusual, but there's probably a good chance that the "dex" tool does something along these lines to tidy up static initializers in Android applications.
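To make that concrete, here's the kind of Java code the patent targets (a made-up example):

// A simple static array initializer...
class Lookup {
    static final int[] SQUARES = { 0, 1, 4, 9, 16, 25 };
}
// ...compiles into a hidden <clinit> method that allocates the array and then
// stores every element with its own little bytecode sequence (dup, constant push,
// iastore, repeated per element). The patented idea is to "play execute" that
// initializer ahead of time and replace it with a single bulk copy of the
// precomputed data.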
Given the "preloader" aspect of this patent, I'd surmise that it was formulated in part to simplify static initialization of code on embedded devices or in applet environments (because on servers...the boot time of static initialization is probably of little concern). Because of the much more limited nature of embedded environments (especially in 1998, when this patent was filed) it would be very beneficial to turn programmatic data initialization into a simple copy operation or a specialized virtual machine instruction. And this may be why it could apply to Android; it's another sort of embedded Java, with a preloader (either dex or the dexopt tool that jit-compiles your app on the device) and resource limitations that would warrant optimizing static initialization.
So, Does the Suit Have Merit?
I'll again reiterate that I'm not a lawyer. I'm just a Java developer with a logical mind and a penchant for debunking myths about the Java platform.
The collection of patents specified by the suit seems pretty laughable to me. If I were Google, I wouldn't be particularly worried about showing prior art for the patents in question or demonstrating how Android/Dalvik don't actually violate them. Some, like the "mixed mode" patent, don't actually seem to apply at all. It feels very much like a bunch of Sun engineers got together in a room with a bunch of lawyers and started digging for patents that Google might have violated without actually knowing much about Android or Dalvik to begin with.
But does the suit have merit? It depends if you consider baseless or over-general patents to have merit. The most substantial patent listed here is the "mixed mode" patent, and unless I'm wrong that one doesn't apply. The others are all variations on prior art, usually specialized for a Java runtime environment (and therefore with some question as to whether they can apply to a non-Java runtime environment that happens to have a translator from Java code). Having read through the suit and scanned the patents, I have to say I'm not particularly worried. But then again, I don't know what sort of magic David Boies and company might be able to pull off.
What Might Happen?
In the unlikely event of a total victory by Oracle, there's probably a lot of possible outcomes. I don't see the "death of Java" among them. There's also the possibility that Google could win a convincing victory. What might happen in each case?
The Nuclear Option
The worst-case scenario would be that Android is completely destroyed, all Android handsets are confiscated by the Oracle mafia and burned in the city square, and all hope for a Free and Open Java is forever laid to rest. Long live Mono.
To understand why this won't happen we need to explore Oracle's possible motives.
As I mentioned above, Java ME actually did bring licensing revenue to Sun. There's a lot of handset manufacturers, millions of handsets, and every one put a couple cents (or a couple bucks?) in Sun's pocket. In the heady high times of Java ME, it was the only managed mobile runtime in town, with sound and graphics and standard UI elements. It wasn't always pretty, but it worked well and it was really easy to write for.
Now with Android rapidly becoming the preferred mobile and embedded Java, it's become apparent that there's no future for Java ME - or at least no future in the expanding "smart" consumer electronics business. Java ME lives on in Blackberries, some other low-end phones, in most Blu-Ray devices (BD-J is a standard for writing Java apps that run on Blu-Ray systems, utilizing one of the richer class libraries available for Java ME), and in some sub-micro devices like Ajile's AJ-200 Java-based multimedia CPU. If you want Java on a phone or in your TV, Android is taking that world by storm. That means Java ME licensing revenue is rapidly drying up.
So why wouldn't Oracle want to take a bite of the rapidly-growing Android pie? Would they turn down a portion of that revenue and instead completely destroy a very popular and successful mobile Java, or would they just strongarm a few bucks out of Google and Android handset manufacturers? Remember we're talking about a profit-driven company here. Java ME is never going to come back to smartphones; that much is certain, and I don't think even Oracle could argue otherwise. There's no profit in filing this suit just to kill Android, since that would only mean competing mobile platforms like Windows Phone, RIM, Symbian, or iOS cannibalizing their younger brother. Instead of getting a slice of the fastest-growing segment of Java developers, you'd kill off the entire segment and force those developers onto non-Java, non-Oracle-friendly platforms.
Oracle may be big and evil, but they're not stupid.
Google Licensing Deal
A more likely outcome might be that Google would be forced to license the patents or pay royalties on Android revenue. I honestly believe this is the goal of this lawsuit; Oracle wants to get their foot in the door of the smartphone world, and they know they can't innovate enough to make up for the collapse of Java ME. So they're hoping that by rattling a few patent sabres, Google will be forced (or scared) into sharing the harvest.
Given the contents of the suit and the patents, I think this one is pretty unlikely too. Much of Android's and Dalvik's design was specifically crafted to avoid Java entanglements, and I think it's unlikely that, if this suit goes to trial, Oracle's lawyers will be able to convincingly argue both that the patents are novel and that Google violated them. But let's not put anything past either the lawyers or the US federal court system, eh?
Nothing At All
There's a good chance that either Oracle or the court will realize quickly that the case has no merit and drop it. I'm obviously hoping for this one, but it's likely to take the longest of all. First, the court would need to gather all the facts in the case, which could take months (especially given the highly technical nature of some of the complaints). Then there are the rebuttals of those facts, sorting the wheat from the chaff, deciding there's not enough there to proceed, and either Oracle backs out or the court tosses the case. In the latter case, there's the possibility of appeals, and things could start to get very expensive.
Total Collapse of Software Patents
This is probably one of the highest-profile cases involving software patents in recent years. The other would be Apple's recent suit against HTC for design elements of the iPhone. Several other bloggers and analysts have called out the possibility that this could lead to the death of software patents in general. I think that's a bit optimistic, but both Google *and* Oracle have come down officially against patents in the past (though perhaps Oracle's had a change of heart since acquiring Sun's portfolio).
As much as I'd like to see it happen, software patents probably won't be dead in the next year or two. But this might be a nail in the coffin.
What Does This Mean for Java?
Now we come to the biggest question of all: how does this suit affect the Java world, regardless of outcome?
Well it's obviously not great to have two Java heavyweights bickering like schoolchildren, and it would be positively devastating if Android were obliterated because of this. But I think the real damage will be in how the developer community perceives Java, rather than in any lasting impact on the platform itself.
Let's return to some of our facts. First off, nothing in this suit would apply to any of the three mainstream JVMs that 99% of the world's Java runs on. Hotspot and JRockit are both owned by Oracle, and J9 is subject to the Java specification's patent grant for compliant implementations. The lesson here is that Android is the first Java-like environment since Microsoft's J++ to attempt to unilaterally subset or superset the platform (with the difference in Android's case being that it doesn't claim to be a Java environment, and it may not actually need the patent grant). Other Java implementations that "follow the Rules" are in the clear, and so 99% of the world's use of Java is in the clear. Sorry, Java haters...this isn't your moment.
This certainly does some damage to the notion of open-source Java implementations, but only those that are not (or can not be) compliant with the specification. As the Apache Harmony folks know all too well, it's really hard to build a clean-room implementation of Java and expect to get the "spec compliance patent grant" if you don't actually have the tools necessary to show spec compliance. Tossing the code over to Sun to run compliance testing is a nonstarter; the actual test kit is enormous and requires a huge time investment to set up and run (and Sun/Oracle have better things to do with their time than help out a competing OSS Java implementation). If the test kit had been open-sourced before Sun foundered, there would be no problem; everyone that wanted to make an open-source java would just aim for 100% compliance with the spec and all would be well. As it stands, independently implemented (i.e. non-OpenJDK) open-source Java is a really hard thing to create, especially if you have to clean-room implement all the class libraries yourself. Android has neatly dodged this issue by letting Android just be what it is: a subset of a Java-like platform that doesn't actually run Java bytecode and doesn't use any code from OpenJDK.
How will it affect Android if this case drags on? It could certainly hurt Android's adoption by hardware manufacturers, but they're already getting such an outstanding deal on the platform that they might not even care. Android is the first platform that has the potential to unify all hardware profiles, freeing manufacturers from the drudgery of building their own OSes or licensing OSes from someone else. Hell, HTC rose from zero to Hero largely because of their backing of Android and shipping of Android devices. Are they going to back off from that platform now just because Oracle's throwing lawyerbombs at Google? Probably not.
What Does This Mean For You?
If you're a non-Android Java developer...don't lose sleep over this. The details are going to take months to play out, and regardless of the outcome you're probably not going to be affected. Be happy, do great things, and keep making the Java platform a better place.
If you're an Android developer...don't lose sleep over this. Even if things go the way of the "Nuclear Option", you've still got a lot of time to build and sell apps and improve yourself as a developer. For a bit of novelty, start considering what a migration path might look like and turn that into a nice Android-agnostic application layer, something that's largely lacking in the current Android APIs. Or explore Android development in languages like JRuby, which are based on off-platform ecosystems that will survive regardless of Android's fate. Whatever you do, don't panic and run for the hills, and don't tell your friends to panic.
If you're mad as hell about this...I sympathize. I'm personally going to do whatever I can to keep people informed and keep pushing Android, including but not limited to writing 8000-word essays with my moderately-educated analysis of the "facts". I welcome your help in that fight, and I think it's a damn good time for people that want an open Java and an open mobile platform to show their quality by standing up and letting the world know we're here.
"All that is necessary for the triumph of evil is for good men to do nothing."
Do something, and we'll get through this together.
Footnote: Java Copyrights
I'd love for someone versed in copyright law to provide a brief analysis of how the Java copyrights described (vaguely) in the lawsuit might play out in Android. Java is certainly not ignored as a concept in Android docs, tools, and libraries, but it's unclear to me whether those copyrights amount to something enforceable when it comes to Android or Dalvik.
Update: "Crazy" Bob Lee emailed me to clear up a few facts. First off, Android and OpenJDK first came out around roughly the same time, so there was never really time to consider using OpenJDK's GPL'ed class libraries in Android. Bob also claims that Dalvik's design decisions were all technical and not made to circumvent IP, but it seems impossible to me that IP, patent, and licensing issues didn't have *some* influence on those decisions. He goes on to say that Android relies on process separation to sandbox applications, rather than leveraging Java security policies (or similar mechanisms (which Bob insists are badly designed anyway, and I might agree). Finally, he believes that in the worst case scenario, Dalvik would probably only require minor modifications to address the complaints in this suit. The "nuclear option" is, according to Bob, out of the realm of possibility.
Thanks for the clarifications, Bob!
What JRuby C Extension Support Means to You
As part of the Ruby Summer of Code, Tim Felgentreff has been building out C extension support for JRuby. He's already made great progress, with simple libraries like Thin and Mongrel working now and larger libraries like RMagick and Yajl starting to function. And we haven't even reached the mid-term evaluation yet. I'd say he gets an "A" so far.
I figured it was time I talked a bit about C extensions, what they mean (or don't mean) for JRuby, and how you can help.
The Promise of C Extensions
One of the "last mile" features keeping people from migrating to JRuby has been their dependence on C extensions that only work on regular Ruby. In some cases, these extensions have been written to improve performance, like the various json libraries. Some of that performance could be less of a concern under Ruby 1.9, but it's hard to claim that any implementation will be able to run Ruby as fast as C for general-purpose libraries any time soon.
However, a large number of extensions – perhaps a majority of extensions – exist only to wrap a well-known and well-trusted C library. Nokogiri, for example, wraps the excellent libxml. RMagick wraps ImageMagick. For these cases, there's no alternative on regular Ruby...it's the C library or nothing (or in the case of Nokogiri, your alternatives are only slow and buggy pure-Ruby XML libraries).
For the performance case, C extensions on JRuby don't mean a whole lot. In most cases, it would be easier and just as performant to write that code in Java, and many pure-Ruby libraries perform well enough to reduce the need for native code. In addition, there are often libraries that already do what the perf-driven extensions were written for, and it's trivial to just call those libraries directly from Ruby code.
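For example, here's a minimal JRuby sketch (the SHA-256 hashing use case is just an illustration) that calls the JDK's built-in MessageDigest rather than reaching for a native digest extension:

# Call an existing Java library directly from Ruby code under JRuby.
require 'java'

digest = java.security.MessageDigest.get_instance("SHA-256")
bytes  = digest.digest("hello, world".to_java_bytes)

# Java bytes are signed, so mask them before formatting as hex.
puts bytes.to_a.map { |b| "%02x" % (b & 0xff) }.join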
But the library case is a bit stickier. Nokogiri does have an FFI version, but it's a maintenance headache for them and a bug report headache for us, due to the lack of a C compiler tying the two halves together. There's a pure-Java Nokogiri in progress, but building both the Ruby bindings and emulating libxml behavior takes a long time to get right. For libraries like RMagick or the native MySQL and SQLite drivers, there are basically no options on the JVM. The Google Summer of Code project RMagick4J, by Sergio Arbeo, was a monumental effort that still has a lot of work left to be done. JDBC libraries work for databases, but they provide a very different interface from the native drivers and don't support things like UNIX domain sockets.
There's a very good chance that JRuby C extension support won't perform as well as C extensions on C Ruby, but in many cases that won't matter. Where there's no equivalent library now, having something that's only 5-10x slower to call – but still runs fast and matches API – may be just fine. Think about the coarse-grained operations you feed to a MySQL or SQLite and you get the picture.
So ultimately, I think C extensions will be a good thing for JRuby, even if they only serve as a stopgap measure to help people migrate small applications over to native Java equivalents. Why should the end goal be native Java equivalents, you ask?
The Peril of C Extensions
Now that we're done with the happy, glowing discussion of how great C extension support will be, I can make a confession: I hate C extensions. No feature of C Ruby has done more to hold it back than the desire for backward compatibility with C extensions. Because they have direct pointer access, there's no easy way to build a better garbage collector or easily support multiple runtimes in the same VM, even though various research efforts have tried. I've talked with Koichi Sasada, the creator of Ruby 1.9's "YARV" VM, and there's many things he would have liked to do with YARV that he couldn't because of C extension backward compatibility.
For JRuby, supporting C extensions will limit many features that make JRuby compelling in the first place. For example, because C extensions often use a lot of global variables, you can't use them from multiple JRuby runtimes in the same process. Because they expect a Ruby-like threading model, we need to restrict concurrency when calling out from Java to C. And all the great memory tooling I've blogged about recently won't see C extensions or the libraries they call, so it introduces an unknown.
All that said, I think it's a good milestone to show that we can support C extensions, and it may make for a "better JNI" for people who really just want to write C or who simply need to wrap a native library.
How You Can Help
There are a few things I think users like you can help with.
First off, we'd love to know what extensions you are using today, so we can explore what it would take to run them under JRuby (and so we can start exploring pure-Java alternatives, too.) Post your list in the comments, and we'll see what we can come up with.
Second, anyone that knows C and the Ruby C API (like folks who work on extensions) could help us fill out bits and pieces that are missing. Set up the JRuby cext branch (I'll show you how in a moment), and try to get your extensions to build and load. Tim has already done the heavy lifting of making "gem install xyz" attempt to build the extension and "require 'xyz'" try to load the resulting native library, so you can follow the usual processes (including extconf.rb/mkmf.rb for non-gem building and testing.) If it doesn't build ok, help us figure out what's missing or incorrect. If it builds but doesn't run, help us figure out what it's doing incorrectly.
Building JRuby with C Extension Support
Like building JRuby proper, building the cext work is probably the easiest thing you'll do all day (assuming the C compiler/build/toolchain doesn't bite you).
- Check out (or fork and check out) the JRuby repository from http://github.com/jruby/jruby:
git clone git://github.com/jruby/jruby.git
- Switch to the "cext" branch:
git checkout -b cext origin/cext
- Do a clean build of JRuby plus the cext subsystem:
ant clean build-jruby-cext-native
At this point you should have a JRuby build (run with bin/jruby) that can gem install and load native extensions.
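Once you're there, pick a gem with a C extension and give it a spin. Something along these lines (the json gem is just an example; substitute whatever extension you actually care about) exercises both the build path and the load path Tim wired up:
bin/jruby -S gem install json
bin/jruby -e "require 'json'; puts JSON.generate('works' => true)"
If either step blows up, that's exactly the kind of report we want to see against the cext branch.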
Saturday, July 17, 2010
Browsing Memory with Ruby and Java Debug Interface
This is the third post in a series. The first two were on Browsing Memory the JRuby Way and Finding Leaks in Ruby Apps with Eclipse Memory Analyzer.
Hello again, friends! I'm back with more exciting memory analysis tips and tricks! Ready? Here we go!
After my previous two posts, several folks asked if it's possible to do all this stuff from Ruby, rather than using Java or C-based apps shipped with the JVM. The answer is yes! Because of the maturity of the Java platform, there are standard Java APIs you can use to access all the same information the previous tools consumed. And since we're talking about JRuby, that means you have Ruby APIs you can use to access that information.
That's what I'm going to show you today.
Introducing JDI
The APIs we'll be using are part of the Java Debug Interface (JDI), a set of Java APIs for remotely inspecting a running application. It's part of the Java Platform Debugger Architecture, which also includes a C/++ API, a wire protocol, and a raw wire protocol API. Exploring those is left as an exercise for the reader...but they're also pretty cool.
We'll use the Rails app from before, inspecting it immediately after boot. JDI provides a number of ways to connect up to a running VM, using VirtualMachineManager; you can either have the debugger make the connection or the target VM make the connection, and optionally have the target VM launch the debugger or the debugger launch the target VM. For our example, we'll have the debugger attach to a target VM listening for connections.
Preparing the Target VM
The first step is to start up the application with the appropriate debugger endpoint installed. This new flag is a bit of a mouthful (and we should make a standard flag for JRuby users), but we're simply setting up a socket-based listener on port 12345, running as a server, and we don't want to suspend the JVM when the debugger connects.
jruby -J-agentlib:jdwp=transport=dt_socket,server=y,address=12345,suspend=n -J-Djruby.reify.classes=true script/server -e production
The -J-Djruby.reify.classes bit I talked about in my first post. It makes Ruby classes show up as Java classes for purposes of heap inspection.
The rest is just running the server in production mode.
As you can see, remote debugging is already baked into the JVM, which means we didn't have to write it or debug it. And that's pretty awesome.
Let's connect to our Rails process and see what we can do.
Connecting to the target VM
In order to connect to the target VM, you need to do the Java factory dance. We start with the com.sun.jdi.Bootstrap class, get a com.sun.jdi.VirtualMachineManager, and then connect to a target VM to get a com.sun.jdi.VirtualMachine object.
vmm = Bootstrap.virtual_machine_manager
sock_conn = vmm.attaching_connectors[0] # not guaranteed to be Socket
args = sock_conn.default_arguments
args['hostname'].value = "localhost"
args['port'].value = "12345"
vm = sock_conn.attach(args)
Notice that I didn't dig out the socket connector explicitly here, because on my system, the first connector always appears to be the socket connector. Here's the full list for me on OS X:
➔ jruby -rjava -e "puts com.sun.jdi.Bootstrap.virtual_machine_manager.attaching_connectors
> "
[com.sun.jdi.SocketAttach (defaults: timeout=, hostname=charles-nutters-macbook-pro.local, port=),
com.sun.jdi.ProcessAttach (defaults: pid=, timeout=)]
The ProcessAttach connector there isn't as magical as it looks; all it does is query the target process to find out what transport it's using (dt_socket in our case) and then calls the right connector (e.g. SocketAttach in the case of dt_socket or SharedMemoryAttach if you use dt_shmem on Windows). In our case, we know it's listening on a socket, so we're using the SocketAttach connector directly.
The rest is pretty simple: we get the default arguments from the connector, twiddle them to have the right hostname and port number, and attach to the VM. Now we have a VirtualMachine object we can query and twiddle; we're inside the matrix.
With Great Power...
So, what can we do with this VirtualMachine object? We can:
- walk all classes and objects on the heap
- install breakpoints and step-debug any running code
- inspect and modify the current state of any running thread, even manipulating in-flight arguments and variables
- replace already-loaded classes with new definitions (such as to install custom instrumentation)
➔ ri --java com.sun.jdi.VirtualMachine
-------------------------------------- Class: com.sun.jdi.VirtualMachine
(no description...)
------------------------------------------------------------------------
Instance methods:
-----------------
allClasses, allThreads, canAddMethod, canBeModified,
canForceEarlyReturn, canGetBytecodes, canGetClassFileVersion,
canGetConstantPool, canGetCurrentContendedMonitor,
canGetInstanceInfo, canGetMethodReturnValues,
canGetMonitorFrameInfo, canGetMonitorInfo, canGetOwnedMonitorInfo,
canGetSourceDebugExtension, canGetSyntheticAttribute, canPopFrames,
canRedefineClasses, canRequestMonitorEvents,
canRequestVMDeathEvent, canUnrestrictedlyRedefineClasses,
canUseInstanceFilters, canUseSourceNameFilters,
canWatchFieldAccess, canWatchFieldModification, classesByName,
description, dispose, eventQueue, eventRequestManager, exit,
getDefaultStratum, instanceCounts, mirrorOf, mirrorOfVoid, name,
process, redefineClasses, resume, setDebugTraceMode,
setDefaultStratum, suspend, toString, topLevelThreadGroups,
version, virtualMachine
We can basically make the target VM dance any way we want, even going so far as to write our own debugger entirely in Ruby code. But that's a topic for another day. Right now, we're going to do some memory inspection.
Creating a Histogram of the Heap
The simplest heap inspection we might do is to produce a histogram of all objects on the heap. And as you might expect, this is one of the easiest things to do, because it's the first thing everyone looks for when debugging a memory issue.
classes = VM.all_classes
counts = VM.instance_counts(classes)
classes.zip(counts)
VirtualMachine.all_classes gives you a list (a java.util.List, but we make those behave mostly like a Ruby Array) of every class the JVM has loaded, including Ruby classes, JRuby core and runtime classes, and other Java classes that JRuby and the JVM use. VirtualMachine.instance_counts takes that list of classes and returns another list of instance counts. Zip the two together, and we have an array of classes and instance counts. So easy!
Let's take these two pieces and put them together in an easy-to-use class:
require 'java'

module JRuby
  class Debugger
    VMM = com.sun.jdi.Bootstrap.virtual_machine_manager

    attr_accessor :vm

    def initialize(options = {})
      connectors = VMM.attaching_connectors
      if options[:port]
        connector = connectors.find {|ac| ac.name =~ /Socket/}
      elsif options[:pid]
        connector = connectors.find {|ac| ac.name =~ /Process/}
      end

      args = connector.default_arguments
      for k, v in options
        args[k.to_s].value = v.to_s
      end

      @vm = connector.attach(args)
    end

    # Generate a histogram of all classes in the system
    def histogram
      classes = @vm.all_classes
      counts = @vm.instance_counts(classes)
      classes.zip(counts)
    end
  end
end
I've taken the liberty of expanding the connection process to handle pids and other arguments passed in. So to get a histogram from a VM listening on localhost port 12345, we can simply do:
JRuby::Debugger.new(:hostname => 'localhost', :port => 12345).histogram
Now of course this list is going to have a lot of JRuby and Java objects that we might not be interested in, so we'll want to filter it to just the Ruby classes. On JRuby master, all the generated Ruby classes start with a package name "ruby". Unfortunately, jitted Ruby methods start with a package of "ruby.jit" right now, so we'll want to filter those out too (unless you're interested in them, of course...JRuby is an open book!)
require 'jruby_debugger'
# connect to the VM
debugr = JRuby::Debugger.new(:hostname => 'localhost', :port => 12345)
histo = debugr.histogram
# sort by count
histo.sort! {|a,b| b[1] <=> a[1]}
# filter to only user-created Ruby classes with >0 instances
histo.each do |cls, num|
  next if num == 0 || cls.name[0..4] != 'ruby.' || cls.name[5..7] == 'jit'
  puts "#{num} instances of #{cls.name[5..-1].gsub('.', '::')}"
end
If we run this short script against our Rails application, we see similar results to the previous posts (but it's cooler, because we're doing it all from Ruby!)
➔ jruby ruby_histogram.rb | head -10
11685 instances of TZInfo::TimezoneTransitionInfo
1071 instances of Gem::Version
1012 instances of Gem::Requirement
592 instances of TZInfo::TimezoneOffsetInfo
432 instances of Gem::Dependency
289 instances of Gem::Specification
142 instances of ActiveSupport::TimeZone
118 instances of TZInfo::DataTimezoneInfo
118 instances of TZInfo::DataTimezone
45 instances of Gem::Platform
Just so we're all on the same page, it's important to know what we're actually dealing with here. VirtualMachine.all_classes returns a list of com.sun.jdi.ReferenceType objects. Let's ri that.
➔ ri --java com.sun.jdi.ReferenceType
--------------------------------------- Class: com.sun.jdi.ReferenceType
(no description...)
------------------------------------------------------------------------
Instance methods:
-----------------
allFields, allLineLocations, allMethods, availableStrata,
classLoader, classObject, compareTo, constantPool,
constantPoolCount, defaultStratum, equals, failedToInitialize,
fieldByName, fields, genericSignature, getValue, getValues,
hashCode, instances, isAbstract, isFinal, isInitialized,
isPackagePrivate, isPrepared, isPrivate, isProtected, isPublic,
isStatic, isVerified, locationsOfLine, majorVersion, methods,
methodsByName, minorVersion, modifiers, name, nestedTypes,
signature, sourceDebugExtension, sourceName, sourceNames,
sourcePaths, toString, virtualMachine, visibleFields,
visibleMethods
You can see there's quite a bit more you can do with a ReferenceType. Let's try something.
Digging Deeper Into TimezoneTransitionInfo
Let's actually take some time to explore our old friend TimezoneTransitionInfo (hereafter referred to as TTI). Instead of walking all classes in the system, we'll want to just grab TTI directly. For that we use VirtualMachine.classes_by_name, which returns a list of classes on the target VM of that name. There should be only one, since we only have a single JRuby instance in our server, so we'll grab that class and request exactly one instance of it...any old instance.
tti_class = debugr.vm.classes_by_name('ruby.TZInfo.TimezoneTransitionInfo')[0]
tti_obj = tti_class.instances(1)[0]
puts tti_obj
Running this we can see we've got the reference we're looking for.
➔ jruby tti_digger.rb
instance of ruby.TZInfo.TimezoneTransitionInfo(id=2)
ReferenceType.instances returns a list (no larger than the specified size, or all instances if you specify 0) of com.sun.jdi.ObjectReference objects.
➔ ri --java com.sun.jdi.ObjectReference
------------------------------------- Class: com.sun.jdi.ObjectReference
(no description...)
------------------------------------------------------------------------
Instance methods:
-----------------
disableCollection, enableCollection, entryCount, equals, getValue,
getValues, hashCode, invokeMethod, isCollected, owningThread,
referenceType, referringObjects, setValue, toString, type,
uniqueID, virtualMachine, waitingThreads
Among the weirder things like disabling garbage collection for this object or listing all threads waiting on this object's monitor (a la 'synchronize' in Java), we can access the object's fields through getValue and setValue.
Let's examine the instance variables TTI contains. You may recall from previous posts that all Ruby objects in JRuby store their instance variables in an array, to avoid the large memory and cpu cost of storing them in a map. We can grab a reference to that array and display its contents.
var_table_field = tti_class.field_by_name('varTable')
tti_vars = tti_obj.get_value(var_table_field)
puts "varTable: #{tti_vars}"
puts tti_vars.values.map(&:to_s)
And the new output:
➔ jruby tti_digger.rb
varTable: instance of java.lang.Object[7] (id=13)
instance of ruby.TZInfo.TimezoneOffsetInfo(id=15)
instance of ruby.TZInfo.TimezoneOffsetInfo(id=16)
instance of org.jruby.RubyFixnum(id=17)
instance of org.jruby.RubyFixnum(id=18)
instance of org.jruby.RubyNil(id=19)
instance of org.jruby.RubyNil(id=19)
instance of org.jruby.RubyNil(id=19)
Since the varTable field is a simple Object[] in Java, the reference we get to it is of type com.sun.jdi.ArrayReference.
➔ ri --java com.sun.jdi.ArrayReference
-------------------------------------- Class: com.sun.jdi.ArrayReference
(no description...)
------------------------------------------------------------------------
Instance methods:
-----------------
disableCollection, enableCollection, entryCount, equals, getValue,
getValues, hashCode, invokeMethod, isCollected, length,
owningThread, referenceType, referringObjects, setValue, setValues,
toString, type, uniqueID, virtualMachine, waitingThreads
Of course each of these references can be further explored, but already we can see that this TTI instance has seven instance variables: two TimezoneOffsetInfo objects, two Fixnums, and three nils. But we don't have instance variable names!
Instance variable names are only stored on the object's class. There, a table of names to offsets is kept up-to-date as new instance variable names are discovered. We can access this from the TTI class reference and combine it with the variable table to get the output we want to see.
# get the metaclass object and class reference
metaclass_field = tti_class.field_by_name('metaClass')
tti_class_obj = tti_obj.get_value(metaclass_field)
tti_class_class = tti_class_obj.reference_type
# get the variable names from the metaclass object
var_names_field = tti_class_class.field_by_name('variableNames')
var_names = tti_class_obj.get_value(var_names_field)
# splice the names and values together
table = var_names.values.zip(tti_vars.values)
puts table
This looks a bit complicated, but there's actually a lot of boilerplate here we could put into a utility class. For example, the metaClass and variableNames fields are standard on all (J)Ruby objects and classes, respectively. But considering that we're actually walking a remote VM's *live* heap...this is pretty simple code.
Here's what our script outputs now:
➔ jruby tti_digger.rb
"@offset"
instance of ruby.TZInfo.TimezoneOffsetInfo(id=25)
"@previous_offset"
instance of ruby.TZInfo.TimezoneOffsetInfo(id=26)
"@numerator_or_time"
instance of org.jruby.RubyFixnum(id=27)
"@denominator"
instance of org.jruby.RubyFixnum(id=28)
"@at"
instance of org.jruby.RubyNil(id=29)
"@local_end"
instance of org.jruby.RubyNil(id=29)
"@local_start"
instance of org.jruby.RubyNil(id=29)
We could go even deeper, but I think you get the idea.
Your Turn
Here's a gist of the three scripts we've created, so you can refer to and build off of them. And of course the javadocs and ri docs will help you as well, plus everything we've done here you can do in a jirb session.
There's a lot to the JDI API, but once you've got the VirtualMachine object in hand it's pretty easy to follow. As you'd expect from any debugger API, you need to know a bit about how things work on the inside, but through the magic of JRuby it's actually possible to write most of those fancy memory and debugging tools entirely in Ruby. Perhaps this article has piqued your interest in exploring JRuby internals using JDI, and you might start to write debugging tools of your own. Perhaps we can ship a few utilities to make some of the boilerplate go away. In any case, I hope this series of articles shows that JRuby users have an amazing library of tools available to them, and you don't even have to leave your comfort zone if you don't want to.
Note: The variableNames field is a recent addition to JRuby master, so if you'd like to play with that you'll probably want to build JRuby yourself or wait for a nightly build that picks it up. But you can certainly do a lot of exploring even without that patch.
Monday, July 12, 2010
Finding Leaks in Ruby Apps with Eclipse Memory Analyzer
After my post on Browsing Memory the JRuby Way, one commenter and several other folks suggested I actually show using Eclipse MAT with JRuby. So without further ado...
The Eclipse Memory Analyzer, like many Eclipse-based applications, starts up with a "for dummies" page linking to various actions.

The most interesting use of MAT is to analyze a heap dump in a bit more interactive way than with the "jhat" tool. The analysis supports the "jmap" dump format, so we'll proceed to get a jmap dump of a "leaky" Rails application.
I've added this controller to a simple application:
class LeakyController < ApplicationController
  class MyData
    def initialize(params)
      @params = params
    end
  end

  LEAKING_ARRAY = {}

  def index
    LEAKING_ARRAY[Time.now] = MyData.new(params)
    render :text => "There are #{LEAKING_ARRAY.size} elements now!"
  end
end
Some genius has decided to save all recent request parameters into a constant on the LeakyController, keyed by time, wrapped in a custom type, and never cleaned out. Perhaps this was done temporarily for debugging, or perhaps we have a moron on staff. Either way, we need to find this problem and fix it.
We'll run this application and crank 10000 requests through the /leaky index, so the final request should output "There are 10000 elements now!"
~ ➔ ab -n 10000 http://localhost:3000/leaky
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 1000 requests
Completed 2000 requests
...
After 10000 requests have completed, we notice this application seems to grow and grow until it maxes out the heap (JRuby, being on the JVM, automatically limits heap sizes for you). Let's start by using jmap to investigate the problem.
~ ➔ jps -l
61976 org/jruby/Main
61999 sun.tools.jps.Jps
61837
~ ➔ jmap -histo 61976 | grep " ruby\." | head -5
37: 11685 280440 ruby.TZInfo.TimezoneTransitionInfo
40: 10000 240000 ruby.LeakyController.MyData
133: 970 23280 ruby.Gem.Version
137: 914 21936 ruby.Gem.Requirement
170: 592 14208 ruby.TZInfo.TimezoneOffsetInfo
We can see our old friend TimezoneTransitionInfo in there, but of course we've learned to accept that one. But what's this LeakyController::MyData object we've apparently got 10000 instances of? Where are they coming from? Who's holding on to them?
At this point, we can proceed to get a memory dump and move over to MAT, or have MAT acquire and open the dump in one shot, similar to VisualVM. Let's have MAT do it for us.
Getting Our Heap Into MAT
(Caveat: While preparing this post, I discovered that the jmap tool for the current OS X Java 6 (build 1.6.0_20-b02-279-10M3065) is not properly dumping all information. As a result, many fields and objects don't show up in dump analysis tools like MAT. Fortunately, there's a way out; on OS X, you can grab Soylatte or OpenJDK builds from various sources that work properly. In my case, I'm using a local build of OpenJDK 7.)
From the File menu, we select Acquire Heap Dump.

The resulting dialog should be familiar, since it lists the same JVM processes the "jps" command listed above. (If you had to specify a specific JDK home, like me, you'll need to click the "Configure" button and set the "jdkhome" flag for "HPROF jmap dump provider".)

We'll pick our Rails instance (pid 61976) and proceed.
MAT connects to the process, pulls a heap dump to disk, and immediately proceeds to parse and open it.

Once it has completed parsing, we're presented with a few different paths to follow.

On other days, we might be interested in doing some component-by-component browsing to look for fat objects or minor leaks, or we might want to revisit the results of previous analyses against this heap. But today, we really need to figure out this MyData leak, so we'll run the Leak Suspects Report.
Leak Suspects?

Are you kidding? A tool that can search out and report possible leaks in a running system? Yes, Virginia, there is a Santa Claus!
This is the good side of the "plague of choices" we have on the JVM. Because there's so many tools for almost every basic purpose (like the dozen – at least – memory inspection tools), tool developers have moved on to more specific needs like leak analysis. MAT is my favorite tool for leak-hunting (and it uses less memory than jhat for heap-browsing, which is great for larger dumps).
Once MAT has finished chewing on our heap, it presents a pie chart of possible leak suspects. The logic used essentially seeks out data structures whose accumulated size is large in comparison to the rest of the heap. In this case, MAT has identified three suspects that in total comprise over half of the live heap data.

Scrolling down we start to get details about these leak candidates.

So there's a Hash, a Module, and 711 Class objects in our list of suspects. The Class objects are probably just loaded classes, since the JRuby core classes and additional classes loaded from Rails and its dependent libraries will easily number in the hundreds. We'll ignore that one for now. There's also an unusually large Module taking up almost 4MB of memory. We'll come back to that.
The Hash seems like the most likely candidate. Let's expand that.
The first new block of information gives us a list of "shortest paths" to the "accumulation point", or the point at which all this potentially-leaking data is gathering. There's more to this in the actual application, but I'm showing the top of the "path" here.

At the top of this list, we see the RubyHash object originally reported as a suspect, and a tree of objects that lead to it. In this case, we go from the Hash itself into a ConcurrentHashMap (note that we're hiding nothing here; you can literally browse anything in memory) which in turn is referenced by the "constants" field of a Class. So already we know that this hash is being referenced in some class's constant table. Pretty cool, eh?
Let's make sure we've got the right Hash and not some harmless data structure inside Rails. If we scroll down a bit more, we see a listing of all the objects this Hash has accumulated. Let's see what's in there.

Ok, so it's a hashtable structure with a table of entries. Can we get more out of this?
Of course like most of these tools, just about everything is clickable. We can dive into one of the hash entries and see what's in there. Clicking on an entry gives us several new ways to display the downstream objects we've managed to aggregate. In this case, we'll just do "List Objects", and the suboption "With Outgoing References" for downstream data.

Now finally in the resulting view of this particular RubyHashEntry, we can see that our MyData object is happily tucked away inside.

Ok, so we definitely have the right data structure. Not only that, but we can see that the entry's "key" is a Time object (org.jruby.RubyTime). Let's go back to the "Shortest Paths" view and examine the ConcurrentHashMap entry that's holding this Hash object. Each entry in this hash maps a constant name to a value, so we should be able to see which constant is holding the leaking references.
(At this point you'll see the side effects of my switch to OpenJDK 7; the memory addresses have changed, but the structure is the same.)

We'll do another "List Objects" "with outgoing references" against the HashEntry object immediately referencing our RubyHash.

And there it is! In the "key" field of the HashEntry, we see our constant name "LEAKING_ARRAY".
What About That Module?
Oh yeah, what about that Module that showed up in the leak suspects? It was responsible for almost 4MB of the heap. Let's go back and check it out.

A-ha! Eclipse MAT has flagged the Gem module as being a potential leak suspect. But why? Let's go back to the suspect report and look at the Accumulated Objects by Class table, toward the bottom.

Ok, so the Gem module eventually references nearly 6000 Gem::Specification objects, which makes up the bulk of our 3.8MB. I guarantee I don't have 6000 gem versions installed. Perhaps that's something that RubyGems should endeavor to fix? Perhaps we've just used JRuby and Eclipse MAT to discover either a leak or wasteful memory use in RubyGems?
Evan Phoenix pointed out that I misread the columns. It's actually 249 Specification objects, their "self" size is almost 6000 bytes, and their "retained" size is 3.8MB. But that gives me an opportunity to show off another feature of MAT: Customized Retained Set calculation.
In this case, the retained size seems a bit suspect. Could there really be 3.8MB of data kept alive by Gem::Specification objects? It seems like a bit much, to be sure, but digging through the tree of references from the Gem module down shows there's several references to classes and modules, which in turn reference constant tables, method tables, and so on. How can we filter out that extra noise?
First we'll return to the view of the Gem module (two screenshots up) by going back to leak suspect #2, expanding "Shortest Paths". The topmost RubyModule in that list is the Gem module, so we're all set to calculate a Customized Retained Set.

The resulting dialog provides a list of options through which you can specify classes or fields to ignore when calculating the retained set from a given starting point. In this case, it's simple enough to filter out org.jruby.RubyClass and org.jruby.RubyModule, so that references from Gem::Specification back into the class/module hierarchy don't get included in calculations.

Which results in a similar view to those we've seen, but with objects sorted by retained heap.

Well what the heck? It looks like it's all String data?
JRuby's String implementation is an org.jruby.RubyString object, aggregating an org.jruby.util.ByteList object, aggregating a byte array, so the top three entries there in total are essentially all String memory. The best way to investigate where they're coming from is to do "List Objects" on RubyString, but instead of "with outgoing references" we'll use "with incoming references" to show where all those Strings are coming from.

Finally we have a view that lets us hunt through all these strings and see where they're coming from. Poking at the first few shows they're stored in constant tables of the Gem module (that last RubyModule I haven't expanded in). That's probably not a big deal. But if we sort the list of RubyString objects by their retained sizes, we get a different picture of the system.

If we dig into the *largest* String objects, they appear to be referenced by Gem::Specification instance variables! So there's probably something worth investigating here.
It's also worth noting that any Ruby application is going to have a lot of Strings in it, so this isn't all that unusual to see. But it's nice to have a tool that lets you investigate potential inefficiencies (even down to the raw bytes!), and it's nice to know that at least some of that retained data for the Gem module is "real" and not just references back into the class hierarchy.
(And I'm not convinced all those Strings really *need* to be alive...but you're welcome to take it from here!)
Your Turn
Eclipse MAT is probably one of the nicest of the free tools. In addition to object browsing, leak detection, GC root analysis, and object query language support, there's a ton of other features, both in the main distribution and available from third parties. If you're hunting for memory leaks, or just want to investigate the memory usage of your (J)Ruby application, MAT is a tool worth playing with (and as always, I hope you will blog and report your experiences!)
Thursday, July 8, 2010
Browsing Memory the JRuby Way
There's been a lot of fuss made lately over memory inspection and profiling tools for Ruby implementations. And it's not without reason; inspecting a Ruby application's memory profile, much less diagnosing problems, has traditionally been very difficult. At least, difficult if you don't use JRuby.
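For a test subject, picture a trivial script that allocates a pile of objects and then hangs around so we can poke at it. A minimal sketch (call it foo_test.rb; the class name Foo and the count of 10000 match the histograms we'll look at below):
class Foo
end

foos = []
10000.times { foos << Foo.new }
puts "created #{foos.size} Foo objects; hit Ctrl-C when you're done inspecting"
sleep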
So we have our test subject ready to go. To use the jmap tool, we need the pid of this process. Of course we can use the usual shell tricks to get it, but the JDK comes with a nice tool for finding all JVM pids active on the system: jps
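On my machine that looks something like this (the other pids and main class names will vary; the GlassFish entry shown here is illustrative):
➔ jps -l
52857 org/jruby/Main
52990 sun.tools.jps.Jps
51200 com.sun.enterprise.glassfish.bootstrap.ASMain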
From this, you can see I have three JVMs running on my system right now: jps itself; our JRuby instance; and a GlassFish server I used for testing earlier today. We're interested in the JRuby instance, pid 52857. Let's see what jmap can do with that.
The simplest option here is -histo, to print out a histogram of the objects on the heap. Let's run that against our JRuby instance.
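The invocation couldn't be much simpler (the full listing is long, so I've elided it here):
➔ jmap -histo 52857
 num     #instances         #bytes  class name
----------------------------------------------
...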
The resulting output is a listing of literally every object in the system...not just Ruby objects even! The value of this should be apparent; not only can you start to investigate the memory overhead of code you've written, you'll also be able to investigate the memory overhead of every library and every piece of code running in the same process, right down to byte arrays (the "[B" above) and "native" Java strings ("java.lang.String" above). And so far we haven't had to do anything special to JRuby. Nice, eh?
So, back to the matter at hand: the Foo class from our example. Where is it?
Well, the answer is that it's right there; 10000 of those 10002 org.jruby.RubyObject instances are our Foo objects; the other two are probably objects constructed for JRuby runtime purposes. But obviously, there's nothing in this output that tells us how to find our Foo instances. This is what I'm remedying in JRuby 1.6.
On JRuby master, there's now a flag you can pass that will stand up a JVM class for every user-created Ruby class. Among the many benefits of doing this, we also get a more useful profile. Let's see how to use the flag (which will either be default or very easy to access by the time we release JRuby 1.6).
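Running the test subject with the flag turned on looks like this (foo_test.rb being our hypothetical test script from above):
jruby -J-Djruby.reify.classes=true foo_test.rb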
If we run jmap against this new instance, we see a more interesting result.
A-ha! There's our Foo instances! The "reify classes" option generates a JVM class of the same name as the Ruby class, prefixed by "ruby." to separate it from other JVM classes. Now we can start to see the real power of the tools, and we're just at the beginning. Let's see what a simple Rails application looks like.
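Same drill as before, just pointed at the Rails process and filtered down to the reified Ruby classes (use whatever pid jps reports for your Rails instance):
➔ jmap -histo 53120 | grep " ruby\."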
This time I've opted to grep out just the "ruby." items in the histogram, and the results are pretty impressive! We can see the baffling fact that there's 970 instance of Gem::Version, using at least 23280 bytes of memory. We can see the even more depressing fact that there's 11685 live instances of TZInfo::TimezoneTransitionInfo, using at least 280440 bytes.
Now that we're getting useful data, let's look at the first of our tools in more detail: jmap and jhat.
jmap and jhat
As you might guess, I do a lot of profiling in the process of developing JRuby. I've used probably a dozen different tools at different times. But the first tool I always reach for is the jmap/jhat combination.
You've seen the simple case of using jmap above, generating a histogram of the live heap. Let's take a look at an offline heap dump.
That's how easy it is! The binary dump in heap.bin is supported by several tools: jhat (obviously), VisualVM, the Eclipse Memory Analysis Tool, and others. It's not officially a "standard" format, but it hasn't changed in a long time. Let's have a look at jhat options.
Generally you can just point jhat at a heap dump and away it goes. Occasionally if the heap is large, you may need to use the -J option to increase the maximum heap size of the JVM jhat runs in. Since we're running a Rails app, we'll bump the heap up a little bit.
"Server is ready"? Damn you Java people! Does everything have to be a server with you?
In this case, it's actually an incredibly useful tool. jhat starts up a small web application on port 7000 that allows you to click through the dump file. Let's see what that looks like.

Here's the front page of the tool. We see a listing of all JVM classes in the system. If you scroll to the bottom, there's a few more general functions.

Let's go with what we know and view the heap histogram again.

Here we can see that there's lots of objects taking up memory, and they're a mix of JVM-native types, JRuby implementation classes, and actual Ruby classes. In fact, here we can see our friend TZInfo::TimezoneTransitionInfo again. Let's click through.

Pretty mundane stuff so far; basically just information about the class itself. But you see at the bottom of this screenshot that we can go from here to viewing all instances of TimezoneTransitionInfo. Let's try that.

Ahh, that's more like it! Now we can see that there's a heck of a lot of these things floating around. Let's investigate a bit more and click through the first instance.

Now this is some cool stuff!
We can see that the JVM class generated for TimezoneTransitionInfo has three fields: metaClass, which points at the Ruby Class object; varTable, which is an array of Object references used for instance variables and other "internal" variables; and a flags field containing runtime flags for the object, like whether it's frozen, tainted, and so on. We can see that this object has no special flags set, and we can dig deeper into those fields if we like. We'll skip that today.
Moving further down, we see a few more amazing links. First, there's a list of all references to this object. Ahh, now we can start to investigate why they're staying in memory, even though we're not using them. We can even have jhat show us the full chains of references keeping these objects alive; a series of objects leading all the way back to one "rooted" by a thread or by global JVM state. And we can explore the other direction as well, walking all objects reachable from this one.
This is only a small part of what you can do with jmap and jhat, and they're so simple to use it feels almost criminal. But what if we want to inspect an application while it's running? Dumping heaps and analyzing them offline can tell you much of the story, but sometimes you just want to see the objects coming and going yourself. Let's move on to VisualVM.
VisualVM
VisualVM spawned out of the NetBeans profiling tools. One of the biggest complaints about the JVMs of old were that all the built-in tooling seemed to be designed for JVM engineers alone. Because Sun had the foresight to build and own their own IDE and related modules, it eventually became a natural fit to pull out the profiling tools for use by everyone. And so VisualVM was born.
On most systems with Java 6 installed, you should have a "jvisualvm" command. Let's run it now.

When you start up VisualVM, you're presented with a list of running JVMs, similar to using the 'jps' command. You can also connect to remote machines, browse offline heap and core dump files, and look through memory and CPU profiling snapshots from previous runs. Today, we'll just open up our running Rails app and see what we can see.

VisualVM connects to the running process and brings up a basic information pane with process information, JVM information, and so on. We're interested in monitoring heap usage, so let's move to the "Monitor" tab.

Already we're getting some useful information. This view shows CPU usage (currently zero, since it's an idle Rails app), Heap usage over time, and the number of JVM classes and threads that are active. We can trigger a full GC, if we'd like to tidy things up before we start poking around. But most importantly, we can do the jmap/jhat dance in one step, by clicking the Heap Dump button. Tantalizing, isn't it?

Initially, we see a basic summary of the heap: total size, number of classes and GC roots, and so on. We're looking for our friend TimezoneTransitionInfo, so let's look for it in the "Classes" pane.

Ahh, there it is, just a little ways down the list. The counts are as we expect, so let's double-click and dig a bit deeper.

Here we have a lot of the same information about object instances that we did with jhat, but presented in a much richer format. Almost everything is active; you can jump around the heap and do analysis that would take a lot of manual work very easily. Let's try another tool: the Retained Size calculator.
Because our JVM tools see all objects equally, the reported size for a Ruby object on the heap is only part of the story. There's also the variable table, the object's instance variables, and objects they reference to consider. Let's jump to a different object now, Gem::Version.
We don't want to have to scroll through the list of classes to find ruby.Gem.Version, so let's make use of the Object Query Language console. With the OQL console, you can write SQL-like queries to retrieve listings of objects in the heap. We'll search for all instances of ruby.Gem.Version.

The query runs and we get a listing of Gem::Version objects. Let's dig deeper and see how much retained memory each Version object is keeping alive.

Clicking on the "Compute Retained Sizes" link in the "Instances" pane prompts us with this dialog. We're tough...we can take it.

Reticulating splines...

So it looks like each of the Version objects take from 125 to 190 bytes for a total of 19400 bytes, most of which is from the variable table. What's in there?

Ahh...looks like there's a String and an Array. And of course we can poke around the heap ad infinatum, into and out of "native" JRuby and JVM classes, and truly get a complete picture of what our running applications look like. Now you're playing with power.
Your Turn
This is obviously only the tip of the iceberg. Tools like Eclipse Memory Analysis Tool include features for detecting leaks; VisualVM and NetBeans both allow you to turn on allocation tracing, to show where in your code all those objects are being created. There's tools for monitoring live GC behavior, and many of these tools even allow you to dig into a running heap and modify live objects. If you can dream it, there's a tool that can do it. And you get all that for free by using JRuby.
If you'd like to play with this, it all works with JRuby 1.5.1 but you won't get the nice JVM classes for Ruby classes. For that, you can pull and build JRuby master, download a 1.6.0.dev snapshot, or just wait for JRuby 1.6. And if you do play with these or other tools, I hope you'll let us know and blog about your experience!
In the future, I'll try to show some of the other tools plus some of the CPU profiling capabilities they bring to the table. For now, rest assured that if you're using JRuby, you really do have the best tools available to you.
Because JRuby runs on the JVM, we benefit from the dozens of tools that have been written for the JVM. Among these tools are numerous memory inspection, profiling, and reporting tools, some built into the JDK itself. Want a heap dump? Check out the jmap (Java memory map) and jhat (Java heap analysis tool) shipped with Hotspot-based JVMs (Sun, OpenJDK). Looking for a bit more? There's the Memory Analysis Tool based on Eclipse, the YourKit memory and CPU profiling app, VisualVM, now also shipped with Hotspot JVMs...and many more. There are literally dozens of these tools, and they provide just about everything you can imagine for investigating memory.
In this post, I'll show how you can use two of these tools: VisualVM, a simple, graphical tool for exploring a running JVM; and the jmap/jhat combination, which allows you to dump the memory heap to disk for inspection offline.
Getting JRuby Prepared
All these tools work with any version of JRuby, but as part of JRuby 1.6 development I've been adding some enhancements. Specifically, I've made some modifications that allow Ruby objects to show up side-by-side with Java objects in memory profiles. A little explanation is in order.
In JRuby, all the core classes are represented by "native" Java classes. Object is represented by org.jruby.RubyObject, String is org.jruby.RubyString, and so on. Normally, if you extend one of the core classes, we don't actually create a new "native" class to represent it; instead, all user-created classes that extend Object simply show up as RubyObject in memory. This is still incredibly useful; you can look into RubyObject and see the metaClass field, which indicates the actual Ruby type.
Let's see what that looks like, so we know where we're starting from. We'll run a simple script that creates a custom class, instantiates and saves 10000 instances of it, and then sleeps.
~/projects/jruby ➔ cat foo_heap_example.rb
class Foo
end
ary = []
10000.times { ary << Foo.new }
puts "ready for analysis!"
sleep
~/projects/jruby ➔ jruby foo_heap_example.rb
ready for analysis!
So we have our test subject ready to go. To use the jmap tool, we need the pid of this process. Of course we can use the usual shell tricks to get it, but the JDK comes with a nice tool for finding all JVM pids active on the system: jps
~/projects/jruby ➔ jps -l
52862 sun.tools.jps.Jps
52857 org/jruby/Main
48716 com.sun.enterprise.glassfish.bootstrap.ASMain
From this, you can see I have three JVMs running on my system right now: jps itself; our JRuby instance; and a GlassFish server I used for testing earlier today. We're interested in the JRuby instance, pid 52857. Let's see what jmap can do with that.
~/projects/jruby ➔ jmap
Usage:
jmap [option] <pid>
(to connect to running process)
jmap [option] <executable <core>
(to connect to a core file)
jmap [option] [server_id@]<remote server IP or hostname>
(to connect to remote debug server)
where <option> is one of:
<none> to print same info as Solaris pmap
-heap to print java heap summary
-histo[:live] to print histogram of java object heap; if the "live"
suboption is specified, only count live objects
-permstat to print permanent generation statistics
-finalizerinfo to print information on objects awaiting finalization
-dump:<dump-options> to dump java heap in hprof binary format
dump-options:
live dump only live objects; if not specified,
all objects in the heap are dumped.
format=b binary format
file=<file> dump heap to <file>
Example: jmap -dump:live,format=b,file=heap.bin <pid>
-F force. Use with -dump:<dump-options> <pid> or -histo
to force a heap dump or histogram when <pid> does not
respond. The "live" suboption is not supported
in this mode.
-h | -help to print this help message
-J<flag> to pass <flag> directly to the runtime system
The simplest option here is -histo, to print out a histogram of the objects on the heap. Let's run that against our JRuby instance.
~/projects/jruby ➔ jmap -histo:live 52857
num #instances #bytes class name
----------------------------------------------
1: 22677 3192816 <constMethodKlass>
2: 22677 1816952 <methodKlass>
3: 35089 1492992 <symbolKlass>
4: 2860 1389352 <instanceKlassKlass>
5: 2860 1193536 <constantPoolKlass>
6: 2798 739264 <constantPoolCacheKlass>
7: 5861 465408 [B
8: 5399 298120 [C
9: 3042 292032 java.lang.Class
10: 4037 261712 [S
11: 10002 240048 org.jruby.RubyObject
12: 3994 179928 [[I
13: 5474 131376 java.lang.String
14: 1661 95912 [I
...
The resulting output is a listing of literally every object in the system...not just Ruby objects even! The value of this should be apparent; not only can you start to investigate the memory overhead of code you've written, you'll also be able to investigate the memory overhead of every library and every piece of code running in the same process, right down to byte arrays (the "[B" above) and "native" Java strings ("java.lang.String" above). And so far we haven't had to do anything special to JRuby. Nice, eh?
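If the bracketed class names look cryptic, they're just the JVM's standard internal type descriptors for arrays, nothing JRuby-specific:
[B = byte[]
[C = char[] (the backing storage for java.lang.String)
[S = short[]
[I = int[]
[[I = int[][], an array of int arrays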
So, back to the matter at hand: the Foo class from our example. Where is it?
Well, the answer is that it's right there; 10000 of those 10002 org.jruby.RubyObject instances are our Foo objects; the other two are probably objects constructed for JRuby runtime purposes. But obviously, there's nothing in this output that tells us how to find our Foo instances. This is what I'm remedying in JRuby 1.6.
On JRuby master, there's now a flag you can pass that will stand up a JVM class for every user-created Ruby class. Among the many benefits of doing this, we also get a more useful profile. Let's see how to use the flag (which will either be default or very easy to access by the time we release JRuby 1.6).
~/projects/jruby ➔ jruby -J-Djruby.reify.classes=true foo_heap_example.rb
ready for analysis!
If we run jmap against this new instance, we see a more interesting result.
num #instances #bytes class name
----------------------------------------------
1: 22677 3192816 <constMethodKlass>
2: 22677 1816952 <methodKlass>
3: 35089 1492992 <symbolKlass>
4: 2860 1389352 <instanceKlassKlass>
5: 2860 1193536 <constantPoolKlass>
6: 2798 739264 <constantPoolCacheKlass>
7: 5863 465456 [B
8: 5401 298208 [C
9: 3042 292032 java.lang.Class
10: 4037 261712 [S
11: 10000 240000 ruby.Foo
12: 3994 179928 [[I
13: 5476 131424 java.lang.String
14: 1661 95912 [I
A-ha! There's our Foo instances! The "reify classes" option generates a JVM class of the same name as the Ruby class, prefixed by "ruby." to separate it from other JVM classes. Now we can start to see the real power of the tools, and we're just at the beginning. Let's see what a simple Rails application looks like.
~/projects/jruby ➔ jmap -histo:live 52926 | grep " ruby."
29: 11685 280440 ruby.TZInfo.TimezoneTransitionInfo
97: 970 23280 ruby.Gem.Version
98: 914 21936 ruby.Gem.Requirement
122: 592 14208 ruby.TZInfo.TimezoneOffsetInfo
138: 382 9168 ruby.Gem.Dependency
159: 265 6360 ruby.Gem.Specification
201: 142 3408 ruby.ActiveSupport.TimeZone
205: 118 2832 ruby.TZInfo.DataTimezoneInfo
206: 118 2832 ruby.TZInfo.DataTimezone
273: 41 984 ruby.Gem.Platform
383: 14 336 ruby.Mime.Type
403: 13 312 ruby.Set
467: 8 192 ruby.ActionController.MiddlewareStack.Middleware
476: 8 192 ruby.ActionView.Template
487: 7 168 ruby.ActionController.Routing.DividerSegment
508: 6 144 ruby.TZInfo.LinkedTimezoneInfo
523: 6 144 ruby.TZInfo.LinkedTimezone
810: 4 96 ruby.ActionController.Routing.DynamicSegment
2291: 2 48 ruby.ActionController.Routing.Route
2292: 2 48 ruby.I18n.Config
2293: 2 48 ruby.ActiveSupport.Deprecation.DeprecatedConstantProxy
2298: 2 48 ruby.ActionController.Routing.ControllerSegment
...
This time I've opted to grep out just the "ruby." items in the histogram, and the results are pretty impressive! We can see the baffling fact that there are 970 instances of Gem::Version, using at least 23280 bytes of memory. We can see the even more depressing fact that there are 11685 live instances of TZInfo::TimezoneTransitionInfo, using at least 280440 bytes.
Now that we're getting useful data, let's look at the first of our tools in more detail: jmap and jhat.
jmap and jhat
As you might guess, I do a lot of profiling in the process of developing JRuby. I've used probably a dozen different tools at different times. But the first tool I always reach for is the jmap/jhat combination.
You've seen the simple case of using jmap above, generating a histogram of the live heap. Let's take a look at an offline heap dump.
~/projects/jruby ➔ jmap -dump:live,format=b,file=heap.bin 52926
Dumping heap to /Users/headius/projects/jruby/heap.bin ...
Heap dump file created
That's how easy it is! The binary dump in heap.bin is supported by several tools: jhat (obviously), VisualVM, the Eclipse Memory Analysis Tool, and others. It's not officially a "standard" format, but it hasn't changed in a long time. Let's have a look at jhat options.
~/projects/jruby ➔ jhat
ERROR: No arguments supplied
Usage: jhat [-stack <bool>] [-refs <bool>] [-port <port>] [-baseline <file>] [-debug <int>] [-version] [-h|-help] <file>
-J<flag> Pass <flag> directly to the runtime system. For
example, -J-mx512m to use a maximum heap size of 512MB
-stack false: Turn off tracking object allocation call stack.
-refs false: Turn off tracking of references to objects
-port <port>: Set the port for the HTTP server. Defaults to 7000
-exclude <file>: Specify a file that lists data members that should
be excluded from the reachableFrom query.
-baseline <file>: Specify a baseline object dump. Objects in
both heap dumps with the same ID and same class will
be marked as not being "new".
-debug <int>: Set debug level.
0: No debug output
1: Debug hprof file parsing
2: Debug hprof file parsing, no server
-version Report version number
-h|-help Print this help and exit
<file> The file to read
For a dump file that contains multiple heap dumps,
you may specify which dump in the file
by appending "#<number>" to the file name, i.e. "foo.hprof#3".
All boolean options default to "true"
Generally you can just point jhat at a heap dump and away it goes. Occasionally if the heap is large, you may need to use the -J option to increase the maximum heap size of the JVM jhat runs in. Since we're running a Rails app, we'll bump the heap up a little bit.
~/projects/jruby ➔ jhat -J-Xmx200M heap.bin
Reading from heap.bin...
Dump file created Fri Jul 09 02:07:46 CDT 2010
Snapshot read, resolving...
Resolving 604115 objects...
[much verbose logging elided for brevity]
Chasing references, expect 120 dots........................................................................................................................
Eliminating duplicate references........................................................................................................................
Snapshot resolved.
Started HTTP server on port 7000
Server is ready.
"Server is ready"? Damn you Java people! Does everything have to be a server with you?
In this case, it's actually an incredibly useful tool. jhat starts up a small web application on port 7000 that allows you to click through the dump file. Let's see what that looks like.
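(If you're following along at home, just point a browser at http://localhost:7000 once you see "Server is ready"; the -port option from the usage output above lets you move it if something else is already squatting on 7000.)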

Here's the front page of the tool. We see a listing of all JVM classes in the system. If you scroll to the bottom, there's a few more general functions.

Let's go with what we know and view the heap histogram again.

Here we can see that there's lots of objects taking up memory, and they're a mix of JVM-native types, JRuby implementation classes, and actual Ruby classes. In fact, here we can see our friend TZInfo::TimezoneTransitionInfo again. Let's click through.

Pretty mundane stuff so far; basically just information about the class itself. But you see at the bottom of this screenshot that we can go from here to viewing all instances of TimezoneTransitionInfo. Let's try that.

Ahh, that's more like it! Now we can see that there's a heck of a lot of these things floating around. Let's investigate a bit more and click through the first instance.

Now this is some cool stuff!
We can see that the JVM class generated for TimezoneTransitionInfo has three fields: metaClass, which points at the Ruby Class object; varTable, which is an array of Object references used for instance variables and other "internal" variables; and a flags field containing runtime flags for the object, like whether it's frozen, tainted, and so on. We can see that this object has no special flags set, and we can dig deeper into those fields if we like. We'll skip that today.
Moving further down, we see a few more amazing links. First, there's a list of all references to this object. Ahh, now we can start to investigate why they're staying in memory, even though we're not using them. We can even have jhat show us the full chains of references keeping these objects alive; a series of objects leading all the way back to one "rooted" by a thread or by global JVM state. And we can explore the other direction as well, walking all objects reachable from this one.
This is only a small part of what you can do with jmap and jhat, and they're so simple to use it feels almost criminal. But what if we want to inspect an application while it's running? Dumping heaps and analyzing them offline can tell you much of the story, but sometimes you just want to see the objects coming and going yourself. Let's move on to VisualVM.
VisualVM
VisualVM spawned out of the NetBeans profiling tools. One of the biggest complaints about the JVMs of old was that all the built-in tooling seemed to be designed for JVM engineers alone. Because Sun had the foresight to build and own its own IDE and related modules, it eventually became a natural fit to pull the profiling tools out for use by everyone. And so VisualVM was born.
On most systems with Java 6 installed, you should have a "jvisualvm" command. Let's run it now.

When you start up VisualVM, you're presented with a list of running JVMs, similar to using the 'jps' command. You can also connect to remote machines, browse offline heap and core dump files, and look through memory and CPU profiling snapshots from previous runs. Today, we'll just open up our running Rails app and see what we can see.

VisualVM connects to the running process and brings up a basic information pane with process information, JVM information, and so on. We're interested in monitoring heap usage, so let's move to the "Monitor" tab.

Already we're getting some useful information. This view shows CPU usage (currently zero, since it's an idle Rails app), Heap usage over time, and the number of JVM classes and threads that are active. We can trigger a full GC, if we'd like to tidy things up before we start poking around. But most importantly, we can do the jmap/jhat dance in one step, by clicking the Heap Dump button. Tantalizing, isn't it?

Initially, we see a basic summary of the heap: total size, number of classes and GC roots, and so on. We're looking for our friend TimezoneTransitionInfo, so let's look for it in the "Classes" pane.

Ahh, there it is, just a little ways down the list. The counts are as we expect, so let's double-click and dig a bit deeper.

Here we have a lot of the same information about object instances that we got with jhat, but presented in a much richer format. Almost every element is a live link, so you can jump around the heap and easily do analysis that would otherwise take a lot of manual work. Let's try another tool: the Retained Size calculator.
Because our JVM tools see all objects equally, the reported size for a Ruby object on the heap is only part of the story. There's also the variable table, the object's instance variables, and objects they reference to consider. Let's jump to a different object now, Gem::Version.
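(A quick sanity check from the earlier histogram shows why the shallow size alone doesn't tell us much: 23280 bytes across 970 Gem::Version instances works out to 24 bytes apiece, exactly the same shallow size as our ruby.Foo objects, 240000 bytes for 10000 instances. That 24 bytes is essentially just the object header plus the three fields we saw in jhat; the varTable array and everything it references aren't counted, which is exactly what a retained size calculation adds back in.)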
We don't want to have to scroll through the list of classes to find ruby.Gem.Version, so let's make use of the Object Query Language console. With the OQL console, you can write SQL-like queries to retrieve listings of objects in the heap. We'll search for all instances of ruby.Gem.Version.
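For reference, the query itself is about as simple as OQL gets; something along these lines (using the reified class name we saw in the jmap histogram) will do it:
select v from ruby.Gem.Version v
And just like SQL, you can tack a where clause onto the end if you only want instances matching some condition on their fields.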

The query runs and we get a listing of Gem::Version objects. Let's dig deeper and see how much retained memory each Version object is keeping alive.

Clicking on the "Compute Retained Sizes" link in the "Instances" pane prompts us with this dialog. We're tough...we can take it.

Reticulating splines...

So it looks like each of the Version objects takes from 125 to 190 bytes, for a total of 19400 bytes, most of which is from the variable table. What's in there?

Ahh...looks like there's a String and an Array. And of course we can poke around the heap ad infinitum, into and out of "native" JRuby and JVM classes, and truly get a complete picture of what our running applications look like. Now you're playing with power.
Your Turn
This is obviously only the tip of the iceberg. Tools like the Eclipse Memory Analysis Tool include features for detecting leaks; VisualVM and NetBeans both allow you to turn on allocation tracing, to show where in your code all those objects are being created. There are tools for monitoring live GC behavior, and many of these tools even allow you to dig into a running heap and modify live objects. If you can dream it, there's a tool that can do it. And you get all that for free by using JRuby.
If you'd like to play with this, it all works with JRuby 1.5.1 but you won't get the nice JVM classes for Ruby classes. For that, you can pull and build JRuby master, download a 1.6.0.dev snapshot, or just wait for JRuby 1.6. And if you do play with these or other tools, I hope you'll let us know and blog about your experience!
In the future, I'll try to show some of the other tools plus some of the CPU profiling capabilities they bring to the table. For now, rest assured that if you're using JRuby, you really do have the best tools available to you.