Wednesday, October 7, 2015

CDO 4.4.1 Is Available: New User Interface and Documentation

The new CDO Explorer with its new user interface was already released with Mars in June 2015, and I blogged about it at the time: Collaborative Modeling with Papyrus and CDO (Reloaded)

Now with CDO 4.4.1 the documentation has been augmented with a beautiful User's Guide and an extensive Operator's Guide.

Please browse the release notes to see what else has changed.

Again, special thanks go to CEA for generously funding the following:
  • Branching and interactive merging
  • Support for offline checkouts
  • Interactive conflict resolution
  • Documentation
Download CDO 4.4.1 and enjoy the best CDO ever!

Thursday, July 23, 2015

A Good Thread Pool

The java.util.concurrent package comes with a whole bunch of classes that can be extremely useful in concurrent Java applications. This article is about the ThreadPoolExecutor class, how it behaved unexpectedly for me, and what I did to make it do what I want.

As the name suggests, a thread pool is an Executor in Java. It's even an ExecutorService, but that's irrelevant for understanding the fundamental behavior. The only important operational method of a thread pool is void execute(Runnable task). You pass in your task and the pool will eventually execute it on one of its worker threads. A thread pool is made up of the following components:
  • An internal worker pool that holds the worker threads
  • A work queue that holds tasks waiting for a free worker
  • A thread factory that creates new worker threads
  • A rejection handler that is asked to deal with tasks the pool does not accept

When you create a thread pool you must pass in a BlockingQueue instance that will become the work queue of the thread pool. You can optionally pass in a thread factory and a rejection handler. You cannot control the implementation class of the internal worker pool, but you can influence its behavior with the following important parameters (a construction sketch follows the list):

  1. The corePoolSize defines something like a minimum number of worker threads to keep in the internal worker pool. The reason it's not called minPoolSize is probably that, directly after the creation of the thread pool, the internal worker pool starts with zero worker threads. Initial workers are then created as needed, but they're only ever removed from the worker pool if there are more of them than corePoolSize.
  2. The maxPoolSize defines a strict upper bound for the number of worker threads in the internal worker pool.
  3. The keepAliveTime defines the time that an idle worker thread may stay in the internal worker pool if there are more than corePoolSize workers in the pool.
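
To make these knobs concrete, here is a small sketch that wires all of them up explicitly; the class name and the concrete values are just placeholders I made up for illustration:

  import java.util.concurrent.ArrayBlockingQueue;
  import java.util.concurrent.Executors;
  import java.util.concurrent.ThreadPoolExecutor;
  import java.util.concurrent.TimeUnit;

  public class PoolSetupSketch {
    public static void main(String[] args) {
      ThreadPoolExecutor pool = new ThreadPoolExecutor(
          2,                                      // corePoolSize: workers that are kept around
          4,                                      // maxPoolSize: hard upper bound on workers
          30L, TimeUnit.SECONDS,                  // keepAliveTime for surplus workers
          new ArrayBlockingQueue<Runnable>(100),  // the work queue
          Executors.defaultThreadFactory(),       // optional: the thread factory
          new ThreadPoolExecutor.AbortPolicy());  // optional: the rejection handler

      pool.shutdown();
    }
  }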

The Javadoc of the ThreadPoolExecutor class recommends using the Executors.newCachedThreadPool() factory method to create a thread pool. The result is an unbounded thread pool with automatic thread reclamation. A look at the code of the factory method reveals that corePoolSize=0 and maxPoolSize=Integer.MAX_VALUE. The work queue is a SynchronousQueue, which has no internal capacity; it basically functions as a direct pipe to the next idle or newly created worker thread.
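
In other words, the factory method boils down to roughly this (paraphrased from the JDK source; the 60-second keep-alive is the value used there):

  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.SynchronousQueue;
  import java.util.concurrent.ThreadPoolExecutor;
  import java.util.concurrent.TimeUnit;

  public class CachedPoolSketch {
    // Roughly what Executors.newCachedThreadPool() gives you:
    // corePoolSize=0, maxPoolSize=Integer.MAX_VALUE, SynchronousQueue as work queue.
    public static ExecutorService create() {
      return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
          60L, TimeUnit.SECONDS,
          new SynchronousQueue<Runnable>());
    }
  }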

When you hammer this thread pool with lots of tasks the work queue will never grow; tasks will never be rejected because the worker pool is unbounded. Your JVM will soon become unresponsive because the pool will create thousands of worker threads!

What I really wanted is a thread pool with, let's say, maxPoolSize=100 and a work queue that temporarily keeps all the tasks that are scheduled while all of the 100 threads are busy. So I instantiated a ThreadPoolExecutor directly (without the recommended factory method), passed in corePoolSize=10, maxPoolSize=100, and a LinkedBlockingQueue to be used as the work queue. And here comes the big surprise: This thread pool never creates more than corePoolSize worker threads! Instead the work queue will grow and grow and grow. The tasks in it will always compete for the 10 core workers. Why is that?
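
In code, my attempt looked roughly like this (the keep-alive value and the task body are just placeholders):

  import java.util.concurrent.LinkedBlockingQueue;
  import java.util.concurrent.ThreadPoolExecutor;
  import java.util.concurrent.TimeUnit;

  public class NaivePoolSketch {
    public static void main(String[] args) {
      // corePoolSize=10, maxPoolSize=100, unbounded work queue.
      ThreadPoolExecutor pool = new ThreadPoolExecutor(10, 100, 60L, TimeUnit.SECONDS,
          new LinkedBlockingQueue<Runnable>());

      for (int i = 0; i < 1000; i++) {
        pool.execute(new Runnable() {
          public void run() {
            try {
              Thread.sleep(100); // simulate some work
            } catch (InterruptedException ex) {
              Thread.currentThread().interrupt();
            }
          }
        });
      }

      // Never more than the 10 core workers, no matter how many tasks are queued.
      System.out.println("Pool size: " + pool.getPoolSize());
      pool.shutdown();
    }
  }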

To understand this you need to know how the execute() method works. Of course it's all Javadoc'ed, but that doesn't mean it's expectation-compliant (well, I know that expectations can be subjective). In essence, the execute() method does the following:

There are only three different outcomes: the task can be enqueued in the work queue, a new worker thread can be created, or the task can be rejected. Three conditions are checked to determine the outcome at a specific point in time. The first condition is only relevant in the warm-up phase of the pool, but then it becomes interesting:

enqueue is always preferred over newWorker!
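
Simplified, and ignoring the re-check that the real implementation performs after enqueueing, the decision logic looks roughly like this (the class and helper names are made up for illustration):

  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;
  import java.util.concurrent.RejectedExecutionException;

  public class ExecuteSketch {
    private final int corePoolSize = 10;
    private final int maxPoolSize = 100;
    private final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>();
    private int workerCount;

    public void execute(Runnable task) {
      if (workerCount < corePoolSize) {
        newWorker(task);               // warm-up phase: grow to corePoolSize
      } else if (workQueue.offer(task)) {
        // Enqueued. This branch wins whenever the queue accepts the task,
        // which is why enqueue is always preferred over newWorker.
      } else if (workerCount < maxPoolSize) {
        newWorker(task);               // only reached if offer() returned false
      } else {
        throw new RejectedExecutionException("Pool and queue are both full");
      }
    }

    private void newWorker(Runnable task) {
      workerCount++;                   // in the real pool: start a worker thread for the task
    }
  }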

That means that, with an unbounded work queue, no more than corePoolSize workers will ever be created; maxPoolSize becomes completely irrelevant. Now we have seen one pool configuration that only ever creates new workers (the default) and one that only ever enqueues tasks. Between these two evils is probably a thread pool with both a bounded worker pool and a bounded work queue, but obviously such a thread pool will reject tasks when hammered enough.

That's all not what I wanted, but wait:

I control the work queue implementation!

Peeking again at the code of the execute() method shows that the only interaction between the thread pool and the work queue here is the call workQueue.offer(task), and per contract this method returns whether it accepted the offer or not. So, the simple solution to my problem is a BlockingQueue implementation whose offer() method is overridden to accept the offered task only if the worker pool already contains maxPoolSize threads, i.e., only if the pool cannot grow anymore.

Subclassing LinkedBlockingQueue would do that trick, but there's a small problem remaining: the three conditions (see above) are checked in the execute() method of the thread pool without any synchronization. That means that, if my work queue does not accept a task because there are still fewer than maxPoolSize workers allocated, the third condition is not necessarily still true a nanosecond later. The task would then be completely rejected from the pool rather than be enqueued. The solution to this problem is a custom rejection handler that takes the rejected task and puts it back at the beginning of the work queue. And now it becomes clear why subclassing LinkedBlockingDeque is the better alternative: it provides the needed addFirst(Runnable task) method.
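
A minimal sketch of those two pieces might look like the following; the class names and the setPool() hook are made up for illustration, and the real implementation (linked below) handles more details:

  import java.util.concurrent.LinkedBlockingDeque;
  import java.util.concurrent.RejectedExecutionHandler;
  import java.util.concurrent.ThreadPoolExecutor;

  public class WorkQueue extends LinkedBlockingDeque<Runnable> {
    private static final long serialVersionUID = 1L;

    private ThreadPoolExecutor pool; // set right after the pool has been created

    public void setPool(ThreadPoolExecutor pool) {
      this.pool = pool;
    }

    @Override
    public boolean offer(Runnable task) {
      // Reject the offer while the pool can still grow; that makes execute()
      // try to create a new worker instead of enqueueing the task.
      if (pool.getPoolSize() < pool.getMaximumPoolSize()) {
        return false;
      }

      return super.offer(task);
    }

    // Rejection handler that puts the task back at the head of the queue
    // if the "create a new worker" attempt failed in the meantime.
    public static class PutBack implements RejectedExecutionHandler {
      public void rejectedExecution(Runnable task, ThreadPoolExecutor executor) {
        ((WorkQueue)executor.getQueue()).addFirst(task);
      }
    }
  }

Wiring it up is then a matter of creating the WorkQueue, passing it to the ThreadPoolExecutor constructor together with a PutBack handler, and handing the finished pool back to the queue via setPool().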

If you try to implement these ideas you'll likely discover a few technical complications, such as the LinkedBlockingDeque class not being available in Java 1.5. If you're interested in my concrete solution please have a look at the source code of my good thread pool. Enjoy...

Sunday, June 21, 2015

Oomph Workshop: Eclipse the Way You Want It

Our Oomph Workshop at EclipseCon France is next Wednesday morning, and Ed and I hope to see you there. We'll not only show you how to use Oomph's Eclipse Installer to provision ready-to-use IDEs and workspaces, but we'll also teach you how to create setups for your own projects.

By submitting a functional setup for an Eclipse project, every EclipseCon participant can take part in our challenge.

All you need is an Eclipse IDE with Oomph and our Author's Guide.

The challenge, of course, becomes easier if you attend our workshop. If you plan to do so, please download the following zip file and unzip it to an empty folder on your local disk:

The zip file is giant (2.5 GB) because we've designed it to allow you to exercise the tutorial without network access, i.e., it includes some mirrors of p2 and Git repositories, as well as preconfigured installer executables for all platforms. To bootstrap the tutorial IDE follow these simple steps:

  1. Go to the "installers" folder and launch the installer for your platform. If you are on Linux, please "chmod +x" your installer binary first!

  2. If you run the installer for the very first time it might come up in simple mode. In that case please switch it to advanced mode:

  3. In advanced mode pick the "Eclipse IDE for Eclipse Committers" product, select the "Mars" version, and click Next:

  4. On the second installer page double-click the "Oomph Tutorial" project and verify that it's been added to the table at the bottom:

  5. Confirm all following installer pages with Next or Finish. The tutorial IDE will be installed and started:

Now you're ready to participate. We're looking forward to meeting you in Toulouse!

Tuesday, April 28, 2015

Collaborative Modeling with Papyrus and CDO (Reloaded)

Since the beginning of this year I've been working on fundamental improvements to the user interface of CDO and its integration with Papyrus. In particular CEA has generously funded the following:
  • Branching and interactive merging
  • Support for offline checkouts
  • Interactive conflict resolution
Most of the new functionality has been implemented directly in CDO and is available for other modeling tools, too. Please enjoy a brief tour of what's in the pipe for the Mars release:

The following screencast shows how Papyrus will integrate with this new CDO user interface:

I hope you like the new concepts and workflows. Feedback is welcome, of course. And I'd like to thank CEA, Kenn Hussey and Christian Damus for their help to make this happen!

Friday, December 12, 2014

Oomph 1.0.0 is Available

I'm very happy and a little proud to announce the very first release of Eclipse Oomph. The new installers are now available for your platform:

You can also install Oomph into an existing IDE via the update site or the site archive. Our help center is still a work in progress, but you may already find answers to your questions there. Our wiki may provide additional information.

This 1.0.0 release includes:

I'd like to thank our committers, especially my friend Ed Merks, our contributors and early users for their great contributions, valuable feedback, and concise bug reports. Working with you has been and will continue to be an absolutely pleasant and rewarding experience for me.

Tuesday, December 9, 2014

When You Change a Method Return Type...

... strange effects can result under certain circumstances! Recently some Oomph users reported mysterious NoSuchMethodErrors at runtime, and I spent quite some time hunting down the problem. What I found is kind of scary. Consider the following simple program:

  public class Util {
    public static void run() {
      // Run it...
    }
  }

  public class Main {
    public static void main(String[] args) {
      Util.run();
    }
  }

The bytecode of the compiled main() method is as simple as:

  invokestatic Util/run()V

Notice the uppercase "V" at the end of the run() method call. It indicates that the return type of the called run() method is "void" and is part of the bytecode of the caller! Now change the declaration of the called run() method to return a boolean value:

  public class Util {
    public static boolean run() {
      // Run it...
      return true;
    }
  }

  public class Main {
    public static void main(String[] args) {
      Util.run();
    }
  }

Recompile both classes and look at the bytecode of the main() method again:

  invokestatic Util/run()Z

Notice that the bytecode has changed even though the source code of the Main class has not changed the least bit. The old run() method with its void return type would no longer be considered a valid call target!
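
For reference, the "V" and "Z" above are two of the JVM's standard type descriptors, which you can inspect in any class file with the JDK's javap -c tool. A few common ones:

  V                    void
  Z                    boolean
  I                    int
  J                    long
  Ljava/lang/String;   java.lang.String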

Interesting, but when can this become a real problem?

Well, in our case the calling and the called methods are in different OSGi plugins; we use Maven/Tycho to build them, and our Oomph users use p2 to install or update them. The following steps turned out to be tragic:

  • I changed the return type of the called method from void to boolean.
  • Maven/Tycho has built both the calling and the called plugin.
    • The called plugin got a new version (build qualifier) because it was really changed.
    • The calling plugin did not get a different version because its source code wasn't changed.
  • A user updated his Oomph installation to the new build.
    • The called plugin was updated because a new version was found.
    • The calling plugin was not updated because there was no new version available. To be clear, there was a plugin with different content in the new build, but it had the same version as in the previous build.
As a result this user was faced with an evil NoSuchMethodError at runtime.

Now that I know why this happened I can easily fix the nasty problem by applying a fake change to the calling plugin's source code to cause Tycho to assign a new version number to it; one that is consistent with the bytecode of the called plugin.

The fact that this can happen so easily leaves me kind of scared. After all, I'll probably never ever try to change a method return type again.

Monday, November 3, 2014

Better Late Than Never

I wanted to remind you earlier about the Call for Papers for EclipseCon North America 2015, but EclipseCon Europe, which just ended, kept me too busy.

For San Francisco in March 2015 I hope that we can put together an interesting and fun program, too. And we need your help to make that possible. There are still two weeks left to submit your proposal.

If you submit now, your proposal has a chance to be among the early bird picks!