Wednesday, October 7, 2015

CDO 4.4.1 Is Available: New User Interface and Documentation

The new CDO Explorer with its new user interface was released with Mars in June 2015, and I've already blogged about it: Collaborative Modeling with Papyrus and CDO (Reloaded)

Now with CDO 4.4.1 the documentation has been augmented with a beautiful User's Guide and an extensive Operator's Guide.

Please browse the release notes to see what else has changed.

Again, special thanks go to CEA for generously funding the following:
  • Branching and interactive merging
  • Support for offline checkouts
  • Interactive conflict resolution
  • Documentation
Download CDO 4.4.1 and enjoy the best CDO ever!

Thursday, July 23, 2015

A Good Thread Pool

The java.util.concurrent package comes with a whole bunch of classes that can be extremely useful in concurrent Java applications. This article is about the ThreadPoolExecutor class, how it behaved unexpectedly for me, and what I did to make it do what I want.

As the name suggests, a thread pool is an Executor in Java. It's even an ExecutorService, but that's irrelevant for understanding the fundamental behavior. The only important operational method of a thread pool is void execute(Runnable task). You pass in your task, and the pool will eventually execute it on one of its worker threads. A thread pool is made up of the following components:


When you create a thread pool you must pass in a BlockingQueue instance that will become the work queue of the thread pool. You can optionally pass in a thread factory and a rejection handler. You cannot control the implementation class of the internal worker pool, but you can influence its behavior with the following important parameters:

  1. The corePoolSize defines a kind of minimum number of worker threads to keep in the internal worker pool. The reason it's not called minPoolSize is probably that, directly after the creation of the thread pool, the internal worker pool starts with zero worker threads. Initial workers are then created as needed, but they're only ever removed from the worker pool if there are more of them than corePoolSize.
  2. The maxPoolSize defines a strict upper bound for the number of worker threads in the internal worker pool.
  3. The keepAliveTime defines the time that an idle worker thread may stay in the internal worker pool if there are more than corePoolSize workers in the pool.

The Javadoc of the ThreadPoolExecutor class recommends using the Executors.newCachedThreadPool() factory method to create a thread pool. The result is an unbounded thread pool with automatic thread reclamation. A look at the code of the factory method reveals that corePoolSize=0 and maxPoolSize=Integer.MAX_VALUE. The work queue is a SynchronousQueue, which has no internal capacity; it basically functions as a direct pipe to the next idle or newly created worker thread.
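Per the Javadoc, Executors.newCachedThreadPool() is equivalent to constructing the pool directly with these parameters:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CachedPoolSketch {
    public static void main(String[] args) {
        // What Executors.newCachedThreadPool() creates under the hood:
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            0,                      // corePoolSize: no workers kept when idle
            Integer.MAX_VALUE,      // maxPoolSize: effectively unbounded
            60L, TimeUnit.SECONDS,  // keepAliveTime for idle workers
            new SynchronousQueue<Runnable>()); // no internal capacity

        System.out.println(pool.getCorePoolSize());    // 0
        System.out.println(pool.getMaximumPoolSize()); // 2147483647
        pool.shutdown();
    }
}
```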

When you hammer this thread pool with lots of tasks, the work queue will never grow; tasks will never be rejected because the worker pool is unbounded. Your JVM will soon become unresponsive because the pool will create thousands of worker threads!

What I really wanted is a thread pool with, let's say, maxPoolSize=100 and a work queue that temporarily keeps all the tasks that are scheduled while all of the 100 threads are busy. So I instantiated a ThreadPoolExecutor directly (without the recommended factory method), passed in corePoolSize=10, maxPoolSize=100, and a LinkedBlockingQueue to be used as the work queue. And here comes the big surprise: this thread pool never creates more than corePoolSize worker threads! Instead, the work queue will grow and grow and grow. The tasks in it will always compete for the 10 core workers. Why is that?
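The surprise is easy to reproduce. Here is a small sketch (I use corePoolSize=2 instead of 10 so the effect shows immediately) that submits 50 slow tasks and observes that the pool never grows past its core size, no matter what maxPoolSize says:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueGrowsDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 100,                 // corePoolSize=2, maxPoolSize=100
            60L, TimeUnit.SECONDS,
            new LinkedBlockingQueue<Runnable>()); // unbounded work queue

        // Submit many tasks that block long enough to keep all workers busy.
        for (int i = 0; i < 50; i++) {
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException e) { }
            });
        }

        Thread.sleep(100); // give the pool a moment to start its workers
        System.out.println(pool.getPoolSize());     // 2 - never more than core
        System.out.println(pool.getQueue().size()); // the rest pile up here
        pool.shutdownNow();
    }
}
```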

To understand this, you need to know how the execute() method works. Of course it's all Javadoc'ed, but that doesn't mean it's expectation-compliant (well, I know that expectations can be subjective). The following flow diagram illustrates what the execute() method does:


There are only three different outcomes: the task can be enqueued in the work queue, a new worker thread can be created, or the task can be rejected. Three conditions are checked to determine the outcome at a specific point in time. The first condition is only relevant in the warm-up phase of the pool, but then it becomes interesting:

enqueue is always preferred over newWorker!

That means that, with an unbounded work queue, no more than corePoolSize workers will ever be created; maxPoolSize becomes completely irrelevant. Now we have seen one pool configuration that only ever creates new workers (the default) and one that only ever enqueues tasks. Between these two evils is probably a thread pool with both a bounded worker pool and a bounded work queue, but obviously such a thread pool will reject tasks when hammered hard enough.
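For reference, the decision logic of execute() looks roughly like this in simplified pseudocode (the real OpenJDK source adds re-checks that handle concurrent pool shutdown):

```java
// Simplified pseudocode, not the actual OpenJDK implementation:
void execute(Runnable task) {
    if (workerCount < corePoolSize) {
        addWorker(task);              // warm-up: grow toward corePoolSize
    } else if (workQueue.offer(task)) {
        // enqueue is always preferred over creating a new worker
    } else if (workerCount < maxPoolSize) {
        addWorker(task);              // queue refused the task: try to grow
    } else {
        rejectionHandler.rejectedExecution(task, this);
    }
}
```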

That's all not what I wanted, but wait:

I control the work queue implementation!

Peeking again at the code of the execute() method shows that the only interaction between the thread pool and the work queue here is the call workQueue.offer(task), and per contract this method returns whether it accepted the offer or not. So, the simple solution to my problem is a BlockingQueue implementation with an offer() method overridden to refuse the offered task as long as the worker pool contains fewer than maxPoolSize threads, forcing the pool to create a new worker instead.

Subclassing LinkedBlockingQueue would do that trick, but there's a small problem remaining now: the three conditions (see above) are checked in the execute() method of the thread pool without any synchronization. That means that if my work queue does not accept a task because there are still fewer than maxPoolSize workers allocated, the third condition is not necessarily true a nanosecond later. The task would be rejected from the pool entirely rather than enqueued. The solution to this problem is a custom rejection handler that takes the rejected task and puts it back at the beginning of the work queue. And now it becomes clear why subclassing LinkedBlockingDeque is the better alternative: it provides the needed addFirst(Runnable task) method.
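Putting both pieces together, here is a minimal sketch of the technique, assuming Java 8+. The class name GoodThreadPool and the factory method are mine for illustration; the author's actual implementation (linked below) differs in the details:

```java
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: a pool that grows to maxPoolSize before tasks start queueing.
public class GoodThreadPool {
    public static ThreadPoolExecutor create(int corePoolSize, final int maxPoolSize) {
        // The work queue needs a reference to the pool, but the pool needs the
        // queue at construction time; a one-element array breaks the cycle.
        final ThreadPoolExecutor[] holder = new ThreadPoolExecutor[1];

        LinkedBlockingDeque<Runnable> workQueue = new LinkedBlockingDeque<Runnable>() {
            @Override
            public boolean offer(Runnable task) {
                // Refuse the task while the pool can still grow; a refusal
                // makes execute() create a new worker instead of enqueueing.
                if (holder[0].getPoolSize() < maxPoolSize) {
                    return false;
                }
                return super.offer(task); // all workers allocated: enqueue
            }
        };

        // If the pool hit maxPoolSize between our offer() refusal and its
        // final check, the task gets rejected; put it back at the queue head.
        RejectedExecutionHandler handler = (task, executor) ->
            ((LinkedBlockingDeque<Runnable>) executor.getQueue()).addFirst(task);

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            corePoolSize, maxPoolSize, 60L, TimeUnit.SECONDS, workQueue, handler);
        holder[0] = pool;
        return pool;
    }
}
```

With corePoolSize=10 and maxPoolSize=100 this pool grows all the way to 100 workers before tasks start queueing, and the rejection handler covers the unsynchronized race described above.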

If you try to implement these ideas you'll likely discover a few technical complications, such as the LinkedBlockingDeque class not being available in Java 1.5. If you're interested in my concrete solution please have a look at the source code of my good thread pool. Enjoy...

Sunday, June 21, 2015

Oomph Workshop: Eclipse the Way You Want It

Our Oomph Workshop at EclipseCon France is next Wednesday morning, and Ed and I hope to see you there. We'll not only show you how to use Oomph's Eclipse Installer to provision ready-to-use IDEs and workspaces; we'll also teach you how to create setups for your own projects:


By submitting a functional setup for an Eclipse project
every EclipseCon participant can



All you need is an Eclipse IDE with Oomph and our Author's Guide.

The challenge, of course, becomes easier if you attend our workshop. If you plan to do so, please download the following zip file and unzip it to an empty folder on your local disk:



The zip file is giant (2.5 GB) because we've designed it to allow you to exercise the tutorial without network access, i.e., it includes some mirrors of p2 and Git repositories, as well as preconfigured installer executables for all platforms. To bootstrap the tutorial IDE follow these simple steps:

  1. Go to the "installers" folder and launch the installer for your platform. If you are on Linux, please "chmod +x" your installer binary first!

  2. If you run the installer for the very first time, it might come up in simple mode. In this case, please switch it to advanced mode:

  3. In advanced mode, pick the "Eclipse IDE for Eclipse Committers" product, select the "Mars" version, and click Next:


      
  4. On the second installer page double-click the "Oomph Tutorial" project and verify that it's been added to the table at the bottom:



  5. Confirm all following installer pages with Next or Finish. The tutorial IDE will be installed and started:

Now you're ready to participate. We're looking forward to meeting you in Toulouse!

Tuesday, April 28, 2015

Collaborative Modeling with Papyrus and CDO (Reloaded)

Since the beginning of this year I've been working on fundamental improvements to the user interface of CDO and its integration with Papyrus. In particular CEA has generously funded the following:
  • Branching and interactive merging
  • Support for offline checkouts
  • Interactive conflict resolution
Most of the new functionality has been implemented directly in CDO and is available for other modeling tools, too. Please enjoy a brief tour of what's in the pipe for the Mars release:


The following screencast shows how Papyrus will integrate with this new CDO user interface:


I hope you like the new concepts and workflows. Feedback is welcome, of course. And I'd like to thank CEA, Kenn Hussey, and Christian Damus for their help in making this happen!