Each FJTaskRunner keeps FJTasks in a double-ended queue (DEQ). Double-ended queues support the stack-based operations push and pop, as well as the queue-based operations put and take. Normally, threads run their own tasks, but they may also steal tasks from each other's DEQs.
The two most important capabilities are:
Push task onto DEQ
If DEQ is not empty,
   Pop a task and run it.
Else if any other DEQ is not empty,
   Take ("steal") a task from it and run it.
Else if the entry queue for our group is not empty,
   Take a task from it and run it.
Else if current thread is otherwise idling,
   If all threads are idling,
      Wait for a task to be put on the group entry queue.
   Else
      Yield or sleep for a while, and then retry.
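The selection policy above can be sketched roughly as follows. This is an illustrative stand-in, not the actual FJTaskRunner code: deques are modeled as plain ArrayDeque instances, tasks as Runnables, and the idle/wait phases are omitted.

```java
import java.util.ArrayDeque;
import java.util.List;

class ScanLoopSketch {
    /**
     * One round of the selection policy: prefer our own DEQ (LIFO pop),
     * then steal from a victim's DEQ (FIFO take), then fall back to the
     * group entry queue. Returns null if everything is empty.
     */
    static Runnable nextTask(ArrayDeque<Runnable> own,
                             List<ArrayDeque<Runnable>> others,
                             ArrayDeque<Runnable> entryQueue) {
        Runnable t = own.pollLast();              // pop our newest task
        if (t != null) return t;
        for (ArrayDeque<Runnable> victim : others) {
            t = victim.pollFirst();               // steal victim's oldest task
            if (t != null) return t;
        }
        return entryQueue.pollFirst();            // take from group entry queue
    }
}
```

Note the asymmetry, which the real framework shares: a thread pops its own newest task but steals a victim's oldest one.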
Implementations of the underlying representations and operations are geared for use on JVMs operating on multiple CPUs (although they should of course work fine on single CPUs as well).
A possible snapshot of a FJTaskRunner's DEQ is:
    0     1     2     3     4     5     6    ...
 +-----+-----+-----+-----+-----+-----+-----+--
 |     |  t  |  t  |  t  |  t  |     |     | ...   deq array
 +-----+-----+-----+-----+-----+-----+-----+--
          ^                       ^
        base                     top
  (incremented              (incremented
   on take,                  on push,
   decremented               decremented
   on put)                   on pop)
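The snapshot implies monotonically growing base and top indices mapped onto a power-of-two-sized array. A tiny illustration of that arithmetic (the mask trick is standard for power-of-two circular buffers; the names here are ours, not FJTaskRunner's):

```java
class DeqIndexSketch {
    /** Maps a monotonically increasing index to an array slot. */
    static int slot(int index, int capacity) {
        return index & (capacity - 1);   // cheap modulo; capacity must be 2^k
    }

    /** Number of tasks currently queued, given the two indices. */
    static int size(int base, int top) {
        return top - base;
    }
}
```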
FJTasks are held in elements of the DEQ, which is maintained as a bounded array that works similarly to a circular bounded buffer. To ensure visibility of stolen FJTasks across threads, the array elements must be volatile. Using volatile rather than synchronization suffices here since each task accessed by a thread is either one that it created or one that it has never seen before; thus we cannot encounter any staleness problems executing run methods. FJTask programmers must still be sure to either synchronize or use volatile for shared data within their own run methods.
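As a small illustration of that last obligation (using a plain Runnable here, since user code would normally extend FJTask): a result shared between tasks should be volatile or accessed under synchronization.

```java
class SharedDataSketch {
    // volatile ensures a value written inside one task's run method is
    // visible to any thread that later reads it
    static volatile int result;

    static Runnable producer(int value) {
        return () -> result = value;   // body of a hypothetical run method
    }
}
```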
However, since there is no way to declare an array of volatiles in Java, the DEQ elements actually hold VolatileTaskRef objects, each of which in turn holds a volatile reference to a FJTask. Even with the double-indirection overhead of volatile refs, using an array for the DEQ works out better than linking them since fewer shared memory locations need to be touched or modified by the threads while using the DEQ. Further, the double indirection may alleviate cache-line sharing effects (which cannot otherwise be directly dealt with in Java).
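The double indirection can be reconstructed roughly as below. The class and field names are illustrative, not the real VolatileTaskRef; on a modern JVM one would likely reach for java.util.concurrent.atomic.AtomicReferenceArray instead.

```java
class VolatileRefSketch {
    /** One cell of the DEQ: Java has no volatile arrays, so the
        volatility lives in a field of the per-cell object. */
    static final class Cell {
        volatile Runnable ref;
    }

    static Cell[] newArray(int capacity) {
        Cell[] cells = new Cell[capacity];
        for (int i = 0; i < capacity; i++)
            cells[i] = new Cell();       // pre-fill so cells are never null
        return cells;
    }
}
```

Reading a task then costs two dereferences (array cell, then volatile field), but the cells themselves never change identity, which keeps the set of shared memory locations small.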
The indices for the base and top of the DEQ are declared as volatile. The main contention point with multiple FJTaskRunner threads occurs when one thread is trying to pop its own stack while another is trying to steal from it. This is handled via a specialization of Dekker's algorithm, in which the popping thread pre-decrements top and then checks it against base. To be conservative in the face of JVMs that only partially honor the specification for volatile, the pop proceeds without synchronization only if there are apparently enough items for both a simultaneous pop and take to succeed; otherwise it enters a synchronized lock to check whether the DEQ is actually empty, failing if so. The stealing thread does almost the opposite, but is set up to be less likely to win in cases of contention: steals always run under synchronized locks in order to avoid conflicts with other ongoing steals. They pre-increment base and then check it against top, backing out (resetting the base index and failing to steal) if the DEQ is empty or is about to become empty because of an ongoing pop.
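A stripped-down sketch of this protocol is below. It is illustrative only: the real FJTaskRunner differs in details (resizing, statistics, the exact slow-path checks), tasks are modeled as Runnables in a fixed plain array, and the per-element volatility discussed earlier is omitted for brevity.

```java
class DekkerDeqSketch {
    final Runnable[] slots = new Runnable[16];   // capacity must be 2^k
    volatile int base = 0;                       // next slot to steal from
    volatile int top  = 0;                       // next free slot for push

    /** Owner-only pop: lock-free unless the last item may be contended. */
    Runnable pop() {
        int t = top - 1;
        top = t;                                 // pre-decrement top
        if (base < t)                            // >= 2 items: a pop and a
            return slots[t & (slots.length - 1)];//   steal both fit safely
        return confirmPop(t);                    // possibly the last item
    }

    synchronized Runnable confirmPop(int provisionalTop) {
        if (base <= provisionalTop)              // item still there: we win
            return slots[provisionalTop & (slots.length - 1)];
        top = base;                              // lost to a steal: reset, fail
        return null;
    }

    /** Steal: always synchronized, and set up to lose ties with pop. */
    synchronized Runnable take() {
        int b = base;
        base = b + 1;                            // pre-increment base
        if (base <= top)                         // item present, no pop racing
            return slots[b & (slots.length - 1)];
        base = b;                                // back out: empty or contended
        return null;
    }
}
```

Note how the asymmetry matches the text: pop takes the lock only when a single item might remain, while take always locks and yields to an in-progress pop.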
A push operation can normally run concurrently with a steal. A push enters a synchronized lock only if the DEQ appears full, in which case the array must either be resized or have its indices adjusted to account for wrap-around of the bounded DEQ. The put operation always requires synchronization.
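A corresponding sketch of the push fast path (again illustrative, with hypothetical names: this slowPush just grows a plain array, whereas the real slow path also handles index adjustment on wrap-around):

```java
class PushSketch {
    Runnable[] slots = new Runnable[4];          // tiny capacity for the demo
    volatile int base = 0;
    volatile int top  = 0;

    /** Owner-only push: lock-free unless the array looks full. */
    void push(Runnable r) {
        int t = top;
        if (t < base + slots.length) {           // room left: no lock needed
            slots[t & (slots.length - 1)] = r;
            top = t + 1;                         // publish after the write
        } else {
            slowPush(r);
        }
    }

    /** Slow path under lock: grow the array, then retry the push. */
    synchronized void slowPush(Runnable r) {
        Runnable[] bigger = new Runnable[slots.length * 2];
        for (int i = base; i < top; i++)
            bigger[i & (bigger.length - 1)] = slots[i & (slots.length - 1)];
        slots = bigger;
        push(r);
    }
}
```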
When a FJTaskRunner thread has no tasks of its own to run, it tries to be a good citizen. Threads run at lower priority while scanning for work.
If the current task is waiting via yield, the thread alternates scans (each starting at a randomly chosen victim) with calls to Thread.yield. This is well-behaved so long as the JVM handles Thread.yield in a sensible fashion. (It need not: Thread.yield is so underspecified that it is legal for a JVM to treat it as a no-op.) This also keeps things well-behaved even if we are running on a uniprocessor JVM using a simple cooperative threading model.
If a thread needing work is otherwise idle (which occurs only in the main runloop), and there are no available tasks to steal or poll, it instead enters a sleep-based phase (actually a timed wait(msec)) in which it sleeps for progressively longer durations (up to a maximum of FJTaskRunnerGroup.MAX_SLEEP_TIME, currently 100 ms) between scans. If all threads in the group are idling, they further progress to a hard wait phase, suspending until a new task is entered into the FJTaskRunnerGroup entry queue. A sleeping FJTaskRunner thread may be awakened by a new task being put into the group entry queue or by another FJTaskRunner becoming active, but not merely by some DEQ becoming non-empty. Thus MAX_SLEEP_TIME bounds sleep durations in cases where all but one worker thread start sleeping even though work will eventually be produced by a thread that is taking a long time to place tasks in its DEQ. These sleep mechanics are handled in the FJTaskRunnerGroup class.
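The actual schedule lives in FJTaskRunnerGroup and may differ in detail; the following shows the general shape of such a progressive back-off, doubling the sleep duration up to the documented 100 ms cap (the method name is ours):

```java
class SleepBackoffSketch {
    static final long MAX_SLEEP_TIME = 100;      // ms, per the text above

    /** Sleep before the n-th consecutive idle rescan: doubles, then caps. */
    static long sleepTime(int attempt) {
        long ms = 1L << Math.min(attempt, 20);   // clamp shift to avoid overflow
        return Math.min(ms, MAX_SLEEP_TIME);
    }
}
```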
Composite operations such as taskJoin include heavy manual inlining of the most time-critical operations (mainly FJTask.invoke). This opens up a few opportunities for further hand-optimizations. Until Java compilers get a lot smarter, these tweaks improve performance significantly enough for task-intensive programs to be worth the poorer maintainability and code duplication.
Because they are so fragile and performance-sensitive, nearly all methods are declared as final. However, nearly all fields and methods are also declared as protected, so it is possible, with much care, to extend functionality in subclasses. (Normally you would also need to subclass FJTaskRunnerGroup.)
None of the normal java.lang.Thread class methods should ever be called on FJTaskRunners. For this reason, it might have been nicer to declare FJTaskRunner as a Runnable to run within a Thread. However, this would have complicated many minor logistics. And since no FJTaskRunner methods should normally be called from outside the FJTask and FJTaskRunnerGroup classes either, this decision doesn't impact usage.
You might think that layering this kind of framework on top of Java threads, which are already several levels removed from raw CPU scheduling on most systems, would lead to very poor performance. But on the platforms tested, the performance is quite good.
Public Member Functions
|FJTaskRunner (IFJTaskRunnerGroup g)|
|final void||push (final FJTask r)|
Package Attributes
|boolean||active = false|
Protected Member Functions
|final void||coInvoke (FJTask[] tasks)|
|final void||coInvoke (final FJTask w, final FJTask v)|
|final synchronized FJTask||confirmPop (int provisionalTop)|
|FJTask||confirmTake (int oldBase)|
|final IFJTaskRunnerGroup||getGroup ()|
|final FJTask||pop ()|
|final synchronized void||put (final FJTask r)|
|void||scan (final FJTask waitingFor)|
|void||setRunPriority (int pri)|
|void||setScanPriority (int pri)|
|void||slowCoInvoke (FJTask[] tasks)|
|void||slowCoInvoke (final FJTask w, final FJTask v)|
|synchronized void||slowPush (final FJTask r)|
|final synchronized FJTask||take ()|
|final void||taskJoin (final FJTask w)|
|final void||taskYield ()|
|final Object||barrier = new Object()|
|volatile int||base = 0|
|VolatileTaskRef[]||deq = VolatileTaskRef.newArray(INITIAL_CAPACITY)|
|int||runs = 0|
|int||scanPriority = Thread.MIN_PRIORITY + 1|
|int||scans = 0|
|int||steals = 0|
|boolean||suspending = false|
|volatile int||top = 0|
Static Protected Attributes
|static final int||INITIAL_CAPACITY = 4096|
|static final int||MAX_CAPACITY = 1 << 30|
Package Functions
|synchronized void||setSuspending (boolean susp)|
Static Package Attributes
|static final boolean||COLLECT_STATS = true|