/*
 * Written by Doug Lea, Bill Scherer, and Michael Scott with
 * assistance from members of JCP JSR-166 Expert Group and released to
 * the public domain, as explained at
 * http://creativecommons.org/licenses/publicdomain
 */

package java.util.concurrent;

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

/**
 * A synchronization point at which threads can pair and swap elements
 * within pairs.  Each thread presents some object on entry to the
 * {@link #exchange exchange} method, matches with a partner thread,
 * and receives its partner's object on return.  An Exchanger may be
 * viewed as a bidirectional form of a {@link SynchronousQueue}.
 * Exchangers may be useful in applications such as genetic algorithms
 * and pipeline designs.
 *
 * <p><b>Sample Usage:</b>
 * Here are the highlights of a class that uses an {@code Exchanger}
 * to swap buffers between threads so that the thread filling the
 * buffer gets a freshly emptied one when it needs it, handing off the
 * filled one to the thread emptying the buffer.
 * <pre>{@code
 * class FillAndEmpty {
 *   Exchanger<DataBuffer> exchanger = new Exchanger<DataBuffer>();
 *   DataBuffer initialEmptyBuffer = ... a made-up type
 *   DataBuffer initialFullBuffer = ...
 *
 *   class FillingLoop implements Runnable {
 *     public void run() {
 *       DataBuffer currentBuffer = initialEmptyBuffer;
 *       try {
 *         while (currentBuffer != null) {
 *           addToBuffer(currentBuffer);
 *           if (currentBuffer.isFull())
 *             currentBuffer = exchanger.exchange(currentBuffer);
 *         }
 *       } catch (InterruptedException ex) { ... handle ... }
 *     }
 *   }
 *
 *   class EmptyingLoop implements Runnable {
 *     public void run() {
 *       DataBuffer currentBuffer = initialFullBuffer;
 *       try {
 *         while (currentBuffer != null) {
 *           takeFromBuffer(currentBuffer);
 *           if (currentBuffer.isEmpty())
 *             currentBuffer = exchanger.exchange(currentBuffer);
 *         }
 *       } catch (InterruptedException ex) { ... handle ...}
 *     }
 *   }
 *
 *   void start() {
 *     new Thread(new FillingLoop()).start();
 *     new Thread(new EmptyingLoop()).start();
 *   }
 * }
 * }</pre>
 *
 * <p>Memory consistency effects: For each pair of threads that
 * successfully exchange objects via an {@code Exchanger}, actions
 * prior to the {@code exchange()} in each thread
 * <a href="package-summary.html#MemoryVisibility"><i>happen-before</i></a>
 * those subsequent to a return from the corresponding {@code exchange()}
 * in the other thread.
 *
 * @since 1.5
 * @author Doug Lea and Bill Scherer and Michael Scott
 * @param <V> The type of objects that may be exchanged
 */
public class Exchanger<V> {
    /*
     * Algorithm Description:
     *
     * The basic idea is to maintain a "slot", which is a reference to
     * a Node containing both an Item to offer and a "hole" waiting to
     * get filled in.  If an incoming "occupying" thread sees that the
     * slot is null, it CAS'es (compareAndSets) a Node there and waits
     * for another to invoke exchange.  That second "fulfilling" thread
     * sees that the slot is non-null, and so CASes it back to null,
     * also exchanging items by CASing the hole, plus waking up the
     * occupying thread if it is blocked.  In each case CAS'es may
     * fail because a slot at first appears non-null but is null upon
     * CAS, or vice-versa.  So threads may need to retry these
     * actions.
     *
     * This simple approach works great when there are only a few
     * threads using an Exchanger, but performance rapidly
     * deteriorates due to CAS contention on the single slot when
     * there are lots of threads using an exchanger.  So instead we use
     * an "arena"; basically a kind of hash table with a dynamically
     * varying number of slots, any one of which can be used by
     * threads performing an exchange.  Incoming threads pick slots
     * based on a hash of their Thread ids.  If an incoming thread
     * fails to CAS in its chosen slot, it picks an alternative slot
     * instead.  And similarly from there.  If a thread successfully
     * CASes into a slot but no other thread arrives, it tries
     * another, heading toward the zero slot, which always exists even
     * if the table shrinks.  The particular mechanics controlling this
     * are as follows:
     *
     * Waiting: Slot zero is special in that it is the only slot that
     * exists when there is no contention.  A thread occupying slot
     * zero will block if no thread fulfills it after a short spin.
     * In other cases, occupying threads eventually give up and try
     * another slot.  Waiting threads spin for a while (a period that
     * should be a little less than a typical context-switch time)
     * before either blocking (if slot zero) or giving up (if other
     * slots) and restarting.  There is no reason for threads to block
     * unless there are unlikely to be any other threads present.
     * Occupants are mainly avoiding memory contention so sit there
     * quietly polling for a shorter period than it would take to
     * block and then unblock them.  Non-slot-zero waits that elapse
     * because of lack of other threads waste around one extra
     * context-switch time per try, which is still on average much
     * faster than alternative approaches.
     *
     * Sizing: Usually, using only a few slots suffices to reduce
     * contention.  Especially with small numbers of threads, using
     * too many slots can lead to just as poor performance as using
     * too few of them, and there's not much room for error.  The
     * variable "max" maintains the number of slots actually in
     * use.  It is increased when a thread sees too many CAS
     * failures.  (This is analogous to resizing a regular hash table
     * based on a target load factor, except here, growth steps are
     * just one-by-one rather than proportional.)  Growth requires
     * contention failures in each of three tried slots.  Requiring
     * multiple failures for expansion copes with the fact that some
     * failed CASes are not due to contention but instead to simple
     * races between two threads or thread pre-emptions occurring
     * between reading and CASing.  Also, very transient peak
     * contention can be much higher than the average sustainable
     * levels.  The max limit is decreased on average 50% of the times
     * that a non-slot-zero wait elapses without being fulfilled.
     * Threads experiencing elapsed waits move closer to zero, so
     * eventually find existing (or future) threads even if the table
     * has been shrunk due to inactivity.  The chosen mechanics and
     * thresholds for growing and shrinking are intrinsically
     * entangled with indexing and hashing inside the exchange code,
     * and can't be nicely abstracted out.
     *
     * Hashing: Each thread picks its initial slot to use in accord
     * with a simple hashcode.  The sequence is the same on each
     * encounter by any given thread, but effectively random across
     * threads.  Using arenas encounters the classic cost vs quality
     * tradeoffs of all hash tables.  Here, we use a one-step FNV-1a
     * hash code based on the current thread's Thread.getId(), along
     * with a cheap approximation to a mod operation to select an
     * index.  The downside of optimizing index selection in this way
     * is that the code is hardwired to use a maximum table size of
     * 32.  But this value more than suffices for known platforms and
     * applications.
     *
     * Probing: On sensed contention of a selected slot, we probe
     * sequentially through the table, analogously to linear probing
     * after collision in a hash table.  (We move circularly, in
     * reverse order, to mesh best with table growth and shrinkage
     * rules.)  Except that to minimize the effects of false-alarms
     * and cache thrashing, we try the first selected slot twice
     * before moving.
     *
     * Padding: Even with contention management, slots are heavily
     * contended, so use cache-padding to avoid poor memory
     * performance.  Because of this, slots are lazily constructed
     * only when used, to avoid wasting this space unnecessarily.
     * While isolation of locations is not much of an issue at first
     * in an application, as time goes on and garbage-collectors
     * perform compaction, slots are very likely to be moved adjacent
     * to each other, which can cause much thrashing of cache lines on
     * MPs unless padding is employed.
     *
     * This is an improvement of the algorithm described in the paper
     * "A Scalable Elimination-based Exchange Channel" by William
     * Scherer, Doug Lea, and Michael Scott in Proceedings of SCOOL05
     * workshop.  Available at: http://hdl.handle.net/1802/2104
     */
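
    /*
     * A minimal sketch, for illustration only, of the single-slot protocol
     * described above.  It omits the arena, spinning, timeouts, and
     * cancellation, and the names "slot" and "item" are placeholders rather
     * than fields of this class:
     *
     *   Node me = new Node(item);
     *   for (;;) {
     *       Node other = (Node) slot.get();
     *       if (other != null && slot.compareAndSet(other, null)) {
     *           // Fulfill: drop our item into the occupant's hole, take its item
     *           if (other.compareAndSet(null, item)) {
     *               LockSupport.unpark(other.waiter);
     *               return other.item;
     *           }                            // else that node was cancelled; retry
     *       } else if (other == null && slot.compareAndSet(null, me)) {
     *           // Occupy: wait until a fulfiller fills our hole
     *           Object v;
     *           while ((v = me.get()) == null)
     *               LockSupport.park(me);    // the real code spins and sets waiter first
     *           return v;
     *       }
     *   }
     */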

    /** The number of CPUs, for sizing and spin control */
    private static final int NCPU = Runtime.getRuntime().availableProcessors();

    /**
     * The capacity of the arena.  Set to a value that provides more
     * than enough space to handle contention.  On small machines
     * most slots won't be used, but it is still not wasted because
     * the extra space provides some machine-level address padding
     * to minimize interference with heavily CAS'ed Slot locations.
     * And on very large machines, performance eventually becomes
     * bounded by memory bandwidth, not numbers of threads/CPUs.
     * This constant cannot be changed without also modifying
     * indexing and hashing algorithms.
     */
    private static final int CAPACITY = 32;

    /**
     * The value of "max" that will hold all threads without
     * contention.  When this value is less than CAPACITY, some
     * otherwise wasted expansion can be avoided.
     */
    private static final int FULL =
        Math.max(0, Math.min(CAPACITY, NCPU / 2) - 1);
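
    // For illustration, following directly from the formula above: NCPU of 1
    // or 2 gives FULL = 0, NCPU of 8 gives FULL = 3, and NCPU of 64 or more
    // caps FULL at CAPACITY - 1 = 31.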

    /**
     * The number of times to spin (doing nothing except polling a
     * memory location) before blocking or giving up while waiting to
     * be fulfilled.  Should be zero on uniprocessors.  On
     * multiprocessors, this value should be large enough so that two
     * threads exchanging items as fast as possible block only when
     * one of them is stalled (due to GC or preemption), but not much
     * longer, to avoid wasting CPU resources.  Seen differently, this
     * value is a little over half the number of cycles of an average
     * context switch time on most systems.  The value here is
     * approximately the average of those across a range of tested
     * systems.
     */
    private static final int SPINS = (NCPU == 1) ? 0 : 2000;

    /**
     * The number of times to spin before blocking in timed waits.
     * Timed waits spin more slowly because checking the time takes
     * time.  The best value relies mainly on the relative rate of
     * System.nanoTime vs memory accesses.  The value is empirically
     * derived to work well across a variety of systems.
     */
    private static final int TIMED_SPINS = SPINS / 20;

    /**
     * Sentinel item representing cancellation of a wait due to
     * interruption, timeout, or elapsed spin-waits.  This value is
     * placed in holes on cancellation, and used as a return value
     * from waiting methods to indicate failure to set or get hole.
     */
    private static final Object CANCEL = new Object();

    /**
     * Value representing null arguments/returns from public
     * methods.  This disambiguates from internal requirement that
     * holes start out as null to mean they are not yet set.
     */
    private static final Object NULL_ITEM = new Object();

    /**
     * Nodes hold partially exchanged data.  This class
     * opportunistically subclasses AtomicReference to represent the
     * hole.  So get() returns hole, and compareAndSet CAS'es value
     * into hole.  This class cannot be parameterized as "V" because
     * of the use of non-V CANCEL sentinels.
     */
    private static final class Node extends AtomicReference<Object> {
        /** The element offered by the Thread creating this node. */
        public final Object item;

        /** The Thread waiting to be signalled; null until waiting. */
        public volatile Thread waiter;

        /**
         * Creates node with given item and empty hole.
         * @param item the item
         */
        public Node(Object item) {
            this.item = item;
        }
    }

    /**
     * A Slot is an AtomicReference with heuristic padding to lessen
     * cache effects of this heavily CAS'ed location.  While the
     * padding adds noticeable space, all slots are created only on
     * demand, and there will be more than one of them only when it
     * would improve throughput more than enough to outweigh using
     * extra space.
     */
    private static final class Slot extends AtomicReference<Object> {
        // Improve likelihood of isolation on <= 64 byte cache lines
        long q0, q1, q2, q3, q4, q5, q6, q7, q8, q9, qa, qb, qc, qd, qe;
    }
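
    // For illustration: the 15 long fields above add 120 bytes of padding per
    // Slot, so it is unlikely that the heavily CAS'ed value fields of two
    // Slots end up on the same <= 64 byte cache line, even after a compacting
    // collector places the Slot objects next to each other.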

    /**
     * Slot array.  Elements are lazily initialized when needed.
     * Declared volatile to enable double-checked lazy construction.
     */
    private volatile Slot[] arena = new Slot[CAPACITY];

    /**
     * The maximum slot index being used.  The value sometimes
     * increases when a thread experiences too many CAS contentions,
     * and sometimes decreases when a spin-wait elapses.  Changes
     * are performed only via compareAndSet, to avoid stale values
     * when a thread happens to stall right before setting.
     */
    private final AtomicInteger max = new AtomicInteger();

    /**
     * Main exchange function, handling the different policy variants.
     * Uses Object, not "V" as argument and return value to simplify
     * handling of sentinel values.  Callers from public methods decode
     * and cast accordingly.
     *
     * @param item the (non-null) item to exchange
     * @param timed true if the wait is timed
     * @param nanos if timed, the maximum wait time
     * @return the other thread's item, or CANCEL if interrupted or timed out
     */
    private Object doExchange(Object item, boolean timed, long nanos) {
        Node me = new Node(item);                 // Create in case occupying
        int index = hashIndex();                  // Index of current slot
        int fails = 0;                            // Number of CAS failures

        for (;;) {
            Object y;                             // Contents of current slot
            Slot slot = arena[index];
            if (slot == null)                     // Lazily initialize slots
                createSlot(index);                // Continue loop to reread
            else if ((y = slot.get()) != null &&  // Try to fulfill
                     slot.compareAndSet(y, null)) {
                Node you = (Node)y;               // Transfer item
                if (you.compareAndSet(null, item)) {
                    LockSupport.unpark(you.waiter);
                    return you.item;
                }                                 // Else cancelled; continue
            }
            else if (y == null &&                 // Try to occupy
                     slot.compareAndSet(null, me)) {
                if (index == 0)                   // Blocking wait for slot 0
                    return timed? awaitNanos(me, slot, nanos): await(me, slot);
                Object v = spinWait(me, slot);    // Spin wait for non-0
                if (v != CANCEL)
                    return v;
                me = new Node(item);              // Throw away cancelled node
                int m = max.get();
                if (m > (index >>>= 1))           // Decrease index
                    max.compareAndSet(m, m - 1);  // Maybe shrink table
            }
            else if (++fails > 1) {               // Allow 2 fails on 1st slot
                int m = max.get();
                if (fails > 3 && m < FULL && max.compareAndSet(m, m + 1))
                    index = m + 1;                // Grow on 3rd failed slot
                else if (--index < 0)
                    index = m;                    // Circularly traverse
            }
        }
    }
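
    // Illustrative trace of the simplest case, assuming an otherwise idle
    // exchanger whose "max" is 0, so both threads hash to index 0:
    //   Thread A: finds arena[0] null, calls createSlot(0), retries, CASes
    //             its node into slot 0, and (being at index 0) waits in await.
    //   Thread B: sees slot 0 non-null, CASes it back to null, CASes its item
    //             into A's hole, unparks A, and returns A's item.
    //   Thread A: observes its hole filled and returns B's item.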

    /**
     * Returns a hash index for the current thread.  Uses a one-step
     * FNV-1a hash code (http://www.isthe.com/chongo/tech/comp/fnv/)
     * based on the current thread's Thread.getId().  These hash codes
     * have more uniform distribution properties with respect to small
     * moduli (here 1-31) than do other simple hashing functions.
     *
     * <p>To return an index between 0 and max, we use a cheap
     * approximation to a mod operation that also corrects for bias
     * due to non-power-of-2 remaindering (see {@link
     * java.util.Random#nextInt}).  Bits of the hashcode are masked
     * with "nbits", the ceiling power of two of table size (looked up
     * in a table packed into three ints).  If too large, this is
     * retried after rotating the hash by nbits bits, while forcing new
     * top bit to 0, which guarantees eventual termination (although
     * with a non-random-bias).  This requires an average of less than
     * 2 tries for all table sizes, and has a maximum 2% difference
     * from perfectly uniform slot probabilities when applied to all
     * possible hash codes for sizes less than 32.
     *
     * @return a per-thread-random index, 0 <= index <= max
     */
    private final int hashIndex() {
        long id = Thread.currentThread().getId();
        int hash = (((int)(id ^ (id >>> 32))) ^ 0x811c9dc5) * 0x01000193;

        int m = max.get();
        int nbits = (((0xfffffc00  >> m) & 4) | // Compute ceil(log2(m+1))
                     ((0x000001f8 >>> m) & 2) | // The constants hold
                     ((0xffff00f2 >>> m) & 1)); // a lookup table
        int index;
        while ((index = hash & ((1 << nbits) - 1)) > m)       // May retry on
            hash = (hash >>> nbits) | (hash << (33 - nbits)); // non-power-2 m
        return index;
    }
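
    // Worked examples of the nbits lookup above (each follows by evaluating
    // the packed three-int table):
    //   m = 0 -> nbits = 0, so index is always 0 (only slot 0 is in use);
    //   m = 3 -> nbits = 2, mask 0x3; 4 slots is a power of 2, so no retry;
    //   m = 4 -> nbits = 3, mask 0x7; the hash is rotated and re-masked
    //            whenever the masked value exceeds 4.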

    /**
     * Creates a new slot at given index.  Called only when the slot
     * appears to be null.  Relies on double-check using builtin
     * locks, since they rarely contend.  This in turn relies on the
     * arena array being declared volatile.
     *
     * @param index the index to add slot at
     */
    private void createSlot(int index) {
        // Create slot outside of lock to narrow sync region
        Slot newSlot = new Slot();
        Slot[] a = arena;
        synchronized (a) {
            if (a[index] == null)
                a[index] = newSlot;
        }
    }

    /**
     * Tries to cancel a wait for the given node waiting in the given
     * slot, and if successful, helps clear the node from its slot to
     * avoid garbage retention.
     *
     * @param node the waiting node
     * @param slot the slot it is waiting in
     * @return true if successfully cancelled
     */
    private static boolean tryCancel(Node node, Slot slot) {
        if (!node.compareAndSet(null, CANCEL))
            return false;
        if (slot.get() == node) // pre-check to minimize contention
            slot.compareAndSet(node, null);
        return true;
    }

    // Three forms of waiting. Each just different enough not to merge
    // code with others.

    /**
     * Spin-waits for hole for a non-0 slot.  Fails if spin elapses
     * before hole filled.  Does not check interrupt, relying on check
     * in public exchange method to abort if interrupted on entry.
     *
     * @param node the waiting node
     * @param slot the slot the node is waiting in
     * @return on success, the hole; on failure, CANCEL
     */
    private static Object spinWait(Node node, Slot slot) {
        int spins = SPINS;
        for (;;) {
            Object v = node.get();
            if (v != null)
                return v;
            else if (spins > 0)
                --spins;
            else
                tryCancel(node, slot);
        }
    }

    /**
     * Waits for (by spinning and/or blocking) and gets the hole
     * filled in by another thread.  Fails if interrupted before
     * hole filled.
     *
     * When a node/thread is about to block, it sets its waiter field
     * and then rechecks state at least one more time before actually
     * parking, thus covering race vs fulfiller noticing that waiter
     * is non-null so should be woken.
     *
     * Thread interruption status is checked only surrounding calls to
     * park.  The caller is assumed to have checked interrupt status
     * on entry.
     *
     * @param node the waiting node
     * @param slot the slot the node is waiting in
     * @return on success, the hole; on failure, CANCEL
     */
    private static Object await(Node node, Slot slot) {
        Thread w = Thread.currentThread();
        int spins = SPINS;
        for (;;) {
            Object v = node.get();
            if (v != null)
                return v;
            else if (spins > 0)                 // Spin-wait phase
                --spins;
            else if (node.waiter == null)       // Set up to block next
                node.waiter = w;
            else if (w.isInterrupted())         // Abort on interrupt
                tryCancel(node, slot);
            else                                // Block
                LockSupport.park(node);
        }
    }

    /**
     * Waits for (at index 0) and gets the hole filled in by another
     * thread.  Fails if timed out or interrupted before hole filled.
     * Same basic logic as untimed version, but a bit messier.
     *
     * @param node the waiting node
     * @param slot the slot the node is waiting in
     * @param nanos the wait time
     * @return on success, the hole; on failure, CANCEL
     */
    private Object awaitNanos(Node node, Slot slot, long nanos) {
        int spins = TIMED_SPINS;
        long lastTime = 0;
        Thread w = null;
        for (;;) {
            Object v = node.get();
            if (v != null)
                return v;
            long now = System.nanoTime();
            if (w == null)
                w = Thread.currentThread();
            else
                nanos -= now - lastTime;
            lastTime = now;
            if (nanos > 0) {
                if (spins > 0)
                    --spins;
                else if (node.waiter == null)
                    node.waiter = w;
                else if (w.isInterrupted())
                    tryCancel(node, slot);
                else
                    LockSupport.parkNanos(node, nanos);
            }
            else if (tryCancel(node, slot) && !w.isInterrupted())
                return scanOnTimeout(node);
        }
    }

    /**
     * Sweeps through arena checking for any waiting threads.  Called
     * only upon return from timeout while waiting in slot 0.  When a
     * thread gives up on a timed wait, it is possible that a
     * previously-entered thread is still waiting in some other
     * slot.  So we scan to check for any.  This is almost always
     * overkill, but decreases the likelihood of timeouts when there
     * are other threads present to far less than that in lock-based
     * exchangers in which earlier-arriving threads may still be
     * waiting on entry locks.
     *
     * @param node the waiting node
     * @return another thread's item, or CANCEL
     */
    private Object scanOnTimeout(Node node) {
        Object y;
        for (int j = arena.length - 1; j >= 0; --j) {
            Slot slot = arena[j];
            if (slot != null) {
                while ((y = slot.get()) != null) {
                    if (slot.compareAndSet(y, null)) {
                        Node you = (Node)y;
                        if (you.compareAndSet(null, node.item)) {
                            LockSupport.unpark(you.waiter);
                            return you.item;
                        }
                    }
                }
            }
        }
        return CANCEL;
    }

    /**
     * Creates a new Exchanger.
     */
    public Exchanger() {
    }

    /**
     * Waits for another thread to arrive at this exchange point (unless
     * the current thread is {@linkplain Thread#interrupt interrupted}),
     * and then transfers the given object to it, receiving its object
     * in return.
     *
     * <p>If another thread is already waiting at the exchange point then
     * it is resumed for thread scheduling purposes and receives the object
     * passed in by the current thread.  The current thread returns immediately,
     * receiving the object passed to the exchange by that other thread.
     *
     * <p>If no other thread is already waiting at the exchange then the
     * current thread is disabled for thread scheduling purposes and lies
     * dormant until one of two things happens:
     * <ul>
     * <li>Some other thread enters the exchange; or
     * <li>Some other thread {@linkplain Thread#interrupt interrupts} the current
     * thread.
     * </ul>
     * <p>If the current thread:
     * <ul>
     * <li>has its interrupted status set on entry to this method; or
     * <li>is {@linkplain Thread#interrupt interrupted} while waiting
     * for the exchange,
     * </ul>
     * then {@link InterruptedException} is thrown and the current thread's
     * interrupted status is cleared.
     *
     * @param x the object to exchange
     * @return the object provided by the other thread
     * @throws InterruptedException if the current thread was
     *         interrupted while waiting
     */
    public V exchange(V x) throws InterruptedException {
        if (!Thread.interrupted()) {
            Object v = doExchange(x == null? NULL_ITEM : x, false, 0);
            if (v == NULL_ITEM)
                return null;
            if (v != CANCEL)
                return (V)v;
            Thread.interrupted(); // Clear interrupt status on IE throw
        }
        throw new InterruptedException();
    }

    /**
     * Waits for another thread to arrive at this exchange point (unless
     * the current thread is {@linkplain Thread#interrupt interrupted} or
     * the specified waiting time elapses), and then transfers the given
     * object to it, receiving its object in return.
     *
     * <p>If another thread is already waiting at the exchange point then
     * it is resumed for thread scheduling purposes and receives the object
     * passed in by the current thread.  The current thread returns immediately,
     * receiving the object passed to the exchange by that other thread.
     *
     * <p>If no other thread is already waiting at the exchange then the
     * current thread is disabled for thread scheduling purposes and lies
     * dormant until one of three things happens:
     * <ul>
     * <li>Some other thread enters the exchange; or
     * <li>Some other thread {@linkplain Thread#interrupt interrupts}
     * the current thread; or
     * <li>The specified waiting time elapses.
     * </ul>
     * <p>If the current thread:
     * <ul>
     * <li>has its interrupted status set on entry to this method; or
     * <li>is {@linkplain Thread#interrupt interrupted} while waiting
     * for the exchange,
     * </ul>
     * then {@link InterruptedException} is thrown and the current thread's
     * interrupted status is cleared.
     *
     * <p>If the specified waiting time elapses then {@link
     * TimeoutException} is thrown.  If the time is less than or equal
     * to zero, the method will not wait at all.
     *
     * @param x the object to exchange
     * @param timeout the maximum time to wait
     * @param unit the time unit of the {@code timeout} argument
     * @return the object provided by the other thread
     * @throws InterruptedException if the current thread was
     *         interrupted while waiting
     * @throws TimeoutException if the specified waiting time elapses
     *         before another thread enters the exchange
     */
    public V exchange(V x, long timeout, TimeUnit unit)
        throws InterruptedException, TimeoutException {
        if (!Thread.interrupted()) {
            Object v = doExchange(x == null? NULL_ITEM : x,
                                  true, unit.toNanos(timeout));
            if (v == NULL_ITEM)
                return null;
            if (v != CANCEL)
                return (V)v;
            if (!Thread.interrupted())
                throw new TimeoutException();
        }
        throw new InterruptedException();
    }
}
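
/*
 * Illustrative usage sketch only; this demo class is not part of the original
 * java.util.concurrent sources.  It is a compile-ready variant of the
 * FillAndEmpty example in the class javadoc, using StringBuilder as the
 * "buffer" type.  In real use it would live in its own file and package and
 * would import java.util.concurrent.Exchanger; all names below are invented
 * for the example.
 */
class ExchangerBufferDemo {
    public static void main(String[] args) throws InterruptedException {
        final Exchanger<StringBuilder> exchanger = new Exchanger<StringBuilder>();
        final int rounds = 3;

        // Fills a buffer, then swaps it for whatever the emptying thread offers.
        Thread filler = new Thread(new Runnable() {
            public void run() {
                StringBuilder buffer = new StringBuilder();
                try {
                    for (int i = 0; i < rounds; i++) {
                        buffer.append("batch-").append(i);    // "fill" the buffer
                        buffer = exchanger.exchange(buffer);  // hand off full, get empty
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        // Offers an empty buffer, receives a full one, drains it, and repeats.
        Thread emptier = new Thread(new Runnable() {
            public void run() {
                StringBuilder buffer = new StringBuilder();
                try {
                    for (int i = 0; i < rounds; i++) {
                        buffer = exchanger.exchange(buffer);  // hand off empty, get full
                        System.out.println("drained: " + buffer);
                        buffer.setLength(0);                  // "empty" it again
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        filler.start();
        emptier.start();
        filler.join();
        emptier.join();
    }
}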