
Lines Matching refs:SO

71 // from our mappings, so that the associated SO can be freed up
151 so it can participate in lock sets in the usual way. */
228 acquired, so as to produce better lock-order error messages. */
284 acquired, so as to produce better lock-order error messages. */
704 found. So assert it is non-null; that in effect asserts that we
940 // is there a transition to ShM. So what we want to do is note the
942 // the lockset, so we can present it later if there should be a
945 // So this function finds such transitions. For each, it associates
949 // initialised or first locked. ExeContexts are permanent so keeping
953 // respect, so we first remove that from the pre/post locksets.
977 // just confuses the logic, so remove it from the locksets we're
982 // /* The post-transition lock set is not empty. So we are not
1007 // empty one. So lset_old must be the set of locks lost. Record
1106 return both of them. Also update 'thr' so it references the new
1144 invalid states. Requests to do so are bugs in libpthread, since
1167 /* So the lock is already held. If held as a r-lock then
1177 /* So the lock is held in w-mode. If it's held by some other
1189 /* So the lock is already held in w-mode by 'thr'. That means this
1202 /* So we are recursively re-locking a lock we already w-hold. */
1239 invalid states. Requests to do so are bugs in libpthread, since
1264 /* So the lock is already held. If held as a w-lock then
1320 the client is trying to unlock it. So complain, then ignore
1359 attempt will fail. So just complain and do nothing
1381 /* We still hold the lock. So either it's a recursive lock
1620 /* Record where the parent is so we can later refer to this in
1627 glibc sources confirms this. So we ask for a snapshot to be
1654 exit and so we have to pretty much treat it as if it was still
1658 finished, and so we need to consider the possibility that it
1664 sync event. So in any case, just let the thread exit. On NPTL,
1698 threaded programs), so we have to clean up map_threads to remove
1733 SO* so;
1757 so = libhb_so_alloc();
1758 tl_assert(so);
1760 doesn't actually exist any more, so we don't want _so_send to
1762 libhb_so_send(hbthr_q, so, True/*strong_send*/);
1763 libhb_so_recv(hbthr_s, so, True/*strong_recv*/);
1764 libhb_so_dealloc(so);
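
The fragments around source lines 1733-1764 show the pattern used at thread join: the quitting thread's vector clock is handed to the staying thread through a throwaway SO. A minimal sketch of that pattern, using only the libhb calls visible above (hbthr_q = the quitting thread's Thr*, hbthr_s = the staying thread's; the wrapper name itself is illustrative, not taken from the listing):

    /* Illustrative helper: give the staying thread a dependency on
       everything the quitting thread did before it exited. */
    static void quitter_stayer_dependence ( Thr* hbthr_q, Thr* hbthr_s )
    {
       SO* so = libhb_so_alloc();
       tl_assert(so);
       /* write the quitter's vector clock into the SO ... */
       libhb_so_send( hbthr_q, so, True/*strong_send*/ );
       /* ... and pull it into the stayer, creating the happens-before
          edge quitter-exit -> join-return */
       libhb_so_recv( hbthr_s, so, True/*strong_recv*/ );
       /* the SO is only a carrier for the clock, so it is freed at once */
       libhb_so_dealloc(so);
    }
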
2034 // only called if the real library call succeeded - so mutex is sane
2066 // only called if the real library call succeeded - so mutex is sane
2105 /* it's held. So do the normal pre-unlock actions, as copied
2155 /* A mapping from CV to (the SO associated with it, plus some
2157 signalled/broadcasted upon, we do a 'send' into the SO, and when a
2158 wait on it completes, we do a 'recv' from the SO. This is believed
2163 /* .so is the SO for this CV.
2173 SO* so; /* libhb-allocated SO */
2198 SO* so = libhb_so_alloc();
2200 cvi->so = so;
2214 tl_assert(cvi->so);
2215 libhb_so_dealloc(cvi->so);
2224 cond to a SO if it is not already so bound, and 'send' on the
2225 SO. This is later used by other thread(s) which successfully
2227 from the SO, thereby acquiring a dependency on this signalling
2242 tl_assert(cvi->so);
2276 libhb_so_send( thr->hbthr, cvi->so, True/*strong_send*/ );
2332 tl_assert(cvi->so);
2352 the SO for this cond, and 'recv' from it so as to acquire a
2369 tl_assert(cvi->so);
2372 if (!libhb_so_everSent(cvi->so)) {
2382 libhb_so_recv( thr->hbthr, cvi->so, True/*strong_recv*/ );
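Taken together, the condition-variable fragments above describe one rule: a signal/broadcast does a 'send' into the SO bound to the CV, and a completed wait does a 'recv' from it. A minimal sketch of the two halves, assuming a CVInfo record with an 'so' field and a lookup-or-allocate helper as the fragments suggest (helper and struct names are illustrative where the listing does not show them):

    typedef struct { SO* so; /* other book-keeping elided */ } CVInfo;

    /* signaller side: establish the 'send' half of the edge */
    static void on_cond_signal ( Thread* thr, void* cond ) {
       CVInfo* cvi = cond_to_CVInfo_lookup_or_alloc( cond );  /* illustrative */
       tl_assert(cvi && cvi->so);
       libhb_so_send( thr->hbthr, cvi->so, True/*strong_send*/ );
    }

    /* waiter side, after the wait has returned: 'recv' the edge */
    static void on_cond_wait_done ( Thread* thr, void* cond ) {
       CVInfo* cvi = cond_to_CVInfo_lookup_or_alloc( cond );  /* illustrative */
       tl_assert(cvi && cvi->so);
       if (!libhb_so_everSent(cvi->so)) {
          /* woke up although nothing was ever sent on this CV's SO;
             the fragments indicate an error is reported here */
       }
       libhb_so_recv( thr->hbthr, cvi->so, True/*strong_recv*/ );
    }
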
2391 associated with the CV, so as to avoid any possible resource
2495 // only called if the real library call succeeded - so mutex is sane
2529 // only called if the real library call succeeded - so mutex is sane
2549 operation is done on a semaphore (unlocking, essentially), a new SO
2552 SO), and the SO is pushed on the semaphore's stack.
2555 semaphore, we pop a SO off the semaphore's stack (which should be
2578 /* sem_t* -> XArray* SO* */
2589 static void push_SO_for_sem ( void* sem, SO* so ) {
2592 tl_assert(so);
2598 VG_(addToXA)( xa, &so );
2600 xa = VG_(newXA)( HG_(zalloc), "hg.pSfs.1", HG_(free), sizeof(SO*) );
2601 VG_(addToXA)( xa, &so );
2606 static SO* mb_pop_SO_for_sem ( void* sem ) {
2609 SO* so;
2620 so = *(SO**)VG_(indexXA)( xa, sz-1 );
2621 tl_assert(so);
2623 return so;
2633 SO* so;
2641 /* Empty out the semaphore's SO stack. This way of doing it is
2644 so = mb_pop_SO_for_sem( sem );
2645 if (!so) break;
2646 libhb_so_dealloc(so);
2661 SO* so;
2671 /* Empty out the semaphore's SO stack. This way of doing it is
2674 so = mb_pop_SO_for_sem( sem );
2675 if (!so) break;
2676 libhb_so_dealloc(so);
2693 so = libhb_so_alloc();
2694 libhb_so_send( hbthr, so, True/*strong send*/ );
2695 push_SO_for_sem( sem, so );
2701 /* 'tid' has posted on 'sem'. Create a new SO, do a strong send to
2702 it (iow, write our VC into it, then tick ours), and push the SO
2710 SO* so;
2725 so = libhb_so_alloc();
2726 libhb_so_send( hbthr, so, True/*strong send*/ );
2727 push_SO_for_sem( sem, so );
2732 /* A sem_wait(sem) completed successfully. Pop the posting-SO for
2733 the 'sem' from this semaphore's SO-stack, and do a strong recv
2738 SO* so;
2750 so = mb_pop_SO_for_sem( sem );
2752 if (so) {
2756 libhb_so_recv( hbthr, so, True/*strong recv*/ );
2757 libhb_so_dealloc(so);
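
The semaphore fragments describe a per-semaphore stack of SOs: each post allocates an SO, strong-sends on it, and pushes it; each completed wait pops one SO, strong-recvs from it, and frees it. A minimal sketch of the post/wait pair, reusing the push_SO_for_sem / mb_pop_SO_for_sem helpers shown above (the handler names are illustrative):

    /* sem_post: make a fresh SO carrying the poster's clock and stack it */
    static void on_sem_post ( Thread* thr, void* sem ) {
       Thr* hbthr = thr->hbthr;
       SO*  so    = libhb_so_alloc();
       libhb_so_send( hbthr, so, True/*strong send*/ );
       push_SO_for_sem( sem, so );
    }

    /* sem_wait completed: pop a posting SO and acquire its dependency */
    static void on_sem_wait_done ( Thread* thr, void* sem ) {
       Thr* hbthr = thr->hbthr;
       SO*  so    = mb_pop_SO_for_sem( sem );  /* NULL if no post recorded */
       if (so) {
          libhb_so_recv( hbthr, so, True/*strong recv*/ );
          /* each post is consumed by exactly one wait, so free the SO */
          libhb_so_dealloc(so);
       } else {
          /* wait completed with no recorded post; the fragments suggest
             this is reported rather than guessed at */
       }
    }
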
2898 associated with the barrier, so as to avoid any possible
2923 /* Maybe we shouldn't do this; just let it persist, so that when it
2939 receive from it back to all threads, so that their VCs are a copy
2945 SO* so = libhb_so_alloc();
2954 libhb_so_send( hbthr, so, False/*weak send*/ );
2960 libhb_so_recv( hbthr, so, True/*strong recv*/ );
2967 SO would be better? */
2968 libhb_so_dealloc(so);
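
The barrier fragments (around source lines 2939-2968) describe the cross-sync step: one SO, a weak send from every arriving thread so their clocks are merged into it, then a strong recv back to every thread so each leaves with a copy of the merged clock. A minimal sketch under the assumption that the waiting threads are held in an XArray of Thread*, as the surrounding code suggests:

    /* All waiting threads have arrived at the barrier. */
    static void barrier_cross_sync ( XArray* waiting /* of Thread* */ )
    {
       Word i, n = VG_(sizeXA)( waiting );
       SO* so = libhb_so_alloc();
       for (i = 0; i < n; i++) {
          Thread* t = *(Thread**)VG_(indexXA)( waiting, i );
          /* weak send: merge this thread's clock into the SO */
          libhb_so_send( t->hbthr, so, False/*weak send*/ );
       }
       for (i = 0; i < n; i++) {
          Thread* t = *(Thread**)VG_(indexXA)( waiting, i );
          /* strong recv: every thread picks up the merged clock */
          libhb_so_recv( t->hbthr, so, True/*strong recv*/ );
       }
       /* the SO was only needed for this one crossing */
       libhb_so_dealloc(so);
    }
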
2991 thread is currently in this function and so has not yet arrived
3008 here our data structures so as to indicate that the threads have
3102 moving on from the barrier in this situation, so just note
3108 the barrier, so need to mess with dep edges in the same way
3125 /* A mapping from arbitrary UWord tag to the SO associated with it.
3131 /* UWord -> SO* */
3142 static SO* map_usertag_to_SO_lookup_or_alloc ( UWord usertag ) {
3147 return (SO*)val;
3149 SO* so = libhb_so_alloc();
3150 VG_(addToFM)( map_usertag_to_SO, usertag, (UWord)so );
3151 return so;
3160 // SO* so = (SO*)valW;
3162 // tl_assert(so);
3163 // libhb_so_dealloc(so);
3173 USERTAG. Bind USERTAG to a real SO if it is not already so
3174 bound, and do a 'strong send' on the SO. This is later used by
3175 other thread(s) which successfully 'receive' from the SO,
3178 SO* so;
3187 so = map_usertag_to_SO_lookup_or_alloc( usertag );
3188 tl_assert(so);
3190 libhb_so_send( thr->hbthr, so, True/*strong_send*/ );
3198 USERTAG. Bind USERTAG to a real SO if it is not already so
3199 bound. If the SO has at some point in the past been 'sent' on,
3203 SO* so;
3212 so = map_usertag_to_SO_lookup_or_alloc( usertag );
3213 tl_assert(so);
3215 /* Acquire a dependency on it. If the SO has never so far been
3216 sent on, then libhb_so_recv will do nothing. So we're safe
3217 regardless of SO's history. */
3218 libhb_so_recv( thr->hbthr, so, True/*strong_recv*/ );
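
The user-tag fragments describe the client-request side of the same machinery: an arbitrary UWord tag is bound to an SO in a WordFM, a "happens-before" annotation strong-sends on it, and a "happens-after" annotation strong-recvs; since a recv on a never-sent SO is a no-op, the recv side is safe whatever the history. A minimal sketch, assuming the standard Valgrind WordFM calls and illustrative handler names:

    static WordFM* map_usertag_to_SO = NULL;   /* UWord -> SO*, as above */

    static SO* usertag_to_SO ( UWord usertag ) {
       UWord key, val;
       SO* so;
       if (!map_usertag_to_SO)
          map_usertag_to_SO = VG_(newFM)( HG_(zalloc), "hg.sketch.1",
                                          HG_(free), NULL/*unboxed UWord keys*/ );
       if (VG_(lookupFM)( map_usertag_to_SO, &key, &val, usertag ))
          return (SO*)val;
       so = libhb_so_alloc();
       VG_(addToFM)( map_usertag_to_SO, usertag, (UWord)so );
       return so;
    }

    /* "everything before this point happens-before 'usertag'" */
    static void user_happens_before ( Thread* thr, UWord usertag ) {
       libhb_so_send( thr->hbthr, usertag_to_SO(usertag), True/*strong_send*/ );
    }

    /* "this point happens-after whatever was sent on 'usertag'";
       a recv on a never-sent SO does nothing, so call order is harmless */
    static void user_happens_after ( Thread* thr, UWord usertag ) {
       libhb_so_recv( thr->hbthr, usertag_to_SO(usertag), True/*strong_recv*/ );
    }
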
3229 The graph is structured so that if L1 --*--> L2 then L1 must be
3263 where that edge was created, so that we can show the user later if
3342 Also, we need to know whether the edge was already present so as
3344 can compute presentF and presentR essentially for free, so may
3525 'src' :-), so don't bother to try */
3565 complaint if so. Also, update the ordering graph appropriately.
3595 /* So we managed to find a path lk --*--> other in the graph,
3601 points for this edge, so we can show the user. */
3675 we're deleting stuff. So their acquired_at fields may
4008 /* So the effective address is in 'addr' now. */
4084 if so return True. Otherwise (and in case of any doubt) return
4174 lot of races which we just expensively suppress, so
4348 /* Anything that gets past the above check is one of ours, so we
4408 binding between that and the associated Thread*, so we can
4426 /* So now we know that (pthread_t)args[1] is associated with
4630 /* record_error_Misc strdup's buf, so this is safe: */
4636 /* UWord arbitrary-SO-tag */
4641 /* UWord arbitrary-SO-tag */