
Lines matching refs: so

75 // from our mappings, so that the associated SO can be freed up
248 acquired, so as to produce better lock-order error messages. */
304 acquired, so as to produce better lock-order error messages. */
750 found. So assert it is non-null; that in effect asserts that we
1032 return both of them. Also update 'thr' so it references the new
1070 invalid states. Requests to do so are bugs in libpthread, since
1093 /* So the lock is already held. If held as a r-lock then
1103 /* So the lock is held in w-mode. If it's held by some other
1115 /* So the lock is already held in w-mode by 'thr'. That means this
1128 /* So we are recursively re-locking a lock we already w-hold. */
1167 invalid states. Requests to do so are bugs in libpthread, since
1192 /* So the lock is already held. If held as a w-lock then
1250 the client is trying to unlock it. So complain, then ignore
1289 attempt will fail. So just complain and do nothing
1311 /* We still hold the lock. So either it's a recursive lock
1500 // (VTS) leaks in libhb. So force them to NoAccess, so that all
1568 /* Record where the parent is so we can later refer to this in
1575 glibc sources confirms this. So we ask for a snapshot to be
1602 exit and so we have to pretty much treat it as if it was still
1606 finished, and so we need to consider the possibility that it
1612 sync event. So in any case, just let the thread exit. On NPTL,
1646 threaded programs), so we have to clean up map_threads to remove
1681 SO* so;
1705 so = libhb_so_alloc();
1706 tl_assert(so);
1708 doesn't actually exist any more, so we don't want _so_send to
1710 libhb_so_send(hbthr_q, so, True/*strong_send*//*?!? wrt comment above*/);
1711 libhb_so_recv(hbthr_s, so, True/*strong_recv*/);
1712 libhb_so_dealloc(so);
1933 Assume it never got used, and so we don't need to do anything
2016 // only called if the real library call succeeded - so mutex is sane
2048 // only called if the real library call succeeded - so mutex is sane
2087 /* it's held. So do the normal pre-unlock actions, as copied
2137 /* A mapping from CV to (the SO associated with it, plus some
2139 signalled/broadcasted upon, we do a 'send' into the SO, and when a
2140 wait on it completes, we do a 'recv' from the SO. This is believed
2145 /* .so is the SO for this CV.
2155 SO* so; /* libhb-allocated SO */
2180 SO* so = libhb_so_alloc();
2182 cvi->so = so;
2213 tl_assert(cvi->so);
2224 libhb_so_dealloc(cvi->so);
2228 /* We have no record of this CV. So complain about it
2242 cond to a SO if it is not already so bound, and 'send' on the
2243 SO. This is later used by other thread(s) which successfully
2245 from the SO, thereby acquiring a dependency on this signalling
2260 tl_assert(cvi->so);
2277 // as soon as the signalling is done, and so there needs to be
2308 // So just keep quiet in this circumstance.
2315 libhb_so_send( thr->hbthr, cvi->so, True/*strong_send*/ );
2371 tl_assert(cvi->so);
2392 the SO for this cond, and 'recv' from it so as to acquire a
2419 tl_assert(cvi->so);
2422 if (!timeout && !libhb_so_everSent(cvi->so)) {
2432 libhb_so_recv( thr->hbthr, cvi->so, True/*strong_recv*/ );
2449 tl_assert (cvi->so);
2457 associated with the CV, so as to avoid any possible resource
2562 // only called if the real library call succeeded - so mutex is sane
2596 // only called if the real library call succeeded - so mutex is sane
2616 operation is done on a semaphore (unlocking, essentially), a new SO
2619 SO), and the SO is pushed on the semaphore's stack.
2622 semaphore, we pop a SO off the semaphore's stack (which should be
2645 /* sem_t* -> XArray* SO* */
2656 static void push_SO_for_sem ( void* sem, SO* so ) {
2659 tl_assert(so);
2665 VG_(addToXA)( xa, &so );
2667 xa = VG_(newXA)( HG_(zalloc), "hg.pSfs.1", HG_(free), sizeof(SO*) );
2668 VG_(addToXA)( xa, &so );
2673 static SO* mb_pop_SO_for_sem ( void* sem ) {
2676 SO* so;
2687 so = *(SO**)VG_(indexXA)( xa, sz-1 );
2688 tl_assert(so);
2690 return so;
2700 SO* so;
2708 /* Empty out the semaphore's SO stack. This way of doing it is
2711 so = mb_pop_SO_for_sem( sem );
2712 if (!so) break;
2713 libhb_so_dealloc(so);
2728 SO* so;
2738 /* Empty out the semaphore's SO stack. This way of doing it is
2741 so = mb_pop_SO_for_sem( sem );
2742 if (!so) break;
2743 libhb_so_dealloc(so);
2760 so = libhb_so_alloc();
2761 libhb_so_send( hbthr, so, True/*strong send*/ );
2762 push_SO_for_sem( sem, so );
2768 /* 'tid' has posted on 'sem'. Create a new SO, do a strong send to
2769 it (iow, write our VC into it, then tick ours), and push the SO
2777 SO* so;
2792 so = libhb_so_alloc();
2793 libhb_so_send( hbthr, so, True/*strong send*/ );
2794 push_SO_for_sem( sem, so );
2799 /* A sem_wait(sem) completed successfully. Pop the posting-SO for
2800 the 'sem' from this semaphore's SO-stack, and do a strong recv
2805 SO* so;
2817 so = mb_pop_SO_for_sem( sem );
2819 if (so) {
2823 libhb_so_recv( hbthr, so, True/*strong recv*/ );
2824 libhb_so_dealloc(so);
2965 associated with the barrier, so as to avoid any possible
2990 /* Maybe we shouldn't do this; just let it persist, so that when it
3006 receive from it back to all threads, so that their VCs are a copy
3012 SO* so = libhb_so_alloc();
3021 libhb_so_send( hbthr, so, False/*weak send*/ );
3027 libhb_so_recv( hbthr, so, True/*strong recv*/ );
3034 SO would be better? */
3035 libhb_so_dealloc(so);
3058 thread is currently in this function and so has not yet arrived
3075 here our data structures so as to indicate that the threads have
3169 moving on from the barrier in this situation, so just note
3175 the barrier, so need to mess with dep edges in the same way
3192 /* A mapping from arbitrary UWord tag to the SO associated with it.
3198 /* UWord -> SO* */
3209 static SO* map_usertag_to_SO_lookup_or_alloc ( UWord usertag ) {
3214 return (SO*)val;
3216 SO* so = libhb_so_alloc();
3217 VG_(addToFM)( map_usertag_to_SO, usertag, (UWord)so );
3218 return so;
3226 SO* so = (SO*)valW;
3228 tl_assert(so);
3229 libhb_so_dealloc(so);
3239 USERTAG. Bind USERTAG to a real SO if it is not already so
3240 bound, and do a 'weak send' on the SO. This joins the vector
3242 in the SO. The resulting SO vector clocks are later used by
3243 other thread(s) which successfully 'receive' from the SO,
3245 previously signalled on this SO. */
3247 SO* so;
3256 so = map_usertag_to_SO_lookup_or_alloc( usertag );
3257 tl_assert(so);
3259 libhb_so_send( thr->hbthr, so, False/*!strong_send*/ );
3267 USERTAG. Bind USERTAG to a real SO if it is not already so
3268 bound. If the SO has at some point in the past been 'sent' on,
3272 SO* so;
3281 so = map_usertag_to_SO_lookup_or_alloc( usertag );
3282 tl_assert(so);
3284 /* Acquire a dependency on it. If the SO has never so far been
3285 sent on, then libhb_so_recv will do nothing. So we're safe
3286 regardless of SO's history. */
3287 libhb_so_recv( thr->hbthr, so, True/*strong_recv*/ );
3295 SO is associated with USERTAG, then the association is removed
3296 and all resources associated with SO are freed. Importantly,
3297 that frees up any VTSs stored in SO. */
3313 The graph is structured so that if L1 --*--> L2 then L1 must be
3347 where that edge was created, so that we can show the user later if
3474 // to page out, and so the garbage collected version was much faster.
3506 Also, we need to know whether the edge was already present so as
3508 can compute presentF and presentR essentially for free, so may
3616 /* deleting edges can increase nr of WS so check for gc. */
3711 'src' :-), so don't bother to try */
3751 complaint if so. Also, update the ordering graph appropriately.
3778 /* So we managed to find a path lk --*--> other in the graph,
3784 points for this edge, so we can show the user. */
3835 So, there is no laog_exposition (fCA, fBC) as no thread ever
3921 we're deleting stuff. So their acquired_at fields may
4308 /* So the effective address is in 'addr' now. */
4389 so can't possibly be a heap access, and so can be skipped.
4449 if so return True. Otherwise (and in case of any doubt) return
4543 lot of races which we just expensively suppress, so
4815 /* Anything that gets past the above check is one of ours, so we
4875 binding between that and the associated Thread*, so we can
4893 /* So now we know that (pthread_t)args[1] is associated with
5106 /* record_error_Misc strdup's buf, so this is safe: */
5112 /* UWord arbitrary-SO-tag */
5117 /* UWord arbitrary-SO-tag */
5122 /* UWord arbitrary-SO-tag */