Lines Matching refs:SO (only in helgrind)

75 // from our mappings, so that the associated SO can be freed up
260 acquired, so as to produce better lock-order error messages. */
316 acquired, so as to produce better lock-order error messages. */
788 found. So assert it is non-null; that in effect asserts that we
1070 return both of them. Also update 'thr' so it references the new
1108 invalid states. Requests to do so are bugs in libpthread, since
1131 /* So the lock is already held. If held as a r-lock then
1141 /* So the lock is held in w-mode. If it's held by some other
1153 /* So the lock is already held in w-mode by 'thr'. That means this
1166 /* So we are recursively re-locking a lock we already w-hold. */
1205 invalid states. Requests to do so are bugs in libpthread, since
1230 /* So the lock is already held. If held as a w-lock then
1288 the client is trying to unlock it. So complain, then ignore
1327 attempt will fail. So just complain and do nothing
1349 /* We still hold the lock. So either it's a recursive lock
1550 // (VTS) leaks in libhb. So force them to NoAccess, so that all
1620 /* Record where the parent is so we can later refer to this in
1627 glibc sources confirms this. So we ask for a snapshot to be
1663 exit and so we have to pretty much treat it as if it was still
1667 finished, and so we need to consider the possibility that it
1673 sync event. So in any case, just let the thread exit. On NPTL,
1707 threaded programs), so we have to clean up map_threads to remove
1738 SO* so;
1743 so = libhb_so_alloc();
1744 tl_assert(so);
1746 doesn't actually exist any more, so we don't want _so_send to
1748 libhb_so_send(hbthr_q, so, True/*strong_send*//*?!? wrt comment above*/);
1749 libhb_so_recv(hbthr_s, so, True/*strong_recv*/);
1750 libhb_so_dealloc(so);
1888 the freed memory, and so marking no access is in theory useless.
2035 Assume it never got used, and so we don't need to do anything
2118 // only called if the real library call succeeded - so mutex is sane
2150 // only called if the real library call succeeded - so mutex is sane
2189 /* it's held. So do the normal pre-unlock actions, as copied
2239 /* A mapping from CV to (the SO associated with it, plus some
2241 signalled/broadcasted upon, we do a 'send' into the SO, and when a
2242 wait on it completes, we do a 'recv' from the SO. This is believed
2247 /* .so is the SO for this CV.
2257 SO* so; /* libhb-allocated SO */
2281 SO* so = libhb_so_alloc();
2283 cvi->so = so;
2314 tl_assert(cvi->so);
2325 libhb_so_dealloc(cvi->so);
2329 /* We have no record of this CV. So complain about it
2343 cond to a SO if it is not already so bound, and 'send' on the
2344 SO. This is later used by other thread(s) which successfully
2346 from the SO, thereby acquiring a dependency on this signalling
2361 tl_assert(cvi->so);
2378 // as soon as the signalling is done, and so there needs to be
2409 // So just keep quiet in this circumstance.
2416 libhb_so_send( thr->hbthr, cvi->so, True/*strong_send*/ );
2472 tl_assert(cvi->so);
2493 the SO for this cond, and 'recv' from it so as to acquire a
2520 tl_assert(cvi->so);
2523 if (!timeout && !libhb_so_everSent(cvi->so)) {
2533 libhb_so_recv( thr->hbthr, cvi->so, True/*strong_recv*/ );
2550 tl_assert (cvi->so);
2558 associated with the CV, so as to avoid any possible resource
2663 // only called if the real library call succeeded - so mutex is sane
2697 // only called if the real library call succeeded - so mutex is sane
2717 operation is done on a semaphore (unlocking, essentially), a new SO
2720 SO), and the SO is pushed on the semaphore's stack.
2723 semaphore, we pop a SO off the semaphore's stack (which should be
2746 /* sem_t* -> XArray* SO* */
2756 static void push_SO_for_sem ( void* sem, SO* so ) {
2759 tl_assert(so);
2765 VG_(addToXA)( xa, &so );
2767 xa = VG_(newXA)( HG_(zalloc), "hg.pSfs.1", HG_(free), sizeof(SO*) );
2768 VG_(addToXA)( xa, &so );
2773 static SO* mb_pop_SO_for_sem ( void* sem ) {
2776 SO* so;
2787 so = *(SO**)VG_(indexXA)( xa, sz-1 );
2788 tl_assert(so);
2790 return so;
2800 SO* so;
2808 /* Empty out the semaphore's SO stack. This way of doing it is
2811 so = mb_pop_SO_for_sem( sem );
2812 if (!so) break;
2813 libhb_so_dealloc(so);
2828 SO* so;
2838 /* Empty out the semaphore's SO stack. This way of doing it is
2841 so = mb_pop_SO_for_sem( sem );
2842 if (!so) break;
2843 libhb_so_dealloc(so);
2860 so = libhb_so_alloc();
2861 libhb_so_send( hbthr, so, True/*strong send*/ );
2862 push_SO_for_sem( sem, so );
2868 /* 'tid' has posted on 'sem'. Create a new SO, do a strong send to
2869 it (iow, write our VC into it, then tick ours), and push the SO
2877 SO* so;
2892 so = libhb_so_alloc();
2893 libhb_so_send( hbthr, so, True/*strong send*/ );
2894 push_SO_for_sem( sem, so );
2899 /* A sem_wait(sem) completed successfully. Pop the posting-SO for
2900 the 'sem' from this semaphore's SO-stack, and do a strong recv
2905 SO* so;
2917 so = mb_pop_SO_for_sem( sem );
2919 if (so) {
2923 libhb_so_recv( hbthr, so, True/*strong recv*/ );
2924 libhb_so_dealloc(so);
3062 associated with the barrier, so as to avoid any possible
3087 /* Maybe we shouldn't do this; just let it persist, so that when it
3103 receive from it back to all threads, so that their VCs are a copy
3109 SO* so = libhb_so_alloc();
3118 libhb_so_send( hbthr, so, False/*weak send*/ );
3124 libhb_so_recv( hbthr, so, True/*strong recv*/ );
3131 SO would be better? */
3132 libhb_so_dealloc(so);
3155 thread is currently in this function and so has not yet arrived
3172 here our data structures so as to indicate that the threads have
3266 moving on from the barrier in this situation, so just note
3272 the barrier, so need to mess with dep edges in the same way
3289 /* A mapping from arbitrary UWord tag to the SO associated with it.
3295 /* UWord -> SO* */
3305 static SO* map_usertag_to_SO_lookup_or_alloc ( UWord usertag ) {
3310 return (SO*)val;
3312 SO* so = libhb_so_alloc();
3313 VG_(addToFM)( map_usertag_to_SO, usertag, (UWord)so );
3314 return so;
3322 SO* so = (SO*)valW;
3324 tl_assert(so);
3325 libhb_so_dealloc(so);
3335 USERTAG. Bind USERTAG to a real SO if it is not already so
3336 bound, and do a 'weak send' on the SO. This joins the vector
3338 in the SO. The resulting SO vector clocks are later used by
3339 other thread(s) which successfully 'receive' from the SO,
3341 previously signalled on this SO. */
3343 SO* so;
3352 so = map_usertag_to_SO_lookup_or_alloc( usertag );
3353 tl_assert(so);
3355 libhb_so_send( thr->hbthr, so, False/*!strong_send*/ );
3363 USERTAG. Bind USERTAG to a real SO if it is not already so
3364 bound. If the SO has at some point in the past been 'sent' on,
3368 SO* so;
3377 so = map_usertag_to_SO_lookup_or_alloc( usertag );
3378 tl_assert(so);
3380 /* Acquire a dependency on it. If the SO has never so far been
3381 sent on, then libhb_so_recv will do nothing. So we're safe
3382 regardless of SO's history. */
3383 libhb_so_recv( thr->hbthr, so, True/*strong_recv*/ );
3391 SO is associated with USERTAG, then the association is removed
3392 and all resources associated with SO are freed. Importantly,
3393 that frees up any VTSs stored in SO. */
3455 The graph is structured so that if L1 --*--> L2 then L1 must be
3489 where that edge was created, so that we can show the user later if
3609 // to page out, and so the garbage collected version was much faster.
3641 Also, we need to know whether the edge was already present so as
3643 can compute presentF and presentR essentially for free, so may
3751 /* deleting edges can increase nr of WS so check for gc. */
3846 'src' :-), so don't bother to try */
3886 complaint if so. Also, update the ordering graph appropriately.
3913 /* So we managed to find a path lk --*--> other in the graph,
3919 points for this edge, so we can show the user. */
3970 So, there is no laog_exposition (fCA, fBC) as no thread ever
4056 we're deleting stuff. So their acquired_at fields may
4445 /* So the effective address is in 'addr' now. */
4526 so can't possibly be a heap access, and so can be skipped.
4586 if so return True. Otherwise (and in case of any doubt) return
4685 lot of races which we just expensively suppress, so
5013 /* Anything that gets past the above check is one of ours, so we
5088 binding between that and the associated Thread*, so we can
5106 /* So now we know that (pthread_t)args[1] is associated with
5420 /* record_error_Misc strdup's buf, so this is safe: */
5426 /* UWord arbitrary-SO-tag */
5431 /* UWord arbitrary-SO-tag */
5436 /* UWord arbitrary-SO-tag */