
Lines Matching defs:to

27    along with this program; if not, write to the Free Software
56 /* min of L1 and LL cache line sizes. This only gets set to a
134 (jump-to-register style ones). */
201 the sense that no IR has yet been generated to do the relevant
202 helper calls. The BB is scanned top to bottom and memory events
203 are added to the end of the list, merging with the most recent
212 At various points the list will need to be flushed, that is, IR
215 when there is no space to add a new event.
217 If we require the simulation statistics to be up to date with
218 respect to possible memory exceptions, then the list would have to
222 Flushing the list consists of walking it start to end and emitting
224 appear. It may be possible to emit a single call for two adjacent
225 events in order to reduce the number of helper function calls made.
226 For example, it could well be profitable to handle two adjacent Ir
240 Ev_Bi, // branch indirect (to unknown destination)
299 /* Up to this many unnotified events are allowed. Number is
300 arbitrary. Larger numbers allow more event merging to occur, but
301 potentially induce more spilling due to extending live ranges of
307 Mostly to avoid passing loads of parameters everywhere. */
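Source lines 201-307 above sketch Callgrind's instrumentation-time event queue: memory events are appended while the superblock is scanned top to bottom, merged with the most recent entry where possible, and flushed as helper calls when the queue fills or control flow forces it. The following is a minimal illustration of that buffering pattern in plain C; the names (Event, EV_CAP, add_event, flush_events, emit_helper_call) are invented for the sketch and are not the Callgrind identifiers.

    /* Hypothetical event kinds, loosely mirroring the Ev_* tags above. */
    typedef enum { EV_IR, EV_DR, EV_DW, EV_BC, EV_BI } EvKind;

    typedef struct { EvKind kind; unsigned long addr; int size; } Event;

    /* Up to this many unnotified events are buffered before a flush.
       A larger cap allows more merging but lengthens live ranges. */
    #define EV_CAP 16

    static Event ev_queue[EV_CAP];
    static int   ev_used = 0;

    /* Stand-in for "emit IR that calls the counting helper for e". */
    static void emit_helper_call(const Event *e) { (void)e; }

    /* Flush: walk the list start to end and emit one call per event.
       A real flusher may combine adjacent events into a single call. */
    static void flush_events(void)
    {
       for (int i = 0; i < ev_used; i++)
          emit_helper_call(&ev_queue[i]);
       ev_used = 0;
    }

    /* Append an event, flushing first when there is no space left. */
    static void add_event(EvKind kind, unsigned long addr, int size)
    {
       if (ev_used == EV_CAP)
          flush_events();
       ev_queue[ev_used++] = (Event){ kind, addr, size };
    }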
440 /* generate IR to notify event i and possibly the ones
455 /* Decide on helper fn to call and args to pass it, and advance
467 Ev_Ir, and so these Dr must pertain to the
468 immediately preceding Ir. Same applies to analogous
551 /* Branch to an unknown destination */
647 /* Is it possible to merge this write with the preceding read? */
687 rare, this is not thought likely to cause any noticeable
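Lines 440-687 deal with the flush itself: picking the helper function and arguments for each event, and folding two adjacent events into one call where that is profitable, e.g. a data write that directly follows a read of the same location becomes a single "modify" event. A rough, self-contained illustration of that merge test (names again invented):

    typedef enum { DEV_DR, DEV_DW, DEV_DM } DEvKind;   /* read, write, modify */
    typedef struct { DEvKind kind; unsigned long addr; int size; } DEvent;

    /* Is it possible to merge this write with the preceding read?
       Only when the previous queued event reads the same address with
       the same size; the pair then collapses into one "modify" event,
       halving the number of helper calls for that access. */
    static int try_merge_write(DEvent *q, int n_prev, const DEvent *wr)
    {
       if (n_prev > 0
           && q[n_prev-1].kind == DEV_DR
           && q[n_prev-1].addr == wr->addr
           && q[n_prev-1].size == wr->size) {
          q[n_prev-1].kind = DEV_DM;   /* read + write -> modify */
          return 1;                    /* caller drops the write event */
       }
       return 0;
    }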
856 /* First pass over a BB to instrument, counting instructions and jumps
857 * This is needed for the size of the BB struct to allocate
872 // Ist_Exit has to be ignored in preamble code, before first IMark:
874 // nothing to do with client code
899 /* if the last instruction of the BB conditionally jumps to the next instruction
917 /* add helper call to setup_bbcc, with pointer to BB struct as argument
921 * - current_bbcc has a pointer to the BBCC of the last executed BB
934 * set current_bbcc to BBCC that gets the costs for this BB execution
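Lines 856-934 describe the two-step structure of instrumentation: a first pass over the block only counts instructions and conditional exits so that the BB descriptor can be allocated with the right size, and the instrumented block then begins with a helper call to setup_bbcc, which makes current_bbcc point at the cost centre that receives this execution's costs. A schematic version of the counting pass, using stand-in types rather than the real VEX IRSB/IRStmt:

    /* Minimal stand-ins for the statement kinds that matter here. */
    typedef enum { ST_IMARK, ST_EXIT, ST_OTHER } StKind;
    typedef struct { StKind kind; } Stmt;

    typedef struct { int n_instrs; int n_cond_jumps; } BBSizes;

    /* Count instructions (IMarks) and conditional exits.  Exits seen
       before the first IMark belong to preamble code, not to client
       code, and are ignored. */
    static BBSizes count_bb(const Stmt *stmts, int n)
    {
       BBSizes s = { 0, 0 };
       int seen_imark = 0;
       for (int i = 0; i < n; i++) {
          if (stmts[i].kind == ST_IMARK) {
             s.n_instrs++;
             seen_imark = 1;
          } else if (stmts[i].kind == ST_EXIT && seen_imark) {
             s.n_cond_jumps++;
          }
       }
       return s;
    }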
1033 // If Vex fails to decode an instruction, the size will be zero.
1123 was introduced, since prior to that point, the Vex
1146 /* flush events before LL, should help SC to succeed */
1154 be attributed to the LL or the SC, but it doesn't
1155 really matter since they always have to be used in
1167 * As Callgrind counts (conditional) jumps, it has to correct
1172 * (2) inversion is assumed if the branch jumps to the address of
1191 /* Stuff to widen the guard expression to a host word, so
1192 we can pass it to the branch predictor simulation
1222 /* We may never reach the next statement, so need to flush
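Lines 1146-1222 cover conditional exits: since Callgrind counts (conditional) jumps, it must undo the inversion VEX sometimes applies to a branch, and inversion is assumed when the exit jumps to the address of the instruction immediately following the branch; the guard value is also widened to a host word before being handed to the branch-predictor helper. A small sketch of the correction step only (hypothetical function, not the Callgrind code):

    #include <stdbool.h>

    /* If VEX inverted the branch, "guard true" in the IR means the
       original branch was NOT taken, so flip the outcome before it is
       counted or fed to the branch-prediction simulation. */
    static bool corrected_taken(bool guard_true,
                                unsigned long exit_target,
                                unsigned long next_instr_addr)
    {
       /* Inversion is assumed if the exit jumps to the address of the
          instruction that directly follows the branch. */
       bool inverted = (exit_target == next_instr_addr);
       return inverted ? !guard_true : guard_true;
    }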
1271 /* Deal with branches to unknown destinations. Except ignore ones
1278 break; /* boring - branch to known address */
1280 /* looks like an indirect branch (branch to unknown) */
1285 flattened, should only have tmp and const cases to
1295 * As CLG_(current_state).jmps_passed is reset to 0 in setup_bbcc,
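Lines 1271-1295 handle branches to unknown destinations: on flattened IR the jump target is either a constant or a temporary, so a constant target is the boring direct case and anything else is treated as an indirect branch (Ev_Bi). A tiny illustration of that classification, with invented types:

    typedef enum { DST_CONST, DST_TMP } DstKind;
    typedef struct { DstKind kind; unsigned long const_addr; } Dst;

    /* Constant target: boring, branch to a known address.
       Temporary target: looks like an indirect branch (to unknown). */
    static int is_indirect_branch(const Dst *dst)
    {
       return dst->kind != DST_CONST;
    }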
1369 // any reason at all: to free up space, because the guest code was
1402 /* reset call counters to current for active calls */
1480 BBCC *from, *to;
1517 to = ce->jcc->to;
1518 VG_(gdb_printf)("function-%d-%d: %s\n",t, i, to->cxt->fn[0]->name );
1606 /* internal interface to callgrind_control */
1614 // Status information to be improved ...
1671 CLG_DEBUG(2, "Client Request: toggled collection state to %s\n",
1958 /* throttle calls to CLG_(run_thread) by number of BBs executed */
1990 "=> resetting it back to 0\n");
1996 "=> resetting it back to 0\n");
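The last group of lines touches run control: collection can be toggled via a client request, calls to CLG_(run_thread) are throttled by the number of BBs executed, and a counter found below zero is reset back to 0. As a generic illustration of the throttle-and-clamp pattern only (not the Callgrind logic), one might write:

    /* Run an expensive per-thread hook only once every THROTTLE_BBS
       executed basic blocks; a counter that somehow went negative is
       defensively reset back to 0 first. */
    #define THROTTLE_BBS 5000

    static long bbs_since_check = 0;

    static void count_bb_and_maybe_check(void (*hook)(void))
    {
       if (bbs_since_check < 0)
          bbs_since_check = 0;          /* => resetting it back to 0 */
       if (++bbs_since_check >= THROTTLE_BBS) {
          bbs_since_check = 0;
          hook();
       }
    }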