
Lines Matching full:each

46 regularly sampling the current registers on each CPU (from an interrupt handler, the
56 the task's list of mapped memory areas. Each PC value is thus converted into a tuple
64 reflect reality. In common operation, the time between each sample interrupt is regulated
74 increment once per event, and generate an interrupt on reaching some pre-defined
137 counters. For example, on x86, one numbered directory for each hardware
138 performance counter is added, with files in each for the event type,
184 they collate a subset of the available sample files, load and process each one
247 for each model type that programs the counter registers correctly
277 enable each configured counter. Again, the details of how this is
290 is forked for each CPU, which creates a perfmon context and sets the
352 each CPU on the system. Only this buffer is altered when we actually log
363 details of which task each logged sample is for, as described in the
415 the return address for each stack frame. This will only work if the code
432 At some point, we have to process the data in each CPU buffer and enter
454 that for each CPU buffer, as we only read from the tail iterator (whilst
457 intact). Then, we process each CPU buffer in turn. A CPU switch
469 iteration across the tasks' list of mapped areas. Each sample is then
552 as we process each entry; it is passed into the routines in
585 The absolute PC value is matched against each address range, and
651 Each specific sample file is a hashed collection, where the key is
677 list, we need to locate the binary file for each sample file, and then
696 each sample file is encoded in its filename. This is a low-tech
757 need to classify each sample file, and we may also need to "invert"
768 to classify each sample file into classes - the classes correspond
769 with each <command>opreport</command> column. The function that handles
770 this is <function>arrange_profiles()</function>. Each sample file
772 its class, a template is generated from the sample file. Each template
773 describes a particular class (thus, in our example above, each template
774 will have a different thread ID, and this uniquely identifies each
779 Each class has a list of "profile sets" matching that class's template.
782 the profile sets belonging to the classes, we have to name each class and
784 <function>identify_classes()</function>; each class is checked to ensure
801 re-arrangement, these dependent binary images would be opened each
802 time we need to process sample files for each program.
810 performance problem, as now we only need to open each dependent image
825 each inverted profile and make something of the data. The entry point
833 binary image (remember each inverted profile set is only for one binary
894 for each entry, the selected fields (as set by the
912 Each feature defines a set of callback handlers which can be enabled or
924 Each extended feature has an entry in the <varname>ext_feature_table</varname>
925 in <filename>opd_extended.cpp</filename>. Each entry contains a feature name,
927 used to identify a feature in the table. Each feature provides a set
941 Each feature is enabled using the OProfile daemon (oprofiled) command-line
955 Each feature is responsible for providing its own set of handlers.
1061 Then, it stores each event in the IBS virtual-counter table
1084 Unlike traditional performance events, each IBS sample can be derived into
1085 multiple IBS performance events. For each event that the user specifies,
1172 there would be two profile classes, one for each event. Or if we're on
1174 <command>opreport</command> to show results for each CPU side-by-side,
1175 there would be a profile class for each CPU.