Searched full:accesses (Results 276 - 300 of 2582) sorted by null
<< 11 12 13 14 15 16 17 18 19 20 >>
/prebuilts/gcc/linux-x86/host/x86_64-linux-glibc2.15-4.8/x86_64-linux/include/c++/4.8/ext/ |
atomicity.h | 108 // that the compiler doesn't reorder memory accesses across the
|
/prebuilts/gcc/linux-x86/host/x86_64-w64-mingw32-4.8/x86_64-w64-mingw32/include/c++/4.8.3/ext/ |
atomicity.h | 108 // that the compiler doesn't reorder memory accesses across the
|
/prebuilts/go/darwin-x86/doc/articles/ |
race_detector.html | 10 A data race occurs when two goroutines access the same variable concurrently and at least one of the accesses is a write. 52 The report contains stack traces for conflicting accesses, as well as stacks where the involved goroutines were created. 294 To make the code safe, protect the accesses with a mutex: 346 non-atomicity of the memory accesses,
|
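The race_detector.html hit above states the core rule: a data race occurs when two goroutines access the same variable concurrently and at least one access is a write, and the fix is to protect the accesses with a mutex. The sketch below illustrates that pattern; it is not code from the Go distribution, and the `counter` type and its methods are illustrative names.

```go
package main

import (
	"fmt"
	"sync"
)

// counter guards its field with a mutex so that concurrent
// increments from many goroutines never race.
type counter struct {
	mu sync.Mutex
	n  int
}

// inc performs a mutex-protected write access.
func (c *counter) inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

// value performs a mutex-protected read access.
func (c *counter) value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.value())
}
```

Without the `mu.Lock()`/`mu.Unlock()` pairs, `go run -race` would report the conflicting accesses with stack traces, as the snippet describes; with them, the program deterministically prints 1000.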
/prebuilts/go/darwin-x86/test/ |
recover4.go | 9 // sees the most recent value of the variables it accesses.
|
/prebuilts/go/linux-x86/doc/articles/ |
race_detector.html | 10 A data race occurs when two goroutines access the same variable concurrently and at least one of the accesses is a write. 52 The report contains stack traces for conflicting accesses, as well as stacks where the involved goroutines were created. 294 To make the code safe, protect the accesses with a mutex: 346 non-atomicity of the memory accesses,
|
/prebuilts/go/linux-x86/test/ |
recover4.go | 9 // sees the most recent value of the variables it accesses.
|
/prebuilts/ndk/r16/sources/third_party/vulkan/src/layers/ |
unique_objects.h | 121 static std::mutex global_lock; // Protect map accesses and unique_id increments
|
/prebuilts/python/linux-x86/2.7.5/lib/python2.7/site-packages/sepolgen/ |
audit.py | 152 access - list of accesses that were allowed or denied 179 self.accesses = [] 187 # position one beyond the open brace. It then adds the accesses until it finds 198 self.accesses.append(recs[i]) 248 access_tuple = tuple( self.accesses) 254 self.type, self.data = audit2why.analyze(scontext, tcontext, self.tclass, self.accesses); 264 raise ValueError("Invalid permission %s\n" % " ".join(self.accesses)) 526 avc.accesses, avc, avc_type=avc.type, data=avc.data) 529 avc.accesses, avc, avc_type=avc.type, data=avc.data)
|
/system/core/property_service/libpropertyinfoserializer/ |
trie_node_arena.h | 44 // any pointers. Therefore we return an ArenaObjectPointer, which always accesses elements via
|
/system/sepolicy/prebuilts/api/28.0/private/ |
perfetto.te | 3 # This command line client accesses the privileged socket of the traced
|
/system/sepolicy/private/ |
perfetto.te | 3 # This command line client accesses the privileged socket of the traced
|
/toolchain/binutils/binutils-2.27/gas/testsuite/gas/iq2000/ |
load-hazards.exp | 61 set warnpattern "instruction implicitly accesses R31 of previous load"
|
/external/llvm/include/llvm/Transforms/Utils/ |
MemorySSA.h | 119 // \brief The base for all memory accesses. All memory accesses in a block are 242 /// \brief Represents read-only accesses to memory 310 // For debugging only. This gets used to give memory accesses pretty numbers 322 /// \brief Represents phi nodes for memory accesses. 468 /// For debugging only. This gets used to give memory accesses pretty numbers 493 /// accesses. 579 /// \brief Given two memory accesses in the same basic block, determine 694 /// disambiguate memory accesses, or they may want the nearest dominating 706 /// the instruction accesses (by skipping any def which AA can prove does no [all...] |
/art/compiler/optimizing/ |
load_store_analysis.cc | 166 // Without heap stores, this pass would act mostly as GVN on heap accesses. 171 // Don't do load/store elimination if the method has volatile field accesses or
|
/art/tools/veridex/ |
flow_analysis.h | 212 const std::map<MethodReference, std::vector<ReflectAccessInfo>>& accesses) 213 : VeriFlowAnalysis(resolver, it), accesses_(accesses) {}
|
precise_hidden_api_finder.cc | 58 void PreciseHiddenApiFinder::AddUsesAt(const std::vector<ReflectAccessInfo>& accesses, 60 for (const ReflectAccessInfo& info : accesses) {
|
/bionic/libc/arch-arm64/generic/bionic/ |
memcmp.S | 31 * ARMv8-a, AArch64, unaligned accesses. 55 accesses. */
|
/device/linaro/bootloader/arm-trusted-firmware/lib/locks/bakery/ |
bakery_lock_coherent.c | 24 * expect that accesses to the lock have the specific type required by the 34 * accesses regardless of status of address translation.
|
bakery_lock_normal.c | 24 * expect that accesses to the lock have the specific type required by the 32 * accesses regardless of status of address translation.
|
/device/linaro/bootloader/edk2/MdePkg/Include/Protocol/ |
CpuIo2.h | 112 /// Service for read and write accesses. 127 /// accesses to devices in a system.
|
/external/arm-neon-tests/ |
InitCache.s | 33 ;ORR r0, r0, #(0x1 << 4) ;Enables speculative accesses on AXI 34 ORR r0, r0, #(0x1 << 4) ;Enables speculative accesses on AXI
|
/external/compiler-rt/lib/tsan/rtl/ |
tsan_interface_java.h | 16 // For plain memory accesses and function entry/exit a JVM is intended to use 19 // For volatile memory accesses and atomic operations JVM is intended to use
|
/external/elfutils/ |
TODO | 42 All accesses to the debug sections should make sure the offsets are 43 valid. This is currently especially a problem with leb128 accesses.
|
/external/kernel-headers/original/uapi/linux/ |
prctl.h | 19 # define PR_UNALIGN_NOPRINT 1 /* silently fix up unaligned user accesses */ 30 # define PR_FPEMU_NOPRINT 1 /* silently emulate fp operations accesses */
|
/external/libcap/libcap/include/uapi/linux/ |
prctl.h | 16 # define PR_UNALIGN_NOPRINT 1 /* silently fix up unaligned user accesses */ 27 # define PR_FPEMU_NOPRINT 1 /* silently emulate fp operations accesses */
|
Completed in 2550 milliseconds