
Lines Matching defs:Load

96     setOperationAction(ISD::LOAD, VT, Promote);
97 AddPromotedToType (ISD::LOAD, VT, PromotedLdStVT);
571 setTargetDAGCombine(ISD::LOAD);
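
A minimal sketch of what the calls at lines 96-97 and 571 set up, assuming a hypothetical MyTargetLowering constructor helper: small vector types are promoted so their loads and stores reuse a wider container type's instructions, and a DAG-combine hook is registered for ISD::LOAD.

    // Sketch only; "MyTargetLowering" and "addTypeForVector" are hypothetical.
    void MyTargetLowering::addTypeForVector(MVT VT, MVT PromotedLdStVT) {
      if (VT != PromotedLdStVT) {
        // Legalize loads/stores of VT by bit-converting them to PromotedLdStVT.
        setOperationAction(ISD::LOAD, VT, Promote);
        AddPromotedToType(ISD::LOAD, VT, PromotedLdStVT);
        setOperationAction(ISD::STORE, VT, Promote);
        AddPromotedToType(ISD::STORE, VT, PromotedLdStVT);
      }
      // Ask SelectionDAG to call this target's PerformDAGCombine() on every
      // ISD::LOAD node, enabling combines such as the base-update folding below.
      setTargetDAGCombine(ISD::LOAD);
    }
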
641 // ARM does not have i1 sign extending load.
645 // ARM supports all 4 flavors of integer indexed load / store.
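
A hedged sketch of the hooks behind the comments at lines 641 and 645; setLoadExtAction is shown in its older two-type form (newer LLVM also takes the memory VT), and only i32 is spelled out.

    // Sketch only. i1 sign-extending loads do not exist on ARM, so they are
    // promoted; all four indexed addressing flavors are declared legal.
    setLoadExtAction(ISD::SEXTLOAD, MVT::i1, Promote);

    setIndexedLoadAction(ISD::PRE_INC,   MVT::i32, Legal);
    setIndexedLoadAction(ISD::PRE_DEC,   MVT::i32, Legal);
    setIndexedLoadAction(ISD::POST_INC,  MVT::i32, Legal);
    setIndexedLoadAction(ISD::POST_DEC,  MVT::i32, Legal);
    setIndexedStoreAction(ISD::PRE_INC,  MVT::i32, Legal);
    setIndexedStoreAction(ISD::PRE_DEC,  MVT::i32, Legal);
    setIndexedStoreAction(ISD::POST_INC, MVT::i32, Legal);
    setIndexedStoreAction(ISD::POST_DEC, MVT::i32, Legal);
    // ...repeated for the other integer types (i8, i16).
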
1151 // load / store 4 to 8 consecutive D registers.
1198 // Loads are scheduled for latency even if the instruction itinerary
1603 SDValue Load = DAG.getLoad(PtrVT, dl, Chain, AddArg,
1607 MemOpChains.push_back(Load.getValue(1));
1608 RegsToPass.push_back(std::make_pair(j, Load));
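
The snippet at lines 1603-1608 comes from call lowering. As a hedged reading: each load returns both a value and a chain token, the chain (getValue(1)) is collected so memory operations stay ordered, and the value is queued to be copied into an argument register. An annotated restatement (variable names from the matched lines, trailing getLoad arguments elided; exact signatures differ by LLVM version):

    SDValue Load = DAG.getLoad(PtrVT, dl, Chain, AddArg, /* ...pointer info... */);
    // Result 1 of a load is its output chain; collecting it in MemOpChains
    // keeps this load ordered with the other outgoing-argument memory ops.
    MemOpChains.push_back(Load.getValue(1));
    // Result 0 (the loaded value) is queued to be copied into register j.
    RegsToPass.push_back(std::make_pair(j, Load));
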
2817 // Create load node to retrieve arguments from the stack.
3095 // Create load nodes to retrieve arguments from the stack.
5131 // to the default expansion, which will generate a load from the constant
5163 // on the stack followed by a load for everything else.
5789 /// SkipLoadExtensionForVMULL - return a load of the original vector size that
5791 /// than 64 bits, an appropriate extension will be added after the load to
5793 /// because ARM does not have a sign/zero extending load for vectors.
5797 // The load already has the right type.
5804 // We need to create a zextload/sextload. We cannot just create a load
5814 /// extending load, or BUILD_VECTOR with extended elements, return the
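
Lines 5789-5814 describe SkipLoadExtensionForVMULL: vector loads cannot sign/zero-extend on ARM, so the narrow load is rebuilt as an explicit extending load. A hedged sketch of that rebuild (hypothetical names; volatile/alignment arguments omitted, and getExtLoad's parameter list differs across LLVM versions):

    // Sketch only: rebuild a narrow vector load as an explicit zext/sext load
    // whose result already has the width VMULL expects.
    LoadSDNode *LD = cast<LoadSDNode>(N);
    EVT MemVT  = LD->getMemoryVT();   // e.g. v8i8 as stored in memory
    EVT WideVT = MVT::v8i16;          // assumed type the multiply wants
    SDValue ExtLoad =
        DAG.getExtLoad(ISD::ZEXTLOAD, SDLoc(LD), WideVT, LD->getChain(),
                       LD->getBasePtr(), LD->getPointerInfo(), MemVT);
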
6215 // Monotonic load/store is legal for all targets
6219 // Acquire/Release load/store is not legal for targets without a
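
Lines 6215-6219 come from atomic load/store lowering. A sketch consistent with those comments, assuming the older unscoped AtomicOrdering enumerators used by this version of the file:

    static SDValue LowerAtomicLoadStore(SDValue Op, SelectionDAG &DAG) {
      // Monotonic load/store is legal for all targets: keep the node as-is.
      if (cast<AtomicSDNode>(Op)->getOrdering() <= Monotonic)
        return Op;
      // Stronger orderings (acquire/release) need barriers; returning an
      // empty SDValue lets the generic legalizer expand the operation instead.
      return SDValue();
    }
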
6401 // Load the address of the dispatch MBB into the jump buffer.
6894 /// Return the load opcode for a given load size. If load size >= 8,
6932 /// Emit a post-increment load operation with given size. The instructions
6939 assert(LdOpc != 0 && "Should have a load opcode");
6945 // load + update AddrIn
7038 // Select the correct opcode and register class for unit size load/store
7115 // Load an immediate to varEnd.
8475 // vmovrrd(load f64) -> (load i32), (load i32)
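
The combine noted at line 8475 splits an f64 load whose only use is a VMOVRRD into two i32 loads. A hedged sketch of how the two halves can be formed (hypothetical names such as InNode, little-endian layout, i32 pointers as on 32-bit ARM, and the newer defaulted getLoad/getConstant signatures):

    // Sketch only: low word from the original pointer, high word from the
    // pointer plus 4, chained after the first load.
    LoadSDNode *LD = cast<LoadSDNode>(InNode);
    SDValue BasePtr = LD->getBasePtr();
    SDLoc DL(LD);
    SDValue Lo = DAG.getLoad(MVT::i32, DL, LD->getChain(), BasePtr,
                             LD->getPointerInfo());
    SDValue HiPtr = DAG.getNode(ISD::ADD, DL, MVT::i32, BasePtr,
                                DAG.getConstant(4, DL, MVT::i32));
    SDValue Hi = DAG.getLoad(MVT::i32, DL, Lo.getValue(1), HiPtr,
                             LD->getPointerInfo().getWithOffset(4));
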
8557 // Load i64 elements as f64 values so that type legalization does not split
8672 // Bitcast an i64 load inserted into a vector to f64.
8748 /// NEON load/store intrinsics, and generic vector load/stores, to merge
8750 /// For generic load/stores, the memory type is assumed to be a vector.
8770 // Check that the add is independent of the load/store. Otherwise, folding
8775 // Find the new opcode for the updating load/store.
8820 case ISD::LOAD: NewOpc = ARMISD::VLD1_UPD;
8827 // Find the size of memory referenced by the load/store.
8834 assert(isStore && "Node has to be a load, a store, or an intrinsic!");
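
Lines 8748-8834 are from the base-update combine: when the pointer of a NEON or generic vector load/store is also incremented by the access size, the increment is folded into a write-back form (a plain ISD::LOAD becomes ARMISD::VLD1_UPD). A hedged sketch of the user scan that finds the foldable increment (hypothetical variable names such as MemN, AddrOpIdx, NumBytesAccessed):

    // Sketch only: look through the other users of the address for an ADD of
    // a constant equal to the number of bytes accessed; that ADD can then be
    // replaced by the extra "updated pointer" result of a _UPD node.
    SDValue Addr = MemN->getOperand(AddrOpIdx);
    for (SDNode *User : Addr.getNode()->uses()) {
      if (User == MemN || User->getOpcode() != ISD::ADD)
        continue;
      SDValue Inc = User->getOperand(User->getOperand(0) == Addr ? 1 : 0);
      ConstantSDNode *CInc = dyn_cast<ConstantSDNode>(Inc);
      if (!CInc || CInc->getZExtValue() != NumBytesAccessed)
        continue;
      // ...create the VLD1_UPD/VST1_UPD node here and replace both the
      // original memory node and the ADD with its results.
    }
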
8860 // If this is a less-than-standard-aligned load/store, change the type to
8869 // - generic load/store instructions: the alignment is specified as an
8879 assert(NumVecs == 1 && "Unexpected multi-element generic load/store.");
8880 assert(!isLaneOp && "Unexpected generic load/store lane.");
8884 // Don't set an explicit alignment on regular load/stores that we want
8886 // This matches the behavior of regular load/stores, which only get an
8894 // Create the new updating load/store node.
8940 // If this is a non-standard-aligned LOAD, the first result is the loaded
8942 if (AlignedVecTy != VecTy && N->getOpcode() == ISD::LOAD) {
8996 // numbers match the load.
9081 // If this is a legal vector load, try to combine it into a VLD1_UPD.
9825 case ISD::LOAD: return PerformLOADCombine(N, DCI);
9859 return (VT == MVT::f32) && (Opc == ISD::LOAD || Opc == ISD::STORE);
9938 if (Val.getOpcode() != ISD::LOAD)
10053 /// as the offset of the target addressing mode for load / store of the
10126 /// by AM is legal for this target, for a load/store of the specified type.
10133 // Can never fold addr of global into load/store.
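
Lines 10126-10133 are from isLegalAddressingMode. A hedged sketch of the shape of that hook (the exact signature has changed across LLVM versions, and only the global-address check named in the comment is shown):

    // Sketch only; the AddrMode fields (BaseGV, BaseOffs, HasBaseReg, Scale)
    // are the real TargetLowering::AddrMode members.
    bool MyTargetLowering::isLegalAddressingMode(const AddrMode &AM,
                                                 Type *Ty) const {
      // Can never fold the address of a global into a load/store; it has to
      // be materialized into a register first.
      if (AM.BaseGV)
        return false;
      // ...range checks on AM.BaseOffs and AM.Scale for each mode follow...
      return true;
    }
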
10270 // FIXME: Use VLDM / VSTM to emulate indexed FP load / store.
10301 /// can be legally represented as pre-indexed load / store address.
10340 /// combined with a load / store to form a post-indexed load / store.
10374 // Swap base ptr and offset to catch more post-index load / store when
10380 // Post-indexed load / store update the base pointer.
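
Lines 10301-10380 belong to the pre/post-indexed addressing hooks. A hedged sketch of the post-indexed hook's contract (hypothetical body; the real ARM implementation also validates offset ranges, handles vector and floating-point types, and tries swapping base and offset as the comment at 10374 notes):

    // Sketch only: given memory node N and an ADD/SUB user Op of its pointer,
    // report the base, offset, and indexing mode so the DAG can fold Op into a
    // post-indexed load/store that also produces the updated base pointer.
    bool MyTargetLowering::getPostIndexedAddressParts(
        SDNode *N, SDNode *Op, SDValue &Base, SDValue &Offset,
        ISD::MemIndexedMode &AM, SelectionDAG &DAG) const {
      if (Op->getOpcode() != ISD::ADD && Op->getOpcode() != ISD::SUB)
        return false;
      Base   = Op->getOperand(0);
      Offset = Op->getOperand(1);
      AM = Op->getOpcode() == ISD::ADD ? ISD::POST_INC : ISD::POST_DEC;
      return true;   // the real hook also checks that the offset is encodable
    }
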
10865 /// materialize the FP immediate as a load from a constant pool.
10876 /// getTgtMemIntrinsic - Represent NEON load and store intrinsics as
10986 /// \brief Returns true if it is beneficial to convert a load of a constant