
Lines Matching defs:Load

135     /// multiple load/stores of the same address.
624 // %val = load %ptr'
633 // %val = load %ptr'
1048 // ScalarizeMaskedLoad() translates masked load intrinsic, like
1049 // <16 x i32> @llvm.masked.load(<16 x i32>* %addr, i32 align,
1057 // br i1 %3, label %cond.load, label %else
1059 // cond.load: ; preds = %0
1061 // %5 = load i32* %4
1065 // else: ; preds = %0, %cond.load
1066 // %res.phi.else = phi <16 x i32> [ %6, %cond.load ], [ undef, %0 ]
1073 // %10 = load i32* %9
1090 assert(VecType && "Unexpected return type of masked load intrinsic");
1121 // br i1 %to_load, label %cond.load, label %else
1138 // %Elt = load i32* %EltAddr
1141 CondBlock = IfBlock->splitBasicBlock(InsertPt, "cond.load");
1146 LoadInst* Load = Builder.CreateLoad(Gep, false);
1147 VResult = Builder.CreateInsertElement(VResult, Load, Builder.getInt32(Idx));
1349 // Scalarize unsupported vector masked load
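
The block structure above (cond.load, else, phi) reduces to simple scalar semantics. A minimal plain-C++ sketch follows, assuming a 16-lane i32 vector; the function name maskedLoadScalarized and the std::array types are illustrative, not from CodeGenPrepare:

    #include <array>
    #include <cstdint>

    using Vec16 = std::array<int32_t, 16>;

    // Scalarized semantics of @llvm.masked.load: each lane whose mask
    // bit is set loads its element from memory; lanes with a clear bit
    // keep the passthru value. ScalarizeMaskedLoad() emits one
    // cond.load block per lane instead of this loop, but the result is
    // the same.
    Vec16 maskedLoadScalarized(const int32_t *addr,
                               const std::array<bool, 16> &mask,
                               const Vec16 &passthru) {
      Vec16 result = passthru;           // the phi's fallback value
      for (int idx = 0; idx < 16; ++idx)
        if (mask[idx])                   // br i1 %to_load, label %cond.load, ...
          result[idx] = addr[idx];       // %Elt = load i32* %EltAddr
      return result;
    }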
2085 /// Match - Find the maximal addressing mode that a load/store of V can fold,
2177 /// addressing computation involving I might be folded into a load/store
3018 /// mode of the machine to fold the specified instruction into a load or store
3021 /// into the load. For example, consider this code:
3027 /// load Z
3029 /// In this case, Y has multiple uses, and can be folded into the load of Z
3030 /// (yielding load [X+2]). However, doing this will cause both "X" and "X+1" to
3031 /// be live at the use(Y) line. If we don't fold Y into load Z, we use one
3036 /// X was live across 'load Z' for other reasons, we actually *would* want to
3066 // If all uses of this instruction are ultimately load/store/inlineasm's,
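
To make the matching at line 2085 concrete, here is a toy C++ model, not the real AddressingModeMatcher: it peels chained constant adds (Y = X+1, Z = Y+1) into a single base-plus-offset addressing mode, yielding the "load [X+2]" form described at line 3030. The Expr and ToyAddrMode types are invented for illustration:

    #include <cstdint>

    // Toy IR node: either a leaf register or Add(Lhs, Const). The real
    // matcher also handles muls, shifts, GEPs, and casts.
    struct Expr {
      const Expr *Lhs = nullptr;  // non-null for Add nodes
      int64_t Const = 0;          // constant addend for Add nodes
    };

    // Base-plus-offset subset of the pass's ExtAddrMode.
    struct ToyAddrMode {
      const Expr *BaseReg = nullptr;
      int64_t BaseOffs = 0;
    };

    // Fold as much of the address computation as possible into the
    // addressing mode: Y = X+1, Z = Y+1 matches to BaseReg = X,
    // BaseOffs = 2. Whether the fold is *profitable* is the separate
    // register-pressure question the comment at 3030 describes.
    ToyAddrMode matchAddr(const Expr *E) {
      ToyAddrMode AM;
      while (E->Lhs) {            // peel Add(Lhs, C) nodes
        AM.BaseOffs += E->Const;
        E = E->Lhs;
      }
      AM.BaseReg = E;
      return AM;
    }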
3130 /// OptimizeMemoryInst - Load and Store Instructions often have
3132 /// instruction selection will try to get the load or store to do as much
3137 /// This method is used to optimize both load/store and inline asms with memory
3247 // done this for some other load/store instr in this block. If so, reuse the
3359 // SDAG consecutive load/store merging.
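
The core move behind OptimizeMemoryInst can be sketched with the LLVM C++ API. This is a simplification under stated assumptions: the real pass rebuilds the address computation with an IRBuilder and handles multi-use addresses, while this sketch only relocates a single-use, side-effect-free instruction; sinkAddrComputation is an invented name:

    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    // SelectionDAG selects one basic block at a time, so an address
    // computation living in another block can never be folded into the
    // load/store. Placing it next to the memory instruction makes the
    // fold visible to instruction selection.
    void sinkAddrComputation(Instruction *AddrInst,
                             Instruction *MemoryInst) {
      if (AddrInst->getParent() != MemoryInst->getParent() &&
          AddrInst->hasOneUse())           // single use: moving preserves dominance
        AddrInst->moveBefore(MemoryInst);  // same block as the load/store now
    }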
3553 /// load instruction.
3554 /// If an ext(load) can be formed, it is returned via \p LI for the load
3560 /// \return true when promoting was necessary to expose the ext(load)
3565 /// %ld = load i32* %addr
3571 /// %ld = load i32* %addr
3575 /// Thanks to the promotion, we can match zext(load i32*) to i64.
3580 // Iterate over all the extensions to see if one forms an ext(load).
3582 // Check if we directly have ext(load).
3609 // We can merge only one extension into a load.
3625 // Check if it exposes an ext(load).
3630 // the load, otherwise we may degrade the code quality.
3634 // If this does not help to expose an ext(load), then roll back.
3637 // None of the extensions can form an ext(load).
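
For the simplest shape of this promotion, zext(add nuw (load, C)) at line 3565 becoming add(zext(load), C) at line 3571, a hedged LLVM C++ sketch follows. promoteThroughAdd is an invented helper; the real TypePromotionHelper also handles sext, multi-instruction chains, and the rollback mentioned at 3634:

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    // Rewrite zext(add nuw (load, C)) into add(zext(load), C) so the
    // zext sits directly on the load and can be matched as an
    // extending load. Leaves the old add/zext dead for DCE.
    Value *promoteThroughAdd(ZExtInst *Ext) {
      auto *Add = dyn_cast<BinaryOperator>(Ext->getOperand(0));
      if (!Add || Add->getOpcode() != Instruction::Add ||
          !Add->hasOneUse() || !Add->hasNoUnsignedWrap())
        return nullptr;                  // only the nuw add case is legal here
      auto *Load = dyn_cast<LoadInst>(Add->getOperand(0));
      auto *Cst = dyn_cast<ConstantInt>(Add->getOperand(1));
      if (!Load || !Cst)
        return nullptr;

      IRBuilder<> Builder(Add);
      Value *WideLoad = Builder.CreateZExt(Load, Ext->getType(), "ld.ext");
      Value *WideCst = ConstantInt::get(Ext->getType(), Cst->getZExtValue());
      Value *WideAdd = Builder.CreateAdd(WideLoad, WideCst, "add.ext");
      Ext->replaceAllUsesWith(WideAdd);  // zext(load i32*) now matches to i64
      return WideAdd;
    }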
3643 /// MoveExtToFormExtLoad - Move a zext or sext fed by a load into the same
3644 /// basic block as the load, unless conditions are unfavorable. This allows
3645 /// SelectionDAG to fold the extend into the load.
3651 // an extended load.
3657 // Look for a load being extended.
3662 assert(!HasPromoted && !LI && "If we did not match any load instruction "
3677 // If the load has other users and the truncate is not free, this probably
3701 // Move the extend into the same block as the load, so that SelectionDAG
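
The move itself, at line 3701, is small; a sketch assuming the target-support and truncate-cost checks (line 3677) have already passed. moveExtNextToLoad is an invented name:

    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    // SelectionDAG works per basic block, so a zext/sext can only be
    // folded into its feeding load when both live in the same block.
    void moveExtNextToLoad(Instruction *Ext, LoadInst *LI) {
      if (Ext->getParent() != LI->getParent())
        Ext->moveAfter(LI);  // now selectable as a single extending load
    }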
3747 // reloads just before load / store instructions.
3793 // avoid stalls on the load from memory. If the compare has more than one use