Lines Matching defs:in
5 * you may not use this file except in compliance with the License.
10 * Unless required by applicable law or agreed to in writing, software
170 // restored and then used in regular non-SlowPath code as a D register.
351 // Live registers will be restored in the catch block if caught.
438 // Live registers will be restored in the catch block if caught.
491 // In the unlucky case that the `temp` is R0, we preserve the address in `out` across
514 // The class entry address was preserved in `entry_address` thanks to kSaveEverything.
571 // In the unlucky case that the `temp` is R0, we preserve the address in `out` across
590 // The string entry address was preserved in `entry_address` thanks to kSaveEverything.
744 // location; in the latter case, the read barrier marking runtime
774 // and output in R0):
808 // barrier. The field `obj.field` in the object `obj` holding this
816 // reference (different from `ref`) in `obj.field`).
820 // location; in the latter case, the read barrier marking runtime
839 << "Unexpected instruction in read barrier marking slow path: "
854 // read barrier). The field `obj.field` in the object `obj` holding
864 // reference (different from `ref`) in `obj.field`).
868 // location; in the latter case, the read barrier marking runtime
909 << "Unexpected instruction in read barrier marking slow path: "
918 // not be IP, as we may use it to emit the reference load (in the
920 // word to still be in `temp_` after the reference load.
926 // inserted after the original load. However, in fast path based
940 // Note: the original implementation in ReadBarrier::Barrier is
988 // The offset, index and scale factor to access the reference in `obj_`.
1004 // the field `obj.field` in the object `obj` holding this reference
1012 // another object reference (different from `ref`) in `obj.field`).
1016 // location; in the latter case, the read barrier marking runtime
1057 << "Unexpected instruction in read barrier marking and field updating slow path: "
1067 // not be IP, as we may use it to emit the reference load (in the
1069 // word to still be in `temp1_` after the reference load.
1133 // update the field in the holder (`*(obj_ + field_offset)`).
1136 // another thread had concurrently changed it. In that case, the
1137 // LDREX/SUBS/ITNE sequence of instructions in the compare-and-set
1159 vixl32::Register tmp = temp2_; // Value in memory.
1217 // The offset, index and scale factor to access the reference in `obj_`.
1227 // A temporary register used in the implementation of the CAS, to
1257 // In that case, we have lost the information about the original
1275 << "Unexpected instruction in read barrier for heap reference slow path: "
1293 // Compute the actual memory offset and store it in `index`.
1324 // The initial register stored in `index_` has already been
1325 // saved in the call to art::SlowPathCode::SaveLiveRegisters
1329 // Shifting the index value contained in `index_reg` by the scale
1330 // factor (2) cannot overflow in practice, as the runtime is
1339 // In the case of the UnsafeGetObject/UnsafeGetObjectVolatile
1341 // (as in the case of ArrayGet), as it is actually an offset
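
The slow-path fragments above compute an element address as a data offset plus a scaled index, with the scale factor 2 (4-byte heap references) mentioned in the comments. A minimal plain-C++ sketch of that arithmetic; the constant names and the 12-byte data offset are illustrative assumptions, not values taken from the runtime:

#include <cstdint>

// Illustrative constants: 4-byte references (scale factor 2) and an assumed
// 12-byte object-array data offset; neither name comes from the runtime.
constexpr uint32_t kRefScaleFactor = 2;
constexpr uint32_t kAssumedArrayDataOffset = 12;

uint32_t ElementOffset(uint32_t index) {
  // offset = data_offset + (index << scale); the shift cannot overflow in
  // practice because the runtime bounds the maximum array length.
  return kAssumedArrayDataOffset + (index << kRefScaleFactor);
}
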
1435 << "Unexpected instruction in read barrier for GC root slow path: "
1503 // "Meaning (floating-point)" column in the table A8-1 of the ARMv7 reference manual.
1548 // Saves the register on the stack. Returns the size taken on the stack.
1580 const Operand in = kind == HInstruction::kAnd
1584 __ Mov(out, in);
1756 // 0.0 is the only immediate that can be encoded directly in
1760 // specify that in a floating-point comparison, positive zero
2038 // is in a low register (the other half is read outside an IT block), and
2039 // the constant fits in an 8-bit unsigned integer, so that a 16-bit CMP
2046 // TODO(VIXL): The rest of the checks are there to keep the backend in sync with
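
The two comment fragments above describe an encodability check: a 16-bit Thumb CMP (immediate) needs a low register and an 8-bit unsigned constant. A minimal plain-C++ sketch of that predicate; the function name is hypothetical, not the backend's:

#include <cstdint>

// Hypothetical helper: a 16-bit Thumb CMP (immediate) requires Rn in R0-R7
// and an immediate that fits in 8 unsigned bits.
bool Fits16BitCmpImmediate(uint32_t reg_code, uint32_t constant) {
  return reg_code < 8u && constant <= 0xffu;
}
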
2332 vixl32::Register in = InputRegisterAt(cond, 0);
2347 if (out.IsLow() && out.Is(in)) {
2367 codegen->GenerateConditionWithZero(condition, out, in);
2385 in = RegisterFrom(right);
2391 __ Subs(out, in, operand);
2401 __ Sub(out, in, operand);
2446 // For constants, we also check that the output is in one or two low registers,
2447 // and that the constants fit in an 8-bit unsigned integer, so that a 16-bit
2556 // When doing a BX to an address we need to have the lower bit set to 1 in T32.
2626 // but in the range.
2680 // do this in HCurrentMethod, as the instruction might have been removed
2681 // in the SSA graph.
2885 // TODO(VIXL): Maybe refactor to have the 'move' implementation here and use it in
2886 // `ParallelMoveResolverARMVIXL::EmitMove`, as is done in the `arm64` backend.
2910 // blx in T32 has only a 16-bit encoding, which is why a stricter check for the scope is used.
3319 // MaybeRecordNativeDebugInfo is already called implicitly in CodeGenerator::Compile.
3327 // callers may not specify it, in which case the method will use a scratch
3331 vixl32::Register in,
3337 if (!temp.IsValid() || (out.IsLow() && !out.Is(in))) {
3341 // Avoid 32-bit instructions if possible; note that `in` and `temp` must be
3343 if (in.IsLow() && temp.IsLow() && !in.Is(temp)) {
3344 // temp = - in; only 0 sets the carry flag.
3345 __ Rsbs(temp, in, 0);
3347 if (out.Is(in)) {
3348 std::swap(in, temp);
3351 // out = - in + in + carry = carry
3352 __ Adc(out, temp, in);
3354 // If `in` is 0, then it has 32 leading zeros, and less than that otherwise.
3355 __ Clz(out, in);
3356 // Any number less than 32 logically shifted right by 5 bits results in 0;
3367 if (out.Is(in)) {
3368 if (!temp.IsValid() || in.Is(temp)) {
3375 // temp = in - 1; only 0 does not set the carry flag.
3376 __ Subs(temp, in, 1);
3377 // out = in + ~temp + carry = in + (-(in - 1) - 1) + carry = in - in + 1 - 1 + carry = carry
3378 __ Sbc(out, in, temp);
3382 __ Mvn(out, in);
3383 in = out;
3387 __ Lsr(out, in, 31);
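
The GenerateConditionWithZero fragments above rely on three branch-free identities: CLZ of zero is 32, so the leading-zero count shifted right by 5 is exactly `(in == 0)`; RSBS/ADC leaves the carry of `0 - in` in `out`; and SUBS/SBC leaves the carry of `in - 1` in `out`. A minimal plain-C++ check of those identities, independent of the VIXL backend; the function names are illustrative:

#include <cassert>
#include <cstdint>

// CLZ trick: only 0 has 32 leading zeros; any smaller count shifted right
// by 5 becomes 0, so the result is exactly (in == 0).
uint32_t IsZeroViaClz(uint32_t in) {
  uint32_t clz = (in == 0u) ? 32u : static_cast<uint32_t>(__builtin_clz(in));
  return clz >> 5;
}

// Carry of an ARM subtraction a - b: set when no borrow occurs (a >= b).
uint32_t SubCarry(uint32_t a, uint32_t b) { return a >= b ? 1u : 0u; }

uint32_t IsZeroViaCarry(uint32_t in) {
  // RSBS temp, in, #0  =>  temp = 0 - in, C = (0 >= in) = (in == 0)
  uint32_t temp = 0u - in;
  uint32_t carry = SubCarry(0u, in);
  // ADC out, temp, in  =>  out = temp + in + C = -in + in + C = C
  return temp + in + carry;
}

uint32_t IsNonZeroViaCarry(uint32_t in) {
  // SUBS temp, in, #1  =>  temp = in - 1, C = (in >= 1) = (in != 0)
  uint32_t temp = in - 1u;
  uint32_t carry = SubCarry(in, 1u);
  // SBC out, in, temp  =>  out = in + ~temp + C = in - (in - 1) - 1 + C = C
  return in + ~temp + carry;
}

int main() {
  for (uint32_t v : {0u, 1u, 2u, 31u, 32u, 0x80000000u, 0xffffffffu}) {
    assert(IsZeroViaClz(v) == (v == 0u ? 1u : 0u));
    assert(IsZeroViaCarry(v) == (v == 0u ? 1u : 0u));
    assert(IsNonZeroViaCarry(v) == (v != 0u ? 1u : 0u));
  }
  return 0;
}
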
3406 // Handle the long/FP comparisons made in instruction simplification.
3746 // However this is not required in practice, as this is an
3750 // concurrent copying collector may not in the future).
3765 // Set the hidden (in r12) argument. It is done here, right before a BLX to prevent other
3784 // blx in T32 has only a 16-bit encoding, which is why a stricter check for the scope is used.
3832 Location in = locations->InAt(0);
3839 // out.lo = 0 - in.lo (and update the carry/borrow (C) flag)
3840 __ Rsbs(LowRegisterFrom(out), LowRegisterFrom(in), 0);
3842 // instruction here, as it does not exist in the Thumb-2
3848 // out.hi = out.hi - in.hi
3849 __ Sub(HighRegisterFrom(out), HighRegisterFrom(out), HighRegisterFrom(in));
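
The long-negation fragments above avoid RSC, which Thumb-2 lacks, by first materializing the carry of `0 - in.lo` into the high word and then subtracting `in.hi`. A minimal plain-C++ model of that word-wise negation; the function name is illustrative:

#include <cassert>
#include <cstdint>

uint64_t NegateLongViaWords(uint32_t lo, uint32_t hi) {
  uint32_t out_lo = 0u - lo;              // RSBS: carry set iff lo == 0
  uint32_t carry = (lo == 0u) ? 1u : 0u;
  uint32_t out_hi = 0u - 1u + carry;      // SBC r, r, r  =>  C - 1 (0 or 0xffffffff)
  out_hi -= hi;                           // SUB out.hi, out.hi, in.hi
  return (static_cast<uint64_t>(out_hi) << 32) | out_lo;
}

int main() {
  for (uint64_t u : {uint64_t{0}, uint64_t{1}, uint64_t{0xffffffff},
                     uint64_t{1} << 32, uint64_t{1} << 63}) {
    uint64_t expected = 0u - u;  // two's-complement negation on unsigned, well defined
    assert(NegateLongViaWords(static_cast<uint32_t>(u),
                              static_cast<uint32_t>(u >> 32)) == expected);
  }
  return 0;
}
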
4082 Location in = locations->InAt(0);
4091 __ Sbfx(OutputRegister(conversion), LowRegisterFrom(in), 0, 8);
4112 __ Sbfx(OutputRegister(conversion), LowRegisterFrom(in), 0, 16);
4134 if (in.IsRegisterPair()) {
4135 __ Mov(OutputRegister(conversion), LowRegisterFrom(in));
4136 } else if (in.IsDoubleStackSlot()) {
4140 in.GetStackIndex());
4142 DCHECK(in.IsConstant());
4143 DCHECK(in.GetConstant()->IsLongConstant());
4144 int64_t value = in.GetConstant()->AsLongConstant()->GetValue();
4160 __ Vcvt(S32, F64, temp_s, DRegisterFrom(in));
4181 DCHECK(in.IsRegister());
4209 __ Ubfx(OutputRegister(conversion), LowRegisterFrom(in), 0, 16);
4248 __ Vcvt(F32, F64, OutputSRegister(conversion), DRegisterFrom(in));
4273 vixl32::Register low = LowRegisterFrom(in);
4274 vixl32::Register high = HighRegisterFrom(in);
4751 // Most remainders are implemented in the runtime.
4800 // The runtime helper puts the output in R2,R3.
4932 vixl32::Register in = InputRegisterAt(ror, 0);
4938 // so map all rotations to a positive equivalent in that range.
4939 // (e.g. left *or* right by -2 bits == 30 bits in the same direction.)
4944 __ Ror(out, in, rot);
4945 } else if (!out.Is(in)) {
4946 __ Mov(out, in);
4949 __ Ror(out, in, RegisterFrom(rhs));
4956 // rotations as sub-word sized rotations in the other direction) as appropriate.
4969 // For rotates over a word in size, 'pre-rotate' by 32-bits to keep rotate
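
The rotation fragments above reduce any 64-bit rotation to a non-negative amount in [0, 63] and handle amounts of 32 or more by "pre-rotating" 32 bits, i.e. swapping the two words, leaving only a sub-word rotation. A minimal plain-C++ model of that word-wise rotate-right; the function name is illustrative:

#include <cassert>
#include <cstdint>
#include <utility>

uint64_t RorLongViaWords(uint32_t lo, uint32_t hi, unsigned amount) {
  unsigned rot = amount & 63u;             // map to the positive range [0, 63]
  if (rot >= 32u) {                        // "pre-rotate" by 32 bits: swap the words
    std::swap(lo, hi);
    rot -= 32u;
  }
  if (rot != 0u) {                         // remaining sub-word rotation
    uint32_t new_lo = (lo >> rot) | (hi << (32u - rot));
    uint32_t new_hi = (hi >> rot) | (lo << (32u - rot));
    lo = new_lo;
    hi = new_hi;
  }
  return (static_cast<uint64_t>(hi) << 32) | lo;
}

int main() {
  uint64_t v = 0x0123456789abcdefULL;
  for (unsigned n = 0; n < 64; ++n) {
    uint64_t expected = (n == 0) ? v : ((v >> n) | (v << (64 - n)));
    assert(RorLongViaWords(static_cast<uint32_t>(v),
                           static_cast<uint32_t>(v >> 32), n) == expected);
  }
  return 0;
}
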
5322 // blx in T32 has only a 16-bit encoding, which is why a stricter check for the scope is used.
5391 Location in = locations->InAt(0);
5398 __ Mvn(LowRegisterFrom(out), LowRegisterFrom(in));
5399 __ Mvn(HighRegisterFrom(out), HighRegisterFrom(in));
5482 // To branch on the FP compare result we transfer FPSCR to APSR (encoded as PC in VMRS).
5573 // We need a load followed by store. (The address used in a STREX instruction must
5574 // be the same as the address in the most recently executed LDREX instruction.)
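
The two lines above state the exclusive-access rule behind atomic wide stores: a STREX only succeeds against the address tagged by the most recent LDREX. A minimal plain-C++ model of just that address-tagging behaviour (not real ARM semantics; the class and method names are made up):

#include <cassert>
#include <cstdint>

struct ExclusiveMonitor {
  const uint32_t* tagged = nullptr;

  uint32_t Ldrex(const uint32_t* addr) {   // load-exclusive: tag the address
    tagged = addr;
    return *addr;
  }
  bool Strex(uint32_t* addr, uint32_t value) {
    const uint32_t* t = tagged;
    tagged = nullptr;                      // a store-exclusive always clears the tag
    if (t != addr) return false;           // wrong (or no) tagged address: store fails
    *addr = value;
    return true;
  }
};

int main() {
  uint32_t field = 5, other = 7;
  ExclusiveMonitor m;
  uint32_t v = m.Ldrex(&field);
  assert(!m.Strex(&other, v + 1));         // different address: the store fails
  v = m.Ldrex(&field);
  assert(m.Strex(&field, v + 1));          // same address as the LDREX: it succeeds
  assert(field == 6 && other == 7);
  return 0;
}
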
5662 // Note that in the case where `value` is a null reference,
5724 // Longs and doubles are handled in the switch.
5726 // TODO(VIXL): Here and for other calls to `MaybeRecordImplicitNullCheck` in this method, we
5764 // The output overlaps in case of volatile long: we don't want the
5766 // object's location. Likewise, in the case of an object field get
5789 // path in CodeGeneratorARMVIXL::GenerateFieldLoadWithBakerReadBarrier.
5921 // Note that a potential implicit null check is handled in this
5977 // Potential implicit null checks, in the case of reference or
5978 // double fields, are handled in the previous switch statement.
5981 // TODO(VIXL): Here and for other calls to `MaybeRecordImplicitNullCheck` in this method, we
5991 // Memory barriers, in the case of references, are also handled
5992 // in the previous switch statement.
6261 // The output overlaps in the case of an object array get with
6270 // path in CodeGeneratorARMVIXL::GenerateArrayLoadWithBakerReadBarrier.
6361 // input instruction has done it already. See the comment in
6405 // Note that a potential implicit null check is handled in this
6427 // TODO(VIXL): Here and for other calls to `MaybeRecordImplicitNullCheck` in this method,
6443 // input instruction has done it already. See the comment in
6516 // Potential implicit null checks, in the case of reference
6517 // arrays, are handled in the previous switch statement.
6584 // input instruction has done it already. See the comment in
6602 // See the comment in instruction_simplifier_shared.cc.
6663 // are performed without read barriers. This is fine, even in
6664 // the case where a class object is in the from-space after
6667 // negative, in which case we would take the ArraySet slow
6709 // Note that in the case where `value` is a null reference,
6803 // Objects are handled in the switch.
7441 // Even if the initialized flag is set, we may be in a situation where caches are not synced
7692 // we check that the output is in a low register, so that a 16-bit MOV
7761 // we check that the output is in a low register, so that a 16-bit MOV
7776 // IT block), and it has the same condition, `eq`, so in that case the MOV
7819 // we check that the output is in a low register, so that a 16-bit MOV
7909 LocationSummary::kNoCall; // In fact, call on a fatal (non-returning) slow path.
8007 // If the class reference currently in `temp` is null, jump to the slow path to throw the
8039 // If the class reference currently in `temp` is null, jump to the slow path to throw the
8304 // TODO(VIXL): Remove optimizations in the helper when they are implemented in vixl.
8329 // TODO(VIXL): Remove optimizations in the helper when they are implemented in vixl.
8352 // TODO(VIXL): Remove optimizations in the helper when they are implemented in vixl.
8487 // in the following move operation, as we will need it for the
8547 // Query `art::Thread::Current()->GetIsGcMarking()` (stored in
8583 // Query `art::Thread::Current()->GetIsGcMarking()` (stored in
8651 // Query `art::Thread::Current()->GetIsGcMarking()` (stored in the
8653 // path to mark the reference. Then, in the slow path, check the
8654 // gray bit in the lock word of the reference's holder (`obj`) to
8744 // Query `art::Thread::Current()->GetIsGcMarking()` (stored in the
8746 // path to mark the reference. Then, in the slow path, check the
8747 // gray bit in the lock word of the reference's holder (`obj`) to
8814 // Query `art::Thread::Current()->GetIsGcMarking()` (stored in the
8816 // path to mark the reference. Then, in the slow path, check the
8817 // gray bit in the lock word of the reference's holder (`obj`) to
8859 // Query `art::Thread::Current()->GetIsGcMarking()` (stored in the
8861 // path to update the reference field within `obj`. Then, in the
8862 // slow path, check the gray bit in the lock word of the reference's
9059 Location callee_method = temp; // For all kinds except kRecursive, callee will be in temp.
9115 // blx in T32 has only a 16-bit encoding, which is why a stricter check for the scope is used.
9136 // intrinsics may have put the receiver in a different register. In the intrinsics
9153 // However this is not required in practice, as this is an
9157 // concurrent copying collector may not in the future).
9168 // blx in T32 has only a 16-bit encoding, which is why a stricter check for the scope is used.
9377 // and they can be encoded in the instruction without making use of the IP register.
9416 // Check whether the value is in the table, jump to default block if not.
9456 // TODO: Consider pairs in the parallel move resolver, then this could be nicely merged