
Lines Matching refs:And

336   // only set during incremental collection, and then it's also guaranteed that
396 // catch stores of smis and stores into the young generation.
461 // catch stores of Smis and stores into young gen.
472 // Save caller-saved registers. js_function and code_entry are in the
493 // Restore caller-saved registers (including js_function and code_entry).
515 // Store pointer to buffer and increment buffer top.
589 // Push and pop all registers that can hold pointers.
954 // The following instructions must remain together and unmodified
1004 // Drop the execution stack down to the frame pointer and restore
1005 // the caller frame pointer, return address and constant pool pointer.
1027 // we reserve a slot for LK and push the previous SP which is captured
1052 // Reserve room for saved entry sp and code object.
1061 // Save the frame pointer and the context in top.
1073 // since the sp slot and code slot were pushed after the fp.
1078 // Allocate and align the frame preparing for calling the runtime
1126 // Calculate the stack location of the saved doubles and restore them.
1137 // Restore current context from top and clear it in debug mode.
1147 // Tear down the exit frame, pop the arguments, and return.
1203 // Restore caller's frame pointer and return address now as they will be
1208 // callee arguments corruption (source and destination areas could overlap).
1210 // Both src_reg and dst_reg are pointing to the word after the one to copy,
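The hits at 1203-1210 describe an overlap-safe argument copy: both pointers start one word past the first word to copy and the loop pre-decrements, so a destination that overlaps the source from above is copied correctly. A minimal sketch of that pattern (the function name and signature are illustrative, not V8's):

```cpp
#include <cstdint>
#include <cstddef>

// Downward word copy: src_end and dst_end point one word PAST the first
// word to copy, mirroring the comment at line 1210. Pre-decrementing and
// copying from high addresses to low makes an overlapping destination
// above the source safe (the classic memmove direction rule).
void CopyWordsDown(intptr_t* src_end, intptr_t* dst_end, size_t count) {
  while (count--) {
    --src_end;
    --dst_end;
    *dst_end = *src_end;
  }
}
```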
1240 // Check whether the expected and actual arguments count match. If not,
1247 // up actual and expected registers according to the contract if values are
1264 // like we have a match between expected and actual number of
1423 // Get the function and setup the context.
1488 // Pop the Next Handler into r3 and store it into Handler Address reference.
1531 // Read the first word and compare to the native_context_map.
1577 // ComputeIntegerHash in utils.h and KeyedLoadGenericStub in
1636 // t0 - holds the untagged key on entry and holds the hash once computed.
1652 // Use t2 for index calculations and keep the hash intact in t0.
1690 // Get the value at the masked, scaled index and return.
1720 // Check relative positions of allocation top and limit addresses.
1739 // Load allocation top into result and allocation limit into ip.
1774 // Calculate new top and bail out if new space is exhausted. Use result
1812 // |object_size| and |result_end| may overlap if the DOUBLE_ALIGNMENT flag
1818 // Check relative positions of allocation top and limit addresses.
1827 // Set up allocation top address and allocation limit registers.
1835 // Load allocation top into result and allocation limit into alloc_limit.
1870 // Calculate new top and bail out if new space is exhausted. Use result
1902 // |object_size| and |result_end| may overlap if the DOUBLE_ALIGNMENT flag
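The allocation comments around lines 1720-1902 all describe the same bump-pointer scheme: load the allocation top, compute the new top, bail out if it exceeds the allocation limit, otherwise store the new top back. A sketch of that logic, with illustrative names (the real code works on registers and external references, not a struct):

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical stand-in for the top/limit pair the comments refer to.
struct AllocationSpace {
  uintptr_t top;
  uintptr_t limit;
};

// Bump-pointer allocation: succeed by advancing top, fail ("new space is
// exhausted") when the new top would pass the limit.
bool TryAllocate(AllocationSpace& space, size_t size, uintptr_t* result) {
  uintptr_t new_top = space.top + size;
  if (new_top > space.limit) return false;  // bail out to the runtime
  *result = space.top;                      // old top is the new object
  space.top = new_top;                      // increment buffer top
  return true;
}
```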
2017 // Set the map, length and hash field.
2038 // Set the map, length and hash field.
2191 // C = A+B; C overflows if A/B have same sign and C has diff sign than A
2234 // C = A-B; C overflows if A/B have diff signs and C has diff sign than A
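The comments at 2191 and 2234 state the classic sign-rule for detecting signed overflow without a dedicated overflow flag. A self-contained sketch of both rules (helper names are illustrative; the macro-assembler emits branch sequences rather than returning a bool):

```cpp
#include <cstdint>

// C = A + B overflows iff A and B have the same sign and C's sign
// differs from A's (line 2191). The arithmetic is done on unsigned
// values so the wraparound itself is well defined in C++.
bool AddOverflows(int32_t a, int32_t b) {
  int32_t c = static_cast<int32_t>(static_cast<uint32_t>(a) +
                                   static_cast<uint32_t>(b));
  return ((a ^ b) >= 0) && ((a ^ c) < 0);
}

// C = A - B overflows iff A and B have different signs and C's sign
// differs from A's (line 2234).
bool SubOverflows(int32_t a, int32_t b) {
  int32_t c = static_cast<int32_t>(static_cast<uint32_t>(a) -
                                   static_cast<uint32_t>(b));
  return ((a ^ b) < 0) && ((a ^ c) < 0);
}
```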
2346 // If the prototype or initial map is the hole, don't return it and
2381 // cached in the hash field and the number of bits reserved for it does not
2443 // convert back and compare
2572 // We rotate by kSmiShift amount, and extract the num_least_bits
2597 // should remove this need and make the runtime routine entry code
2851 // otherwise, the UnTag operation will kill the CC and we cannot
3008 // Test that both first and second are sequential one-byte strings.
3044 // is full and a scavenge is needed.
3050 // Allocate an object in the heap for the heap number and tag it as a heap
3257 // Make stack end at alignment and make room for stack arguments
3447 // Since both black and grey have a 1 in the first position and white does
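The hit at 3447 notes that because black and grey mark patterns both have a 1 in the first position and white does not, a "is this object white?" check reduces to a single bit test. A sketch under that assumption (the bit encodings below are illustrative, not V8's actual marking-bitmap layout):

```cpp
#include <cstdint>

// Illustrative mark encodings chosen so that black and grey share a set
// first bit while white does not, as the comment at line 3447 describes.
constexpr uint8_t kWhite = 0b00;
constexpr uint8_t kGrey  = 0b01;
constexpr uint8_t kBlack = 0b11;

// With that invariant, whiteness is one AND plus a zero test.
bool IsWhite(uint8_t marks) { return (marks & 0b01) == 0; }
```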
3506 // In 0-255 range, round and truncate.
3527 And(dst, Operand(Map::EnumLengthBits::kMask));
3599 // and only use the generic version when we require a fixed sequence
3670 // object sits on the page boundary as no memento can follow and we cannot
3777 // have to make sure the src and dst are reg pairs
4149 // In scenario where we have dst = src - dst, we need to swap and negate
4166 // In scenario where we have dst = src - dst, we need to swap and negate
4184 // In scenario where we have dst = src - dst, we need to swap and negate
4275 // AND 32-bit - dst = dst & src
4276 void MacroAssembler::And(Register dst, Register src) { nr(dst, src); }
4278 // AND Pointer Size - dst = dst & src
4281 // Non-clobbering AND 32-bit - dst = src1 & src2
4282 void MacroAssembler::And(Register dst, Register src1, Register src2) {
4295 And(dst, src2);
4298 // Non-clobbering AND pointer size - dst = src1 & src2
4315 // AND 32-bit (Reg - Mem)
4316 void MacroAssembler::And(Register dst, const MemOperand& opnd) {
4324 // AND Pointer Size (Reg - Mem)
4330 And(dst, opnd);
4334 // AND 32-bit - dst = dst & imm
4335 void MacroAssembler::And(Register dst, const Operand& opnd) { nilf(dst, opnd); }
4337 // AND Pointer Size - dst = dst & imm
4347 And(dst, opnd);
4351 // AND 32-bit - dst = src & imm
4352 void MacroAssembler::And
4357 // AND Pointer Size - dst = src & imm
4392 // If we are &'ing zero, we can just whack the dst register and skip copy
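The overloads listed at 4275-4357, together with the note at 5396 that the S390 AND instruction clobbers its source, imply the pattern behind the three-operand `And(dst, src1, src2)`: copy one source into `dst` first, then apply the two-operand AND. A sketch of that pattern (the `Register` type and helpers here are stand-ins; the real MacroAssembler emits machine instructions and also handles the case where `dst` aliases `src2`, which this sketch assumes away):

```cpp
#include <cstdint>

// Stand-in for a machine register.
struct Register { uint32_t value; };

void Move(Register& dst, const Register& src) { dst.value = src.value; }

// Two-operand AND, like the nr instruction: dst = dst & src.
void And2(Register& dst, const Register& src) { dst.value &= src.value; }

// Non-clobbering three-operand AND: copy src1 into dst when they differ,
// then AND in src2, leaving both sources intact.
void And3(Register& dst, const Register& src1, const Register& src2) {
  if (&dst != &src1) Move(dst, src1);
  And2(dst, src2);
}
```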
4747 // Branch On Count. Decrement R1, and branch if R1 != 0.
4976 // Load 32-bits and sign extend if necessary.
4985 // Load 32-bits and sign extend if necessary.
5011 // Load 32-bits and zero extend if necessary.
5087 // Load And Test (Reg <- Reg)
5092 // Load And Test
5104 // Load And Test Pointer Sized (Reg <- Reg)
5113 // Load And Test 32-bit (Reg <- Mem)
5118 // Load And Test Pointer Sized (Reg <- Mem)
5129 // for 32bit and 64bit we all use 64bit floating point regs
5148 // and convert to Double Precision (64-bit)
5174 // and store resulting Float32 to memory
5215 // Loads 16-bits half-word value from memory and sign extends to pointer
5396 // S390 AND instr clobbers source. Make a copy if necessary