Lines Matching full:half
5068 getting back a 64-bit value, the lower half of which
5069 is the FPROUND value to store, and the upper half of
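The two matched lines above (and the similar SSEROUND lines further down) describe a 64-bit return value whose lower 32 bits carry the new rounding-mode setting and whose upper 32 bits carry an emulation-warning token. As a hedged sketch only, not the VEX code itself, and with invented names, the split of such a packed value looks like this in C:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: split a packed 64-bit result into its halves. */
static void split_u64 ( uint64_t packed,
                        uint32_t* lo_round_mode, uint32_t* hi_emwarn )
{
   *lo_round_mode = (uint32_t)packed;          /* lower 32 bits */
   *hi_emwarn     = (uint32_t)(packed >> 32);  /* upper 32 bits */
}

int main ( void )
{
   uint32_t rm, ew;
   split_u64( ((uint64_t)7 << 32) | 3u, &rm, &ew );
   printf("round=%u warn=%u\n", rm, ew);   /* prints: round=3 warn=7 */
   return 0;
}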
7492 zeroes the top half erroneously when doing btl due to
7961 //.. dst is ireg and sz==4, zero out top half of it. */
8157 /* We can only do a 64-bit memory read, so the upper half of the
9437 half xmm */
9531 /* 0F 2D = CVTPS2PI -- convert 2 x F32 in mem/low half xmm to 2 x
9533 /* 0F 2C = CVTTPS2PI -- convert 2 x F32 in mem/low half xmm to 2 x
9674 lower half of which is the SSEROUND value to store, and the
9675 upper half of which is the emulation-warning token which may
9791 /* 0F 16 = MOVHPS -- move from mem to high half of XMM. */
9792 /* 0F 16 = MOVLHPS -- move from lo half to hi half of XMM. */
9814 /* 0F 17 = MOVHPS -- move from high half of XMM to mem. */
9832 /* 0F 12 = MOVLPS -- move from mem to low half of XMM. */
9833 /* 0F 12 = MOVHLPS -- move from hi half to lo half of XMM. */
9856 /* 0F 13 = MOVLPS -- move from low half of XMM to mem. */
9885 the 32-bit half is written. However, testing on a Core2
9888 of the default rule that says "if the lower half of a 64-bit
9889 register is written, the upper half is zeroed". By using
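The rule quoted above is the standard amd64 zero-extension behaviour: a write to the 32-bit low half of a general-purpose register clears the upper 32 bits. A minimal model of that rule, with hypothetical names and no connection to the actual IR generation:

#include <stdint.h>
#include <stdio.h>

/* Model a 32-bit write to a 64-bit GPR: the old upper half is lost. */
static uint64_t write_low32 ( uint64_t reg, uint32_t value )
{
   (void)reg;                 /* old contents are discarded          */
   return (uint64_t)value;    /* upper 32 bits of the result are 0   */
}

int main ( void )
{
   uint64_t rax = 0xDEADBEEF00000000ULL;
   rax = write_low32(rax, 0x12345678u);
   printf("%#llx\n", (unsigned long long)rax);   /* prints 0x12345678 */
   return 0;
}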
10092 /* 0F C4 = PINSRW -- get 16 bits from E(mem or low half ireg) and
10205 /* 0F E4 = PMULHUW -- 16x4 hi-half of unsigned widening multiply */
10580 /* F3 0F E6 = CVTDQ2PD -- convert 2 x I32 in mem/lo half xmm to 2 x
10652 lo half xmm(G), and zero upper half, rounding towards zero */
10654 lo half xmm(G), according to prevailing rounding mode, and zero
10655 upper half */
10760 lo half xmm(G), rounding according to prevailing SSE rounding
10761 mode, and zero upper half */
10898 /* 0F 5A = CVTPS2PD -- convert 2 x F32 in low half mem/xmm to 2 x
10931 when sz==4 -- convert F64 in mem/low half xmm to I32 in ireg,
10933 when sz==8 -- convert F64 in mem/low half xmm to I64 in ireg,
10937 when sz==4 -- convert F64 in mem/low half xmm to I32 in ireg,
10939 when sz==8 -- convert F64 in mem/low half xmm to I64 in ireg,
10983 /* F2 0F 5A = CVTSD2SS -- convert F64 in mem/low half xmm to F32 in
11015 when sz==4 -- convert I32 in mem/ireg to F64 in low half xmm
11016 when sz==8 -- convert I64 in mem/ireg to F64 in low half xmm
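The CVTSI2SD lines above only mention the low half of the destination xmm; the sketch below additionally assumes (this is not stated in the matched lines, though it is consistent with the instruction set) that the upper half of the register is left unchanged. Names and structure are illustrative only:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef struct { uint64_t lo, hi; } Xmm;   /* xmm as two 64-bit halves */

/* Model of CVTSI2SD: convert the integer and write only the low half. */
static void cvtsi2sd_model ( Xmm* dst, int64_t src )
{
   double d = (double)src;      /* I32/I64 -> F64                      */
   memcpy(&dst->lo, &d, 8);     /* stored into the low half            */
   /* dst->hi deliberately untouched: upper half assumed preserved     */
}

int main ( void )
{
   Xmm x = { 0, 0x1111111111111111ULL };
   cvtsi2sd_model(&x, 42);
   printf("hi=%#llx\n", (unsigned long long)x.hi);   /* unchanged */
   return 0;
}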
11069 low half xmm(G) */
11334 /* F2 0F D6 = MOVDQ2Q -- move from E (lo half xmm, not mem) to G (mmx). */
11352 /* 66 0F 16 = MOVHPD -- move from mem to high half of XMM. */
11370 /* 66 0F 17 = MOVHPD -- move from high half of XMM to mem. */
11387 /* 66 0F 12 = MOVLPD -- move from mem to low half of XMM. */
11405 /* 66 0F 13 = MOVLPD -- move from low half of XMM to mem. */
11533 /* 66 0F D6 = MOVQ -- move 64 bits from G (lo half xmm) to E (mem
11534 or lo half xmm). */
11541 /* dst: lo half copied, hi half zeroed */
11552 /* F3 0F D6 = MOVQ2DQ -- move from E (mmx) to G (lo half xmm, zero
11553 hi half). */
11571 /* F3 0F 7E = MOVQ -- move 64 bits from E (mem or lo half xmm) to
11572 G (lo half xmm). Upper half of G is zeroed out. */
11573 /* F2 0F 10 = MOVSD -- move 64 bits from E (mem or lo half xmm) to
11574 G (lo half xmm). If E is mem, upper half of G is zeroed out.
11575 If E is reg, upper half of G is unchanged. */
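The MOVSD load comment above spells out two distinct cases, and a small model with the xmm register represented as two 64-bit halves makes the difference concrete. This is an illustrative sketch with invented names, not the decoder's implementation:

#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t lo, hi; } Xmm;

/* F2 0F 10 with a memory source: low half loaded, high half zeroed. */
static void movsd_load_from_mem ( Xmm* g, uint64_t mem64 )
{
   g->lo = mem64;
   g->hi = 0;
}

/* F2 0F 10 with a register source: low half copied, high half kept. */
static void movsd_load_from_reg ( Xmm* g, const Xmm* e )
{
   g->lo = e->lo;
   /* g->hi unchanged */
}

int main ( void )
{
   Xmm g = { 1, 0xAAAAAAAAAAAAAAAAULL }, e = { 2, 3 };
   movsd_load_from_reg(&g, &e);
   printf("reg case: hi=%#llx\n", (unsigned long long)g.hi); /* kept   */
   movsd_load_from_mem(&g, 0x42);
   printf("mem case: hi=%#llx\n", (unsigned long long)g.hi); /* zeroed */
   return 0;
}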
11607 /* F2 0F 11 = MOVSD -- move 64 bits from G (lo half xmm) to E (mem
11608 or lo half xmm). */
11999 /* 66 0F C4 = PINSRW -- get 16 bits from E(mem or low half ireg) and
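For the PINSRW lines, a one-function model may help: the low 16 bits of the source replace a single lane selected by the immediate, and every other lane is untouched. This sketch assumes the 8-lane xmm form and uses made-up names:

#include <stdint.h>
#include <stdio.h>

/* Model of PINSRW (xmm form): insert the low word of src into lane
   (imm8 & 7) of an 8 x 16-bit destination; other lanes unchanged. */
static void pinsrw_model ( uint16_t dst[8], uint32_t src, uint8_t imm8 )
{
   dst[imm8 & 7] = (uint16_t)src;
}

int main ( void )
{
   uint16_t x[8] = {0,1,2,3,4,5,6,7};
   pinsrw_model(x, 0xCAFEBABE, 5);
   printf("%#x %u\n", x[5], x[4]);   /* prints: 0xbabe 4 */
   return 0;
}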
12139 /* 66 0F E4 = PMULHUW -- 16x8 hi-half of unsigned widening multiply */
12147 /* 66 0F E5 = PMULHW -- 16x8 hi-half of signed widening multiply */
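The phrase "hi-half of a widening multiply" in the two lines above means: multiply two 16-bit lanes into a 32-bit product and keep only the top 16 bits. A per-lane model with hypothetical names follows; the real instructions do this across all eight lanes of an xmm register:

#include <stdint.h>
#include <stdio.h>

static uint16_t pmulhuw_lane ( uint16_t a, uint16_t b )
{
   uint32_t wide = (uint32_t)a * (uint32_t)b;        /* 16x16 -> 32     */
   return (uint16_t)(wide >> 16);                    /* keep high half  */
}

static int16_t pmulhw_lane ( int16_t a, int16_t b )
{
   int32_t wide = (int32_t)a * (int32_t)b;           /* signed widening */
   return (int16_t)(uint16_t)((uint32_t)wide >> 16); /* high 16 bits    */
}

int main ( void )
{
   printf("%u %d\n", pmulhuw_lane(0xFFFF, 2), pmulhw_lane(-1, 2));
   /* prints: 1 -1 */
   return 0;
}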
12198 0 to form lower 64-bit half and lanes 2 x 2 to form upper 64-bit
12199 half */
12331 /* F3 0F 70 = PSHUFHW -- rearrange upper half 4x16 from E(xmm or
12332 mem) to G(xmm), and copy lower half */
12377 /* F2 0F 70 = PSHUFLW -- rearrange lower half 4x16 from E(xmm or
12378 mem) to G(xmm), and copy upper half */
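PSHUFHW and PSHUFLW, described above, each shuffle one 16-bit-lane half of the source under a 2-bits-per-lane immediate and pass the other half through unchanged. A sketch of the PSHUFLW case, with invented names and the register modelled as an array of eight lanes (dst and src assumed distinct):

#include <stdint.h>
#include <stdio.h>

/* Model of PSHUFLW: lower four lanes rearranged per imm8, upper four
   lanes copied through unchanged. */
static void pshuflw_model ( uint16_t dst[8], const uint16_t src[8],
                            uint8_t imm8 )
{
   int i;
   for (i = 0; i < 4; i++)                 /* lower half: shuffled  */
      dst[i] = src[(imm8 >> (2*i)) & 3];
   for (i = 4; i < 8; i++)                 /* upper half: copied    */
      dst[i] = src[i];
}

int main ( void )
{
   uint16_t s[8] = {10,11,12,13,14,15,16,17}, d[8];
   pshuflw_model(d, s, 0x1B);              /* 0x1B reverses lanes 0..3 */
   printf("%u %u %u %u | %u\n", d[0], d[1], d[2], d[3], d[4]);
   /* prints: 13 12 11 10 | 14 */
   return 0;
}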
13915 /* And the same for the lower half of the result. What fun. */
15802 [a OR b, a OR b], from which we simply take the lower half.