
Lines Matching full:half

450    register low half has the same guest state offset as a reference to
2428 /* new eflags in hi half r64; new value in lo half r64 */
3974 getting back a 64-bit value, the lower half of which
3975 is the FPROUND value to store, and the upper half of
6753 dst is ireg and sz==4, zero out top half of it. */
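The hit at 6753 reflects the amd64 sub-register write rule: a 32-bit (sz==4) write to an integer register zeroes the top half of the full 64-bit register, while 16- and 8-bit writes leave the remaining bits untouched. A minimal sketch of that rule (helper name hypothetical, not from the VEX sources):

```c
#include <stdint.h>

/* Model an amd64 write of `val` (low sz bytes significant) into a
   64-bit integer register whose old value is old64. */
static uint64_t put_ireg(uint64_t old64, uint64_t val, int sz)
{
    switch (sz) {
        case 8: return val;
        case 4: return (uint32_t)val;                     /* top half zeroed */
        case 2: return (old64 & ~0xFFFFULL) | (val & 0xFFFF);
        case 1: return (old64 & ~0xFFULL)   | (val & 0xFF);
        default: return old64;
    }
}
```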
6938 /* We can only do a 64-bit memory read, so the upper half of the
8317 half xmm */
8389 /* 0F 2D = CVTPS2PI -- convert 2 x F32 in mem/low half xmm to 2 x
8391 /* 0F 2C = CVTTPS2PI -- convert 2 x F32 in mem/low half xmm to 2 x
8517 lower half of which is the SSEROUND value to store, and the
8518 upper half of which is the emulation-warning token which may
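Lines 8517-8518 describe a helper that packs two 32-bit results into one 64-bit return value: the SSEROUND mode in the low half and an emulation-warning token in the high half. A sketch of the packing scheme (function names hypothetical):

```c
#include <stdint.h>

/* Pack a warning token (hi half) and a rounding mode (lo half)
   into a single 64-bit value, as the hit lines describe. */
static uint64_t pack_sseround(uint32_t warn, uint32_t round)
{
    return ((uint64_t)warn << 32) | round;
}
static uint32_t lo_half(uint64_t v) { return (uint32_t)v; }
static uint32_t hi_half(uint64_t v) { return (uint32_t)(v >> 32); }
```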
8624 /* 0F 16 = MOVHPS -- move from mem to high half of XMM. */
8625 /* 0F 16 = MOVLHPS -- move from lo half to hi half of XMM. */
8645 /* 0F 17 = MOVHPS -- move from high half of XMM to mem. */
8661 /* 0F 12 = MOVLPS -- move from mem to low half of XMM. */
8662 /* 0F 12 = MOVHLPS -- move from hi half to lo half of XMM. */
8683 /* 0F 13 = MOVLPS -- move from low half of XMM to mem. */
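The MOVHPS/MOVLPS/MOVLHPS/MOVHLPS hits above all move one 64-bit half of an XMM register. Modelling the 128-bit register as two 64-bit halves makes the reg-to-reg variants one-liners (struct and names are illustrative, not the VEX representation):

```c
#include <stdint.h>

typedef struct { uint64_t lo, hi; } XMM;  /* hypothetical 128-bit model */

/* MOVLHPS: copy the lo half of src into the hi half of dst. */
static XMM movlhps(XMM dst, XMM src) { dst.hi = src.lo; return dst; }

/* MOVHLPS: copy the hi half of src into the lo half of dst. */
static XMM movhlps(XMM dst, XMM src) { dst.lo = src.hi; return dst; }
```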
8884 /* 0F C4 = PINSRW -- get 16 bits from E(mem or low half ireg) and
8990 /* 0F E4 = PMULHUW -- 16x4 hi-half of unsigned widening multiply */
9372 /* F3 0F E6 = CVTDQ2PD -- convert 2 x I32 in mem/lo half xmm to 2 x
9443 lo half xmm(G), and zero upper half */
9537 lo half xmm(G), and zero upper half */
9661 /* 0F 5A = CVTPS2PD -- convert 2 x F32 in low half mem/xmm to 2 x
9692 /* F2 0F 2D = CVTSD2SI -- convert F64 in mem/low half xmm to
9694 /* F2 0F 2C = CVTTSD2SI -- convert F64 in mem/low half xmm to
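The CVTSD2SI/CVTTSD2SI pair differs only in rounding: the TT form truncates toward zero, while CVTSD2SI honours the current SSE rounding mode, which defaults to round-to-nearest, ties-to-even. A self-contained sketch (model names hypothetical; real hardware consults MXCSR):

```c
#include <stdint.h>

static int64_t cvttsd2si_model(double d)
{
    return (int64_t)d;                       /* C casts truncate toward zero */
}

static int64_t cvtsd2si_model(double d)      /* nearest, ties to even */
{
    int64_t t = (int64_t)d;                  /* truncated part */
    double frac = d - (double)t;
    if (frac > 0.5  || (frac == 0.5  && (t & 1))) t++;
    if (frac < -0.5 || (frac == -0.5 && (t & 1))) t--;
    return t;
}
```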
9731 /* F2 0F 5A = CVTSD2SS -- convert F64 in mem/low half xmm to F32 in
9762 half xmm */
9789 low half xmm(G) */
9815 lo half xmm(G), and zero upper half, rounding towards zero */
10104 /* F2 0F D6 = MOVDQ2Q -- move from E (lo half xmm, not mem) to G (mmx). */
10121 /* 66 0F 16 = MOVHPD -- move from mem to high half of XMM. */
10139 /* 66 0F 17 = MOVHPD -- move from high half of XMM to mem. */
10156 /* 66 0F 12 = MOVLPD -- move from mem to low half of XMM. */
10173 /* 66 0F 13 = MOVLPD -- move from low half of XMM to mem. */
10290 /* 66 0F D6 = MOVQ -- move 64 bits from G (lo half xmm) to E (mem
10291 or lo half xmm). */
10296 /* dst: lo half copied, hi half zeroed */
10307 /* F3 0F D6 = MOVQ2DQ -- move from E (mmx) to G (lo half xmm, zero
10308 hi half). */
10325 /* F3 0F 7E = MOVQ -- move 64 bits from E (mem or lo half xmm) to
10326 G (lo half xmm). Upper half of G is zeroed out. */
10327 /* F2 0F 10 = MOVSD -- move 64 bits from E (mem or lo half xmm) to
10328 G (lo half xmm). If E is mem, upper half of G is zeroed out.
10329 If E is reg, upper half of G is unchanged. */
10358 /* F2 0F 11 = MOVSD -- move 64 bits from G (lo half xmm) to E (mem
10359 or lo half xmm). */
10712 /* 66 0F C4 = PINSRW -- get 16 bits from E(mem or low half ireg) and
10842 /* 66 0F E4 = PMULHUW -- 16x8 hi-half of unsigned widening multiply */
10849 /* 66 0F E5 = PMULHW -- 16x8 hi-half of signed widening multiply */
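PMULHUW and PMULHW keep the hi half of a 16x16-bit widening multiply, per lane, in unsigned and signed flavours respectively. One lane of each, sketched:

```c
#include <stdint.h>

/* Hi half of one unsigned 16x16 widening multiply (PMULHUW lane). */
static uint16_t mulhi_u16(uint16_t a, uint16_t b)
{
    return (uint16_t)(((uint32_t)a * b) >> 16);
}

/* Hi half of one signed 16x16 widening multiply (PMULHW lane). */
static int16_t mulhi_s16(int16_t a, int16_t b)
{
    return (int16_t)(((int32_t)a * b) >> 16);
}
```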
10897 0 to form lower 64-bit half and lanes 2 x 2 to form upper 64-bit
10898 half */
11025 /* F3 0F 70 = PSHUFHW -- rearrange upper half 4x16 from E(xmm or
11026 mem) to G(xmm), and copy lower half */
11069 /* F2 0F 70 = PSHUFLW -- rearrange lower half 4x16 from E(xmm or
11070 mem) to G(xmm), and copy upper half */
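PSHUFLW rearranges the four 16-bit lanes of the lower 64-bit half under control of 2-bit imm8 fields, copying the upper half through unchanged (PSHUFHW does the mirror image). Modelling just the shuffled half as a uint64_t (function name hypothetical):

```c
#include <stdint.h>

/* PSHUFLW on the lower 64-bit half: dst lane i takes src lane
   ((imm8 >> 2*i) & 3).  The xmm's upper half is simply copied
   through by the real instruction and is not modelled here. */
static uint64_t pshuflw_lo(uint64_t lo, uint8_t imm8)
{
    uint64_t r = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t lane = (lo >> (16 * ((imm8 >> (2 * i)) & 3))) & 0xFFFF;
        r |= lane << (16 * i);
    }
    return r;
}
```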
12557 /* And the same for the lower half of the result. What fun. */