      1 #!/usr/bin/env perl
      2 #
      3 # ====================================================================
      4 # Written by Andy Polyakov <appro (at] openssl.org> for the OpenSSL
      5 # project. The module is, however, dual licensed under OpenSSL and
      6 # CRYPTOGAMS licenses depending on where you obtain it. For further
      7 # details see http://www.openssl.org/~appro/cryptogams/.
      8 # ====================================================================
      9 #
     10 # March, May, June 2010
     11 #
     12 # The module implements "4-bit" GCM GHASH function and underlying
     13 # single multiplication operation in GF(2^128). "4-bit" means that it
     14 # uses 256 bytes per-key table [+64/128 bytes fixed table]. It has two
     15 # code paths: vanilla x86 and vanilla MMX. Former will be executed on
     16 # 486 and Pentium, latter on all others. MMX GHASH features so called
     17 # "528B" variant of "4-bit" method utilizing additional 256+16 bytes
     18 # of per-key storage [+512 bytes shared table]. Performance results
     19 # are for streamed GHASH subroutine and are expressed in cycles per
     20 # processed byte, less is better:
     21 #
     22 #		gcc 2.95.3(*)	MMX assembler	x86 assembler
     23 #
     24 # Pentium	105/111(**)	-		50
     25 # PIII		68 /75		12.2		24
     26 # P4		125/125		17.8		84(***)
     27 # Opteron	66 /70		10.1		30
     28 # Core2		54 /67		8.4		18
     29 #
      30 # (*)	gcc 3.4.x was observed to generate a few percent slower code,
      31 #	which is one of the reasons why the 2.95.3 results were chosen;
      32 #	another reason is the lack of 3.4.x results for older CPUs;
      33 #	comparison with MMX results is not completely fair, because C
      34 #	results are for the vanilla "256B" implementation, while
      35 #	assembler results are for "528B";-)
     36 # (**)	second number is result for code compiled with -fPIC flag,
     37 #	which is actually more relevant, because assembler code is
     38 #	position-independent;
     39 # (***)	see comment in non-MMX routine for further details;
     40 #
     41 # To summarize, it's >2-5 times faster than gcc-generated code. To
      42 # anchor it to something else, SHA1 assembler processes one byte in
      43 # 11-13 cycles on contemporary x86 cores. As for the choice of MMX in
     44 # particular, see comment at the end of the file...
     45 
     46 # May 2010
     47 #
     48 # Add PCLMULQDQ version performing at 2.10 cycles per processed byte.
     49 # The question is how close is it to theoretical limit? The pclmulqdq
     50 # instruction latency appears to be 14 cycles and there can't be more
      51 # than 2 of them executing at any given time. This means that a single
      52 # Karatsuba multiplication would take 28 cycles *plus* a few cycles for
      53 # pre- and post-processing. Then the multiplication has to be followed by
     54 # modulo-reduction. Given that aggregated reduction method [see
     55 # "Carry-less Multiplication and Its Usage for Computing the GCM Mode"
     56 # white paper by Intel] allows you to perform reduction only once in
     57 # a while we can assume that asymptotic performance can be estimated
     58 # as (28+Tmod/Naggr)/16, where Tmod is time to perform reduction
     59 # and Naggr is the aggregation factor.
     60 #
      61 # Before we proceed to this implementation, let's have a closer look at
     62 # the best-performing code suggested by Intel in their white paper.
     63 # By tracing inter-register dependencies Tmod is estimated as ~19
     64 # cycles and Naggr chosen by Intel is 4, resulting in 2.05 cycles per
      65 # processed byte. As implied, this is quite an optimistic estimate,
     66 # because it does not account for Karatsuba pre- and post-processing,
     67 # which for a single multiplication is ~5 cycles. Unfortunately Intel
     68 # does not provide performance data for GHASH alone. But benchmarking
     69 # AES_GCM_encrypt ripped out of Fig. 15 of the white paper with aadt
      70 # alone resulted in 2.46 cycles per byte out of a 16KB buffer. Note that
     71 # the result accounts even for pre-computing of degrees of the hash
     72 # key H, but its portion is negligible at 16KB buffer size.
     73 #
     74 # Moving on to the implementation in question. Tmod is estimated as
     75 # ~13 cycles and Naggr is 2, giving asymptotic performance of ...
     76 # 2.16. How is it possible that measured performance is better than
     77 # optimistic theoretical estimate? There is one thing Intel failed
      78 # to recognize. By serializing GHASH with CTR in the same subroutine,
      79 # the former's performance is really limited to the above (Tmul + Tmod/Naggr)
      80 # equation. But if the GHASH procedure is detached, the modulo-reduction
      81 # can be interleaved with Naggr-1 multiplications at instruction level
      82 # and under ideal conditions even disappear from the equation. So the
      83 # optimistic theoretical estimate for this implementation is ...
     84 # 28/16=1.75, and not 2.16. Well, it's probably way too optimistic,
      85 # at least for such small Naggr. I'd argue that (28+Tproc/Naggr)/16,
      86 # where Tproc is the time required for Karatsuba pre- and post-processing,
      87 # is a more realistic estimate. In this case it gives ... 1.91 cycles.
      88 # Or in other words, depending on how well we can interleave reduction
      89 # and one of the two multiplications, the performance should be between
     90 # 1.91 and 2.16. As already mentioned, this implementation processes
      91 # one byte out of an 8KB buffer in 2.10 cycles, while the x86_64
      92 # counterpart does it in 2.02. x86_64 performance is better, because its
      93 # larger register bank allows better interleaving of reduction and multiplication.
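#
# For reference, the figures quoted above follow from the estimate formula,
# assuming Tmul=28 and the Tmod/Tproc and Naggr values given:
#
#	perl -e 'printf "%.2f\n",(28+19/4)/16'	# Intel, Naggr=4:      ~2.05
#	perl -e 'printf "%.2f\n",(28+13/2)/16'	# this code, Naggr=2:  ~2.16
#	perl -e 'printf "%.2f\n",(28+ 5/2)/16'	# ditto with Tproc=5:  ~1.91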
     94 #
      95 # Does it make sense to increase Naggr? To start with, it's virtually
      96 # impossible in 32-bit mode, because of the limited register bank
      97 # capacity. Otherwise the improvement has to be weighed against slower
     98 # setup, as well as code size and complexity increase. As even
     99 # optimistic estimate doesn't promise 30% performance improvement,
    100 # there are currently no plans to increase Naggr.
    101 #
    102 # Special thanks to David Woodhouse <dwmw2 (at] infradead.org> for
    103 # providing access to a Westmere-based system on behalf of Intel
    104 # Open Source Technology Centre.
    105 
    106 # January 2010
    107 #
     108 # Tweaked to optimize transitions between integer and FP operations
     109 # on the same XMM register; the PCLMULQDQ subroutine was measured to
     110 # process one byte in 2.07 cycles on Sandy Bridge, and in 2.12 on Westmere.
     111 # The minor regression on Westmere is outweighed by a ~15% improvement
     112 # on Sandy Bridge. Strangely enough, an attempt to modify the 64-bit code
     113 # in a similar manner resulted in almost 20% degradation on Sandy Bridge,
     114 # where the original 64-bit code processes one byte in 1.95 cycles.
    115 
    116 $0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
    117 push(@INC,"${dir}","${dir}../../perlasm");
    118 require "x86asm.pl";
    119 
    120 &asm_init($ARGV[0],"ghash-x86.pl",$x86only = $ARGV[$#ARGV] eq "386");
    121 
    122 $sse2=0;
    123 for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }
    124 
    125 ($Zhh,$Zhl,$Zlh,$Zll) = ("ebp","edx","ecx","ebx");
    126 $inp  = "edi";
    127 $Htbl = "esi";
    128 
     130 $unroll = 0;	# Affects the x86 loop. The folded loop performs ~7% worse
     131 		# than the unrolled one, which has to be weighed against a
     132 		# 2.5x x86-specific code size reduction.
    133 
    134 sub x86_loop {
    135     my $off = shift;
    136     my $rem = "eax";
    137 
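	# Each loop iteration below performs one "4-bit" step on the 128-bit
	# accumulator Z kept in $Zhh:$Zhl:$Zlh:$Zll (most to least significant
	# dword): rem = Z&0xf; Z >>= 4; Z.hi ^= rem_4bit[rem] (the constants
	# deposited on the stack by deposit_rem_4bit); Z ^= Htable[nibble],
	# with Xi's nibbles consumed from byte 15 downwards.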
    138 	&mov	($Zhh,&DWP(4,$Htbl,$Zll));
    139 	&mov	($Zhl,&DWP(0,$Htbl,$Zll));
    140 	&mov	($Zlh,&DWP(12,$Htbl,$Zll));
    141 	&mov	($Zll,&DWP(8,$Htbl,$Zll));
    142 	&xor	($rem,$rem);	# avoid partial register stalls on PIII
    143 
     144 	# shrd practically kills P4, 2.5x deterioration, but P4 has an
     145 	# MMX code-path to execute. shrd runs a tad faster [than twice
     146 	# the shifts, moves and ors] on pre-MMX Pentium (as well as
     147 	# PIII and Core2), *but* minimizes code size, spares a register
     148 	# and thus allows the loop to be folded...
    149 	if (!$unroll) {
    150 	my $cnt = $inp;
    151 	&mov	($cnt,15);
    152 	&jmp	(&label("x86_loop"));
    153 	&set_label("x86_loop",16);
    154 	    for($i=1;$i<=2;$i++) {
    155 		&mov	(&LB($rem),&LB($Zll));
    156 		&shrd	($Zll,$Zlh,4);
    157 		&and	(&LB($rem),0xf);
    158 		&shrd	($Zlh,$Zhl,4);
    159 		&shrd	($Zhl,$Zhh,4);
    160 		&shr	($Zhh,4);
    161 		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));
    162 
    163 		&mov	(&LB($rem),&BP($off,"esp",$cnt));
    164 		if ($i&1) {
    165 			&and	(&LB($rem),0xf0);
    166 		} else {
    167 			&shl	(&LB($rem),4);
    168 		}
    169 
    170 		&xor	($Zll,&DWP(8,$Htbl,$rem));
    171 		&xor	($Zlh,&DWP(12,$Htbl,$rem));
    172 		&xor	($Zhl,&DWP(0,$Htbl,$rem));
    173 		&xor	($Zhh,&DWP(4,$Htbl,$rem));
    174 
    175 		if ($i&1) {
    176 			&dec	($cnt);
    177 			&js	(&label("x86_break"));
    178 		} else {
    179 			&jmp	(&label("x86_loop"));
    180 		}
    181 	    }
    182 	&set_label("x86_break",16);
    183 	} else {
    184 	    for($i=1;$i<32;$i++) {
    185 		&comment($i);
    186 		&mov	(&LB($rem),&LB($Zll));
    187 		&shrd	($Zll,$Zlh,4);
    188 		&and	(&LB($rem),0xf);
    189 		&shrd	($Zlh,$Zhl,4);
    190 		&shrd	($Zhl,$Zhh,4);
    191 		&shr	($Zhh,4);
    192 		&xor	($Zhh,&DWP($off+16,"esp",$rem,4));
    193 
    194 		if ($i&1) {
    195 			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
    196 			&and	(&LB($rem),0xf0);
    197 		} else {
    198 			&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
    199 			&shl	(&LB($rem),4);
    200 		}
    201 
    202 		&xor	($Zll,&DWP(8,$Htbl,$rem));
    203 		&xor	($Zlh,&DWP(12,$Htbl,$rem));
    204 		&xor	($Zhl,&DWP(0,$Htbl,$rem));
    205 		&xor	($Zhh,&DWP(4,$Htbl,$rem));
    206 	    }
    207 	}
    208 	&bswap	($Zll);
    209 	&bswap	($Zlh);
    210 	&bswap	($Zhl);
    211 	if (!$x86only) {
    212 		&bswap	($Zhh);
    213 	} else {
    214 		&mov	("eax",$Zhh);
    215 		&bswap	("eax");
    216 		&mov	($Zhh,"eax");
    217 	}
    218 }
    219 
    220 if ($unroll) {
    221     &function_begin_B("_x86_gmult_4bit_inner");
    222 	&x86_loop(4);
    223 	&ret	();
    224     &function_end_B("_x86_gmult_4bit_inner");
    225 }
    226 
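# The constants deposited below are the same rem_4bit values as at the label
# rem_4bit near the end of this file: the 16 possible reductions of the four
# bits shifted out of Z on every >>4 step (rem_8bit is the 8-bit analog).
# For reference, they could just as well be computed at build time, e.g.
# (a sketch, not used by this script):
#
#	my @rem_4bit = (0) x 16;
#	for my $i (0..15) {
#	    for my $j (0..3) {
#		$rem_4bit[$i] ^= 0xE100>>(3-$j)	if (($i>>$j)&1);
#	    }
#	}
#	# @rem_4bit is now (0x0000,0x1C20,0x3840,0x2460,...), positioned here
#	# with <<16 and at the rem_4bit label with <<$S.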
    227 sub deposit_rem_4bit {
    228     my $bias = shift;
    229 
    230 	&mov	(&DWP($bias+0, "esp"),0x0000<<16);
    231 	&mov	(&DWP($bias+4, "esp"),0x1C20<<16);
    232 	&mov	(&DWP($bias+8, "esp"),0x3840<<16);
    233 	&mov	(&DWP($bias+12,"esp"),0x2460<<16);
    234 	&mov	(&DWP($bias+16,"esp"),0x7080<<16);
    235 	&mov	(&DWP($bias+20,"esp"),0x6CA0<<16);
    236 	&mov	(&DWP($bias+24,"esp"),0x48C0<<16);
    237 	&mov	(&DWP($bias+28,"esp"),0x54E0<<16);
    238 	&mov	(&DWP($bias+32,"esp"),0xE100<<16);
    239 	&mov	(&DWP($bias+36,"esp"),0xFD20<<16);
    240 	&mov	(&DWP($bias+40,"esp"),0xD940<<16);
    241 	&mov	(&DWP($bias+44,"esp"),0xC560<<16);
    242 	&mov	(&DWP($bias+48,"esp"),0x9180<<16);
    243 	&mov	(&DWP($bias+52,"esp"),0x8DA0<<16);
    244 	&mov	(&DWP($bias+56,"esp"),0xA9C0<<16);
    245 	&mov	(&DWP($bias+60,"esp"),0xB5E0<<16);
    246 }
    247 
    249 $suffix = $x86only ? "" : "_x86";
    250 
    251 &function_begin("gcm_gmult_4bit".$suffix);
    252 	&stack_push(16+4+1);			# +1 for stack alignment
    253 	&mov	($inp,&wparam(0));		# load Xi
    254 	&mov	($Htbl,&wparam(1));		# load Htable
    255 
    256 	&mov	($Zhh,&DWP(0,$inp));		# load Xi[16]
    257 	&mov	($Zhl,&DWP(4,$inp));
    258 	&mov	($Zlh,&DWP(8,$inp));
    259 	&mov	($Zll,&DWP(12,$inp));
    260 
    261 	&deposit_rem_4bit(16);
    262 
    263 	&mov	(&DWP(0,"esp"),$Zhh);		# copy Xi[16] on stack
    264 	&mov	(&DWP(4,"esp"),$Zhl);
    265 	&mov	(&DWP(8,"esp"),$Zlh);
    266 	&mov	(&DWP(12,"esp"),$Zll);
    267 	&shr	($Zll,20);
    268 	&and	($Zll,0xf0);
    269 
    270 	if ($unroll) {
    271 		&call	("_x86_gmult_4bit_inner");
    272 	} else {
    273 		&x86_loop(0);
    274 		&mov	($inp,&wparam(0));
    275 	}
    276 
    277 	&mov	(&DWP(12,$inp),$Zll);
    278 	&mov	(&DWP(8,$inp),$Zlh);
    279 	&mov	(&DWP(4,$inp),$Zhl);
    280 	&mov	(&DWP(0,$inp),$Zhh);
    281 	&stack_pop(16+4+1);
    282 &function_end("gcm_gmult_4bit".$suffix);
    283 
    284 &function_begin("gcm_ghash_4bit".$suffix);
    285 	&stack_push(16+4+1);			# +1 for 64-bit alignment
    286 	&mov	($Zll,&wparam(0));		# load Xi
    287 	&mov	($Htbl,&wparam(1));		# load Htable
    288 	&mov	($inp,&wparam(2));		# load in
    289 	&mov	("ecx",&wparam(3));		# load len
    290 	&add	("ecx",$inp);
    291 	&mov	(&wparam(3),"ecx");
    292 
    293 	&mov	($Zhh,&DWP(0,$Zll));		# load Xi[16]
    294 	&mov	($Zhl,&DWP(4,$Zll));
    295 	&mov	($Zlh,&DWP(8,$Zll));
    296 	&mov	($Zll,&DWP(12,$Zll));
    297 
    298 	&deposit_rem_4bit(16);
    299 
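	# Outer loop: one GHASH iteration, Xi = (Xi ^ inp[i])*H, per 16-byte
	# block of input, until inp reaches the inp+len bound computed above.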
    300     &set_label("x86_outer_loop",16);
    301 	&xor	($Zll,&DWP(12,$inp));		# xor with input
    302 	&xor	($Zlh,&DWP(8,$inp));
    303 	&xor	($Zhl,&DWP(4,$inp));
    304 	&xor	($Zhh,&DWP(0,$inp));
    305 	&mov	(&DWP(12,"esp"),$Zll);		# dump it on stack
    306 	&mov	(&DWP(8,"esp"),$Zlh);
    307 	&mov	(&DWP(4,"esp"),$Zhl);
    308 	&mov	(&DWP(0,"esp"),$Zhh);
    309 
    310 	&shr	($Zll,20);
    311 	&and	($Zll,0xf0);
    312 
    313 	if ($unroll) {
    314 		&call	("_x86_gmult_4bit_inner");
    315 	} else {
    316 		&x86_loop(0);
    317 		&mov	($inp,&wparam(2));
    318 	}
    319 	&lea	($inp,&DWP(16,$inp));
    320 	&cmp	($inp,&wparam(3));
    321 	&mov	(&wparam(2),$inp)	if (!$unroll);
    322 	&jb	(&label("x86_outer_loop"));
    323 
    324 	&mov	($inp,&wparam(0));	# load Xi
    325 	&mov	(&DWP(12,$inp),$Zll);
    326 	&mov	(&DWP(8,$inp),$Zlh);
    327 	&mov	(&DWP(4,$inp),$Zhl);
    328 	&mov	(&DWP(0,$inp),$Zhh);
    329 	&stack_pop(16+4+1);
    330 &function_end("gcm_ghash_4bit".$suffix);
    331 
    333 if (!$x86only) {{{
    334 
    335 &static_label("rem_4bit");
    336 
    337 if (!$sse2) {{	# pure-MMX "May" version...
    338 
    339 $S=12;		# shift factor for rem_4bit
    340 
    341 &function_begin_B("_mmx_gmult_4bit_inner");
     342 # MMX version performs 3.5 times better on P4 (see comment in non-MMX
     343 # routine for further details), 100% better on Opteron, ~70% better
     344 # on Core2 and PIII... In other words the effort is considered to be
     345 # well spent... Since the initial release the loop was unrolled in order
     346 # to "liberate" the register previously used as the loop counter. Instead
     347 # it's used to optimize the critical path in 'Z.hi ^= rem_4bit[Z.lo&0xf]'.
     348 # The path involves a move of Z.lo from an MMX to an integer register,
     349 # effective address calculation and finally a merge of the value into Z.hi.
     350 # The reference to rem_4bit is scheduled so late that I had to pre-shift
     351 # the rem_4bit elements right by 4 (>>4). This resulted in a 20-45%
     352 # improvement on contemporary µ-archs.
    353 {
    354     my $cnt;
    355     my $rem_4bit = "eax";
    356     my @rem = ($Zhh,$Zll);
    357     my $nhi = $Zhl;
    358     my $nlo = $Zlh;
    359 
    360     my ($Zlo,$Zhi) = ("mm0","mm1");
    361     my $tmp = "mm2";
    362 
    363 	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
    364 	&mov	($nhi,$Zll);
    365 	&mov	(&LB($nlo),&LB($nhi));
    366 	&shl	(&LB($nlo),4);
    367 	&and	($nhi,0xf0);
    368 	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
    369 	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
    370 	&movd	($rem[0],$Zlo);
    371 
    372 	for ($cnt=28;$cnt>=-2;$cnt--) {
    373 	    my $odd = $cnt&1;
    374 	    my $nix = $odd ? $nlo : $nhi;
    375 
    376 		&shl	(&LB($nlo),4)			if ($odd);
    377 		&psrlq	($Zlo,4);
    378 		&movq	($tmp,$Zhi);
    379 		&psrlq	($Zhi,4);
    380 		&pxor	($Zlo,&QWP(8,$Htbl,$nix));
    381 		&mov	(&LB($nlo),&BP($cnt/2,$inp))	if (!$odd && $cnt>=0);
    382 		&psllq	($tmp,60);
    383 		&and	($nhi,0xf0)			if ($odd);
    384 		&pxor	($Zhi,&QWP(0,$rem_4bit,$rem[1],8)) if ($cnt<28);
    385 		&and	($rem[0],0xf);
    386 		&pxor	($Zhi,&QWP(0,$Htbl,$nix));
    387 		&mov	($nhi,$nlo)			if (!$odd && $cnt>=0);
    388 		&movd	($rem[1],$Zlo);
    389 		&pxor	($Zlo,$tmp);
    390 
    391 		push	(@rem,shift(@rem));		# "rotate" registers
    392 	}
    393 
    394 	&mov	($inp,&DWP(4,$rem_4bit,$rem[1],8));	# last rem_4bit[rem]
    395 
    396 	&psrlq	($Zlo,32);	# lower part of Zlo is already there
    397 	&movd	($Zhl,$Zhi);
    398 	&psrlq	($Zhi,32);
    399 	&movd	($Zlh,$Zlo);
    400 	&movd	($Zhh,$Zhi);
    401 	&shl	($inp,4);	# compensate for rem_4bit[i] being >>4
    402 
    403 	&bswap	($Zll);
    404 	&bswap	($Zhl);
    405 	&bswap	($Zlh);
    406 	&xor	($Zhh,$inp);
    407 	&bswap	($Zhh);
    408 
    409 	&ret	();
    410 }
    411 &function_end_B("_mmx_gmult_4bit_inner");
    412 
    413 &function_begin("gcm_gmult_4bit_mmx");
    414 	&mov	($inp,&wparam(0));	# load Xi
    415 	&mov	($Htbl,&wparam(1));	# load Htable
    416 
    417 	&call	(&label("pic_point"));
    418 	&set_label("pic_point");
    419 	&blindpop("eax");
    420 	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
    421 
    422 	&movz	($Zll,&BP(15,$inp));
    423 
    424 	&call	("_mmx_gmult_4bit_inner");
    425 
    426 	&mov	($inp,&wparam(0));	# load Xi
    427 	&emms	();
    428 	&mov	(&DWP(12,$inp),$Zll);
    429 	&mov	(&DWP(4,$inp),$Zhl);
    430 	&mov	(&DWP(8,$inp),$Zlh);
    431 	&mov	(&DWP(0,$inp),$Zhh);
    432 &function_end("gcm_gmult_4bit_mmx");
    433 
    435 # Streamed version performs 20% better on P4, 7% on Opteron,
    436 # 10% on Core2 and PIII...
    437 &function_begin("gcm_ghash_4bit_mmx");
    438 	&mov	($Zhh,&wparam(0));	# load Xi
    439 	&mov	($Htbl,&wparam(1));	# load Htable
    440 	&mov	($inp,&wparam(2));	# load in
    441 	&mov	($Zlh,&wparam(3));	# load len
    442 
    443 	&call	(&label("pic_point"));
    444 	&set_label("pic_point");
    445 	&blindpop("eax");
    446 	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
    447 
    448 	&add	($Zlh,$inp);
    449 	&mov	(&wparam(3),$Zlh);	# len to point at the end of input
    450 	&stack_push(4+1);		# +1 for stack alignment
    451 
    452 	&mov	($Zll,&DWP(12,$Zhh));	# load Xi[16]
    453 	&mov	($Zhl,&DWP(4,$Zhh));
    454 	&mov	($Zlh,&DWP(8,$Zhh));
    455 	&mov	($Zhh,&DWP(0,$Zhh));
    456 	&jmp	(&label("mmx_outer_loop"));
    457 
    458     &set_label("mmx_outer_loop",16);
    459 	&xor	($Zll,&DWP(12,$inp));
    460 	&xor	($Zhl,&DWP(4,$inp));
    461 	&xor	($Zlh,&DWP(8,$inp));
    462 	&xor	($Zhh,&DWP(0,$inp));
    463 	&mov	(&wparam(2),$inp);
    464 	&mov	(&DWP(12,"esp"),$Zll);
    465 	&mov	(&DWP(4,"esp"),$Zhl);
    466 	&mov	(&DWP(8,"esp"),$Zlh);
    467 	&mov	(&DWP(0,"esp"),$Zhh);
    468 
    469 	&mov	($inp,"esp");
    470 	&shr	($Zll,24);
    471 
    472 	&call	("_mmx_gmult_4bit_inner");
    473 
    474 	&mov	($inp,&wparam(2));
    475 	&lea	($inp,&DWP(16,$inp));
    476 	&cmp	($inp,&wparam(3));
    477 	&jb	(&label("mmx_outer_loop"));
    478 
    479 	&mov	($inp,&wparam(0));	# load Xi
    480 	&emms	();
    481 	&mov	(&DWP(12,$inp),$Zll);
    482 	&mov	(&DWP(4,$inp),$Zhl);
    483 	&mov	(&DWP(8,$inp),$Zlh);
    484 	&mov	(&DWP(0,$inp),$Zhh);
    485 
    486 	&stack_pop(4+1);
    487 &function_end("gcm_ghash_4bit_mmx");
    488 
    490 }} else {{	# "June" MMX version...
    491 		# ... has slower "April" gcm_gmult_4bit_mmx with folded
    492 		# loop. This is done to conserve code size...
    493 $S=16;		# shift factor for rem_4bit
    494 
    495 sub mmx_loop() {
    496 # MMX version performs 2.8 times better on P4 (see comment in non-MMX
    497 # routine for further details), 40% better on Opteron and Core2, 50%
    498 # better on PIII... In other words effort is considered to be well
    499 # spent...
    500     my $inp = shift;
    501     my $rem_4bit = shift;
    502     my $cnt = $Zhh;
    503     my $nhi = $Zhl;
    504     my $nlo = $Zlh;
    505     my $rem = $Zll;
    506 
    507     my ($Zlo,$Zhi) = ("mm0","mm1");
    508     my $tmp = "mm2";
    509 
    510 	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
    511 	&mov	($nhi,$Zll);
    512 	&mov	(&LB($nlo),&LB($nhi));
    513 	&mov	($cnt,14);
    514 	&shl	(&LB($nlo),4);
    515 	&and	($nhi,0xf0);
    516 	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
    517 	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
    518 	&movd	($rem,$Zlo);
    519 	&jmp	(&label("mmx_loop"));
    520 
    521     &set_label("mmx_loop",16);
    522 	&psrlq	($Zlo,4);
    523 	&and	($rem,0xf);
    524 	&movq	($tmp,$Zhi);
    525 	&psrlq	($Zhi,4);
    526 	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
    527 	&mov	(&LB($nlo),&BP(0,$inp,$cnt));
    528 	&psllq	($tmp,60);
    529 	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
    530 	&dec	($cnt);
    531 	&movd	($rem,$Zlo);
    532 	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
    533 	&mov	($nhi,$nlo);
    534 	&pxor	($Zlo,$tmp);
    535 	&js	(&label("mmx_break"));
    536 
    537 	&shl	(&LB($nlo),4);
    538 	&and	($rem,0xf);
    539 	&psrlq	($Zlo,4);
    540 	&and	($nhi,0xf0);
    541 	&movq	($tmp,$Zhi);
    542 	&psrlq	($Zhi,4);
    543 	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
    544 	&psllq	($tmp,60);
    545 	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
    546 	&movd	($rem,$Zlo);
    547 	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
    548 	&pxor	($Zlo,$tmp);
    549 	&jmp	(&label("mmx_loop"));
    550 
    551     &set_label("mmx_break",16);
    552 	&shl	(&LB($nlo),4);
    553 	&and	($rem,0xf);
    554 	&psrlq	($Zlo,4);
    555 	&and	($nhi,0xf0);
    556 	&movq	($tmp,$Zhi);
    557 	&psrlq	($Zhi,4);
    558 	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
    559 	&psllq	($tmp,60);
    560 	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
    561 	&movd	($rem,$Zlo);
    562 	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
    563 	&pxor	($Zlo,$tmp);
    564 
    565 	&psrlq	($Zlo,4);
    566 	&and	($rem,0xf);
    567 	&movq	($tmp,$Zhi);
    568 	&psrlq	($Zhi,4);
    569 	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
    570 	&psllq	($tmp,60);
    571 	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
    572 	&movd	($rem,$Zlo);
    573 	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
    574 	&pxor	($Zlo,$tmp);
    575 
    576 	&psrlq	($Zlo,32);	# lower part of Zlo is already there
    577 	&movd	($Zhl,$Zhi);
    578 	&psrlq	($Zhi,32);
    579 	&movd	($Zlh,$Zlo);
    580 	&movd	($Zhh,$Zhi);
    581 
    582 	&bswap	($Zll);
    583 	&bswap	($Zhl);
    584 	&bswap	($Zlh);
    585 	&bswap	($Zhh);
    586 }
    587 
    588 &function_begin("gcm_gmult_4bit_mmx");
    589 	&mov	($inp,&wparam(0));	# load Xi
    590 	&mov	($Htbl,&wparam(1));	# load Htable
    591 
    592 	&call	(&label("pic_point"));
    593 	&set_label("pic_point");
    594 	&blindpop("eax");
    595 	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));
    596 
    597 	&movz	($Zll,&BP(15,$inp));
    598 
    599 	&mmx_loop($inp,"eax");
    600 
    601 	&emms	();
    602 	&mov	(&DWP(12,$inp),$Zll);
    603 	&mov	(&DWP(4,$inp),$Zhl);
    604 	&mov	(&DWP(8,$inp),$Zlh);
    605 	&mov	(&DWP(0,$inp),$Zhh);
    606 &function_end("gcm_gmult_4bit_mmx");
    607 
    609 ######################################################################
     610 # The subroutine below is the "528B" variant of the "4-bit" GCM GHASH
     611 # function (see gcm128.c for details). It provides a further 20-40%
     612 # performance improvement over the above mentioned "May" version.
    613 
    614 &static_label("rem_8bit");
    615 
    616 &function_begin("gcm_ghash_4bit_mmx");
    617 { my ($Zlo,$Zhi) = ("mm7","mm6");
    618   my $rem_8bit = "esi";
    619   my $Htbl = "ebx";
    620 
    621     # parameter block
    622     &mov	("eax",&wparam(0));		# Xi
    623     &mov	("ebx",&wparam(1));		# Htable
    624     &mov	("ecx",&wparam(2));		# inp
    625     &mov	("edx",&wparam(3));		# len
    626     &mov	("ebp","esp");			# original %esp
    627     &call	(&label("pic_point"));
    628     &set_label	("pic_point");
    629     &blindpop	($rem_8bit);
    630     &lea	($rem_8bit,&DWP(&label("rem_8bit")."-".&label("pic_point"),$rem_8bit));
    631 
    632     &sub	("esp",512+16+16);		# allocate stack frame...
    633     &and	("esp",-64);			# ...and align it
    634     &sub	("esp",16);			# place for (u8)(H[]<<4)
    635 
    636     &add	("edx","ecx");			# pointer to the end of input
    637     &mov	(&DWP(528+16+0,"esp"),"eax");	# save Xi
    638     &mov	(&DWP(528+16+8,"esp"),"edx");	# save inp+len
    639     &mov	(&DWP(528+16+12,"esp"),"ebp");	# save original %esp
    640 
    641     { my @lo  = ("mm0","mm1","mm2");
    642       my @hi  = ("mm3","mm4","mm5");
    643       my @tmp = ("mm6","mm7");
     644       my ($off1,$off2,$i)=(0,0);
    645 
    646       &add	($Htbl,128);			# optimize for size
    647       &lea	("edi",&DWP(16+128,"esp"));
    648       &lea	("ebp",&DWP(16+256+128,"esp"));
    649 
    650       # decompose Htable (low and high parts are kept separately),
    651       # generate Htable[]>>4, (u8)(Htable[]<<4), save to stack...
    652       for ($i=0;$i<18;$i++) {
    653 
    654 	&mov	("edx",&DWP(16*$i+8-128,$Htbl))		if ($i<16);
    655 	&movq	($lo[0],&QWP(16*$i+8-128,$Htbl))	if ($i<16);
    656 	&psllq	($tmp[1],60)				if ($i>1);
    657 	&movq	($hi[0],&QWP(16*$i+0-128,$Htbl))	if ($i<16);
    658 	&por	($lo[2],$tmp[1])			if ($i>1);
    659 	&movq	(&QWP($off1-128,"edi"),$lo[1])		if ($i>0 && $i<17);
    660 	&psrlq	($lo[1],4)				if ($i>0 && $i<17);
    661 	&movq	(&QWP($off1,"edi"),$hi[1])		if ($i>0 && $i<17);
    662 	&movq	($tmp[0],$hi[1])			if ($i>0 && $i<17);
    663 	&movq	(&QWP($off2-128,"ebp"),$lo[2])		if ($i>1);
    664 	&psrlq	($hi[1],4)				if ($i>0 && $i<17);
    665 	&movq	(&QWP($off2,"ebp"),$hi[2])		if ($i>1);
    666 	&shl	("edx",4)				if ($i<16);
    667 	&mov	(&BP($i,"esp"),&LB("edx"))		if ($i<16);
    668 
    669 	unshift	(@lo,pop(@lo));			# "rotate" registers
    670 	unshift	(@hi,pop(@hi));
    671 	unshift	(@tmp,pop(@tmp));
    672 	$off1 += 8	if ($i>0);
    673 	$off2 += 8	if ($i>1);
    674       }
    675     }
    676 
    677     &movq	($Zhi,&QWP(0,"eax"));
    678     &mov	("ebx",&DWP(8,"eax"));
    679     &mov	("edx",&DWP(12,"eax"));		# load Xi
    680 
    681 &set_label("outer",16);
    682   { my $nlo = "eax";
    683     my $dat = "edx";
    684     my @nhi = ("edi","ebp");
    685     my @rem = ("ebx","ecx");
    686     my @red = ("mm0","mm1","mm2");
    687     my $tmp = "mm3";
    688 
    689     &xor	($dat,&DWP(12,"ecx"));		# merge input data
    690     &xor	("ebx",&DWP(8,"ecx"));
    691     &pxor	($Zhi,&QWP(0,"ecx"));
    692     &lea	("ecx",&DWP(16,"ecx"));		# inp+=16
    693     #&mov	(&DWP(528+12,"esp"),$dat);	# save inp^Xi
    694     &mov	(&DWP(528+8,"esp"),"ebx");
    695     &movq	(&QWP(528+0,"esp"),$Zhi);
    696     &mov	(&DWP(528+16+4,"esp"),"ecx");	# save inp
    697 
    698     &xor	($nlo,$nlo);
    699     &rol	($dat,8);
    700     &mov	(&LB($nlo),&LB($dat));
    701     &mov	($nhi[1],$nlo);
    702     &and	(&LB($nlo),0x0f);
    703     &shr	($nhi[1],4);
    704     &pxor	($red[0],$red[0]);
    705     &rol	($dat,8);			# next byte
    706     &pxor	($red[1],$red[1]);
    707     &pxor	($red[2],$red[2]);
    708 
     709     # Just like in the "May" version, modulo-schedule the critical path
     710     # in 'Z.hi ^= rem_8bit[Z.lo&0xff^((u8)H[nhi]<<4)]<<48'. The final
     711     # 'pxor' is scheduled so late that rem_8bit[] has to be shifted
     712     # *right* by 16, which is why the last argument to pinsrw is 2,
     713     # corresponding to <<32=<<48>>16...
    714     for ($j=11,$i=0;$i<15;$i++) {
    715 
    716       if ($i>0) {
    717 	&pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
    718 	&rol	($dat,8);				# next byte
    719 	&pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));
    720 
    721 	&pxor	($Zlo,$tmp);
    722 	&pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
    723 	&xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)
    724       } else {
    725 	&movq	($Zlo,&QWP(16,"esp",$nlo,8));
    726 	&movq	($Zhi,&QWP(16+128,"esp",$nlo,8));
    727       }
    728 
    729 	&mov	(&LB($nlo),&LB($dat));
    730 	&mov	($dat,&DWP(528+$j,"esp"))		if (--$j%4==0);
    731 
    732 	&movd	($rem[0],$Zlo);
    733 	&movz	($rem[1],&LB($rem[1]))			if ($i>0);
    734 	&psrlq	($Zlo,8);				# Z>>=8
    735 
    736 	&movq	($tmp,$Zhi);
    737 	&mov	($nhi[0],$nlo);
    738 	&psrlq	($Zhi,8);
    739 
    740 	&pxor	($Zlo,&QWP(16+256+0,"esp",$nhi[1],8));	# Z^=H[nhi]>>4
    741 	&and	(&LB($nlo),0x0f);
    742 	&psllq	($tmp,56);
    743 
    744 	&pxor	($Zhi,$red[1])				if ($i>1);
    745 	&shr	($nhi[0],4);
    746 	&pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2)	if ($i>0);
    747 
    748 	unshift	(@red,pop(@red));			# "rotate" registers
    749 	unshift	(@rem,pop(@rem));
    750 	unshift	(@nhi,pop(@nhi));
    751     }
    752 
    753     &pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
    754     &pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));
    755     &xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)
    756 
    757     &pxor	($Zlo,$tmp);
    758     &pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
    759     &movz	($rem[1],&LB($rem[1]));
    760 
    761     &pxor	($red[2],$red[2]);			# clear 2nd word
    762     &psllq	($red[1],4);
    763 
    764     &movd	($rem[0],$Zlo);
    765     &psrlq	($Zlo,4);				# Z>>=4
    766 
    767     &movq	($tmp,$Zhi);
    768     &psrlq	($Zhi,4);
    769     &shl	($rem[0],4);				# rem<<4
    770 
    771     &pxor	($Zlo,&QWP(16,"esp",$nhi[1],8));	# Z^=H[nhi]
    772     &psllq	($tmp,60);
    773     &movz	($rem[0],&LB($rem[0]));
    774 
    775     &pxor	($Zlo,$tmp);
    776     &pxor	($Zhi,&QWP(16+128,"esp",$nhi[1],8));
    777 
    778     &pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2);
    779     &pxor	($Zhi,$red[1]);
    780 
    781     &movd	($dat,$Zlo);
    782     &pinsrw	($red[2],&WP(0,$rem_8bit,$rem[0],2),3);	# last is <<48
    783 
    784     &psllq	($red[0],12);				# correct by <<16>>4
    785     &pxor	($Zhi,$red[0]);
    786     &psrlq	($Zlo,32);
    787     &pxor	($Zhi,$red[2]);
    788 
    789     &mov	("ecx",&DWP(528+16+4,"esp"));	# restore inp
    790     &movd	("ebx",$Zlo);
    791     &movq	($tmp,$Zhi);			# 01234567
    792     &psllw	($Zhi,8);			# 1.3.5.7.
    793     &psrlw	($tmp,8);			# .0.2.4.6
    794     &por	($Zhi,$tmp);			# 10325476
    795     &bswap	($dat);
    796     &pshufw	($Zhi,$Zhi,0b00011011);		# 76543210
    797     &bswap	("ebx");
    798     
    799     &cmp	("ecx",&DWP(528+16+8,"esp"));	# are we done?
    800     &jne	(&label("outer"));
    801   }
    802 
    803     &mov	("eax",&DWP(528+16+0,"esp"));	# restore Xi
    804     &mov	(&DWP(12,"eax"),"edx");
    805     &mov	(&DWP(8,"eax"),"ebx");
    806     &movq	(&QWP(0,"eax"),$Zhi);
    807 
    808     &mov	("esp",&DWP(528+16+12,"esp"));	# restore original %esp
    809     &emms	();
    810 }
    811 &function_end("gcm_ghash_4bit_mmx");
    812 }}
    813 
    815 if ($sse2) {{
    816 ######################################################################
    817 # PCLMULQDQ version.
    818 
    819 $Xip="eax";
    820 $Htbl="edx";
    821 $const="ecx";
    822 $inp="esi";
    823 $len="ebx";
    824 
    825 ($Xi,$Xhi)=("xmm0","xmm1");	$Hkey="xmm2";
    826 ($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
    827 ($Xn,$Xhn)=("xmm6","xmm7");
    828 
    829 &static_label("bswap");
    830 
    831 sub clmul64x64_T2 {	# minimal "register" pressure
    832 my ($Xhi,$Xi,$Hkey)=@_;
    833 
    834 	&movdqa		($Xhi,$Xi);		#
    835 	&pshufd		($T1,$Xi,0b01001110);
    836 	&pshufd		($T2,$Hkey,0b01001110);
    837 	&pxor		($T1,$Xi);		#
    838 	&pxor		($T2,$Hkey);
    839 
    840 	&pclmulqdq	($Xi,$Hkey,0x00);	#######
    841 	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
    842 	&pclmulqdq	($T1,$T2,0x00);		#######
    843 	&xorps		($T1,$Xi);		#
    844 	&xorps		($T1,$Xhi);		#
    845 
    846 	&movdqa		($T2,$T1);		#
    847 	&psrldq		($T1,8);
    848 	&pslldq		($T2,8);		#
    849 	&pxor		($Xhi,$T1);
    850 	&pxor		($Xi,$T2);		#
    851 }
    852 
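# clmul64x64_T2 above (and clmul64x64_T3 below) implement Karatsuba over the
# 64-bit halves of the 128-bit operands: the three pclmulqdq's compute
# lo=Xl*Hl, hi=Xh*Hh and (Xl^Xh)*(Hl^Hh), the latter being xored with lo and
# hi to recover the middle term Xl*Hh^Xh*Hl, which is then folded in at bit
# offset 64. Here "*" denotes carry-less multiplication, i.e. (a sketch on
# 32-bit operands so that it fits in native integers, not used by this
# script):
#
#	sub clmul32 {
#	    my ($a,$b,$r) = ($_[0],$_[1],0);
#	    for my $i (0..31) { $r ^= $a<<$i if (($b>>$i)&1); }
#	    return $r;
#	}
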
    853 sub clmul64x64_T3 {
     854 # Even though this subroutine offers visually better ILP, it
     855 # was empirically found to be a tad slower than the version above,
     856 # at least in the gcm_ghash_clmul context. But it's just as well,
     857 # because loop modulo-scheduling is possible only thanks to
     858 # minimized "register" pressure...
    859 my ($Xhi,$Xi,$Hkey)=@_;
    860 
    861 	&movdqa		($T1,$Xi);		#
    862 	&movdqa		($Xhi,$Xi);
    863 	&pclmulqdq	($Xi,$Hkey,0x00);	#######
    864 	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
    865 	&pshufd		($T2,$T1,0b01001110);	#
    866 	&pshufd		($T3,$Hkey,0b01001110);
    867 	&pxor		($T2,$T1);		#
    868 	&pxor		($T3,$Hkey);
    869 	&pclmulqdq	($T2,$T3,0x00);		#######
    870 	&pxor		($T2,$Xi);		#
    871 	&pxor		($T2,$Xhi);		#
    872 
    873 	&movdqa		($T3,$T2);		#
    874 	&psrldq		($T2,8);
    875 	&pslldq		($T3,8);		#
    876 	&pxor		($Xhi,$T2);
    877 	&pxor		($Xi,$T3);		#
    878 }
    879 
    881 if (1) {		# Algorithm 9 with <<1 twist.
    882 			# Reduction is shorter and uses only two
    883 			# temporary registers, which makes it better
    884 			# candidate for interleaving with 64x64
    885 			# multiplication. Pre-modulo-scheduled loop
    886 			# was found to be ~20% faster than Algorithm 5
    887 			# below. Algorithm 9 was therefore chosen for
    888 			# further optimization...
    889 
    890 sub reduction_alg9 {	# 17/13 times faster than Intel version
    891 my ($Xhi,$Xi) = @_;
    892 
    893 	# 1st phase
     894 	&movdqa		($T1,$Xi);		#
    895 	&psllq		($Xi,1);
    896 	&pxor		($Xi,$T1);		#
    897 	&psllq		($Xi,5);		#
    898 	&pxor		($Xi,$T1);		#
    899 	&psllq		($Xi,57);		#
    900 	&movdqa		($T2,$Xi);		#
    901 	&pslldq		($Xi,8);
    902 	&psrldq		($T2,8);		#
    903 	&pxor		($Xi,$T1);
    904 	&pxor		($Xhi,$T2);		#
    905 
    906 	# 2nd phase
    907 	&movdqa		($T2,$Xi);
    908 	&psrlq		($Xi,5);
    909 	&pxor		($Xi,$T2);		#
    910 	&psrlq		($Xi,1);		#
    911 	&pxor		($Xi,$T2);		#
    912 	&pxor		($T2,$Xhi);
    913 	&psrlq		($Xi,1);		#
    914 	&pxor		($Xi,$T2);		#
    915 }
    916 
    917 &function_begin_B("gcm_init_clmul");
    918 	&mov		($Htbl,&wparam(0));
    919 	&mov		($Xip,&wparam(1));
    920 
    921 	&call		(&label("pic"));
    922 &set_label("pic");
    923 	&blindpop	($const);
    924 	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
    925 
    926 	&movdqu		($Hkey,&QWP(0,$Xip));
    927 	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap
    928 
    929 	# <<1 twist
    930 	&pshufd		($T2,$Hkey,0b11111111);	# broadcast uppermost dword
    931 	&movdqa		($T1,$Hkey);
    932 	&psllq		($Hkey,1);
    933 	&pxor		($T3,$T3);		#
    934 	&psrlq		($T1,63);
    935 	&pcmpgtd	($T3,$T2);		# broadcast carry bit
    936 	&pslldq		($T1,8);
    937 	&por		($Hkey,$T1);		# H<<=1
    938 
    939 	# magic reduction
    940 	&pand		($T3,&QWP(16,$const));	# 0x1c2_polynomial
    941 	&pxor		($Hkey,$T3);		# if(carry) H^=0x1c2_polynomial
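	# Scalar equivalent of the "<<1 twist" above (a sketch, in terms of the
	# 128-bit value as held in the register):
	#	carry = H>>127; H = (H<<1) ^ (carry ? POLY : 0);
	# where POLY is the 0x1c2_polynomial constant stored right after the
	# "bswap" mask below (0xc2 in the uppermost byte, 0x01 in the lowest).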
    942 
    943 	# calculate H^2
    944 	&movdqa		($Xi,$Hkey);
    945 	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
    946 	&reduction_alg9	($Xhi,$Xi);
    947 
    948 	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
    949 	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2
    950 
    951 	&ret		();
    952 &function_end_B("gcm_init_clmul");
    953 
    954 &function_begin_B("gcm_gmult_clmul");
    955 	&mov		($Xip,&wparam(0));
    956 	&mov		($Htbl,&wparam(1));
    957 
    958 	&call		(&label("pic"));
    959 &set_label("pic");
    960 	&blindpop	($const);
    961 	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
    962 
    963 	&movdqu		($Xi,&QWP(0,$Xip));
    964 	&movdqa		($T3,&QWP(0,$const));
    965 	&movups		($Hkey,&QWP(0,$Htbl));
    966 	&pshufb		($Xi,$T3);
    967 
    968 	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
    969 	&reduction_alg9	($Xhi,$Xi);
    970 
    971 	&pshufb		($Xi,$T3);
    972 	&movdqu		(&QWP(0,$Xip),$Xi);
    973 
    974 	&ret	();
    975 &function_end_B("gcm_gmult_clmul");
    976 
    977 &function_begin("gcm_ghash_clmul");
    978 	&mov		($Xip,&wparam(0));
    979 	&mov		($Htbl,&wparam(1));
    980 	&mov		($inp,&wparam(2));
    981 	&mov		($len,&wparam(3));
    982 
    983 	&call		(&label("pic"));
    984 &set_label("pic");
    985 	&blindpop	($const);
    986 	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
    987 
    988 	&movdqu		($Xi,&QWP(0,$Xip));
    989 	&movdqa		($T3,&QWP(0,$const));
    990 	&movdqu		($Hkey,&QWP(0,$Htbl));
    991 	&pshufb		($Xi,$T3);
    992 
    993 	&sub		($len,0x10);
    994 	&jz		(&label("odd_tail"));
    995 
    996 	#######
    997 	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
    998 	#	[(H*Ii+1) + (H*Xi+1)] mod P =
    999 	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
   1000 	#
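	# i.e. Naggr=2: input is consumed in pairs of blocks with one reduction
	# per pair, and in mod_loop below that reduction is interleaved with the
	# next H*Ii+1 multiplication (cf. the discussion at the top of the file).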
   1001 	&movdqu		($T1,&QWP(0,$inp));	# Ii
   1002 	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
   1003 	&pshufb		($T1,$T3);
   1004 	&pshufb		($Xn,$T3);
   1005 	&pxor		($Xi,$T1);		# Ii+Xi
   1006 
   1007 	&clmul64x64_T2	($Xhn,$Xn,$Hkey);	# H*Ii+1
   1008 	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
   1009 
   1010 	&lea		($inp,&DWP(32,$inp));	# i+=2
   1011 	&sub		($len,0x20);
   1012 	&jbe		(&label("even_tail"));
   1013 
   1014 &set_label("mod_loop");
   1015 	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
   1016 	&movdqu		($T1,&QWP(0,$inp));	# Ii
   1017 	&movups		($Hkey,&QWP(0,$Htbl));	# load H
   1018 
   1019 	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
   1020 	&pxor		($Xhi,$Xhn);
   1021 
   1022 	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
   1023 	&pshufb		($T1,$T3);
   1024 	&pshufb		($Xn,$T3);
   1025 
   1026 	&movdqa		($T3,$Xn);		#&clmul64x64_TX	($Xhn,$Xn,$Hkey); H*Ii+1
   1027 	&movdqa		($Xhn,$Xn);
   1028 	 &pxor		($Xhi,$T1);		# "Ii+Xi", consume early
   1029 
    1030 	  &movdqa	($T1,$Xi);		#&reduction_alg9($Xhi,$Xi); 1st phase
   1031 	  &psllq	($Xi,1);
   1032 	  &pxor		($Xi,$T1);		#
   1033 	  &psllq	($Xi,5);		#
   1034 	  &pxor		($Xi,$T1);		#
   1035 	&pclmulqdq	($Xn,$Hkey,0x00);	#######
   1036 	  &psllq	($Xi,57);		#
   1037 	  &movdqa	($T2,$Xi);		#
   1038 	  &pslldq	($Xi,8);
   1039 	  &psrldq	($T2,8);		#	
   1040 	  &pxor		($Xi,$T1);
   1041 	&pshufd		($T1,$T3,0b01001110);
   1042 	  &pxor		($Xhi,$T2);		#
   1043 	&pxor		($T1,$T3);
   1044 	&pshufd		($T3,$Hkey,0b01001110);
   1045 	&pxor		($T3,$Hkey);		#
   1046 
   1047 	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
   1048 	  &movdqa	($T2,$Xi);		# 2nd phase
   1049 	  &psrlq	($Xi,5);
   1050 	  &pxor		($Xi,$T2);		#
   1051 	  &psrlq	($Xi,1);		#
   1052 	  &pxor		($Xi,$T2);		#
   1053 	  &pxor		($T2,$Xhi);
   1054 	  &psrlq	($Xi,1);		#
   1055 	  &pxor		($Xi,$T2);		#
   1056 
   1057 	&pclmulqdq	($T1,$T3,0x00);		#######
   1058 	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
   1059 	&xorps		($T1,$Xn);		#
   1060 	&xorps		($T1,$Xhn);		#
   1061 
   1062 	&movdqa		($T3,$T1);		#
   1063 	&psrldq		($T1,8);
   1064 	&pslldq		($T3,8);		#
   1065 	&pxor		($Xhn,$T1);
   1066 	&pxor		($Xn,$T3);		#
   1067 	&movdqa		($T3,&QWP(0,$const));
   1068 
   1069 	&lea		($inp,&DWP(32,$inp));
   1070 	&sub		($len,0x20);
   1071 	&ja		(&label("mod_loop"));
   1072 
   1073 &set_label("even_tail");
   1074 	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
   1075 
   1076 	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
   1077 	&pxor		($Xhi,$Xhn);
   1078 
   1079 	&reduction_alg9	($Xhi,$Xi);
   1080 
   1081 	&test		($len,$len);
   1082 	&jnz		(&label("done"));
   1083 
   1084 	&movups		($Hkey,&QWP(0,$Htbl));	# load H
   1085 &set_label("odd_tail");
   1086 	&movdqu		($T1,&QWP(0,$inp));	# Ii
   1087 	&pshufb		($T1,$T3);
   1088 	&pxor		($Xi,$T1);		# Ii+Xi
   1089 
   1090 	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
   1091 	&reduction_alg9	($Xhi,$Xi);
   1092 
   1093 &set_label("done");
   1094 	&pshufb		($Xi,$T3);
   1095 	&movdqu		(&QWP(0,$Xip),$Xi);
   1096 &function_end("gcm_ghash_clmul");
   1097 
    1099 } else {		# Algorithm 5. Kept for reference purposes.
   1100 
   1101 sub reduction_alg5 {	# 19/16 times faster than Intel version
   1102 my ($Xhi,$Xi)=@_;
   1103 
   1104 	# <<1
   1105 	&movdqa		($T1,$Xi);		#
   1106 	&movdqa		($T2,$Xhi);
   1107 	&pslld		($Xi,1);
   1108 	&pslld		($Xhi,1);		#
   1109 	&psrld		($T1,31);
   1110 	&psrld		($T2,31);		#
   1111 	&movdqa		($T3,$T1);
   1112 	&pslldq		($T1,4);
   1113 	&psrldq		($T3,12);		#
   1114 	&pslldq		($T2,4);
   1115 	&por		($Xhi,$T3);		#
   1116 	&por		($Xi,$T1);
   1117 	&por		($Xhi,$T2);		#
   1118 
   1119 	# 1st phase
   1120 	&movdqa		($T1,$Xi);
   1121 	&movdqa		($T2,$Xi);
   1122 	&movdqa		($T3,$Xi);		#
   1123 	&pslld		($T1,31);
   1124 	&pslld		($T2,30);
   1125 	&pslld		($Xi,25);		#
   1126 	&pxor		($T1,$T2);
   1127 	&pxor		($T1,$Xi);		#
   1128 	&movdqa		($T2,$T1);		#
   1129 	&pslldq		($T1,12);
   1130 	&psrldq		($T2,4);		#
   1131 	&pxor		($T3,$T1);
   1132 
   1133 	# 2nd phase
   1134 	&pxor		($Xhi,$T3);		#
   1135 	&movdqa		($Xi,$T3);
   1136 	&movdqa		($T1,$T3);
   1137 	&psrld		($Xi,1);		#
   1138 	&psrld		($T1,2);
   1139 	&psrld		($T3,7);		#
   1140 	&pxor		($Xi,$T1);
   1141 	&pxor		($Xhi,$T2);
   1142 	&pxor		($Xi,$T3);		#
   1143 	&pxor		($Xi,$Xhi);		#
   1144 }
   1145 
   1146 &function_begin_B("gcm_init_clmul");
   1147 	&mov		($Htbl,&wparam(0));
   1148 	&mov		($Xip,&wparam(1));
   1149 
   1150 	&call		(&label("pic"));
   1151 &set_label("pic");
   1152 	&blindpop	($const);
   1153 	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
   1154 
   1155 	&movdqu		($Hkey,&QWP(0,$Xip));
   1156 	&pshufd		($Hkey,$Hkey,0b01001110);# dword swap
   1157 
   1158 	# calculate H^2
   1159 	&movdqa		($Xi,$Hkey);
   1160 	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
   1161 	&reduction_alg5	($Xhi,$Xi);
   1162 
   1163 	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
   1164 	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2
   1165 
   1166 	&ret		();
   1167 &function_end_B("gcm_init_clmul");
   1168 
   1169 &function_begin_B("gcm_gmult_clmul");
   1170 	&mov		($Xip,&wparam(0));
   1171 	&mov		($Htbl,&wparam(1));
   1172 
   1173 	&call		(&label("pic"));
   1174 &set_label("pic");
   1175 	&blindpop	($const);
   1176 	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
   1177 
   1178 	&movdqu		($Xi,&QWP(0,$Xip));
   1179 	&movdqa		($Xn,&QWP(0,$const));
   1180 	&movdqu		($Hkey,&QWP(0,$Htbl));
   1181 	&pshufb		($Xi,$Xn);
   1182 
   1183 	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
   1184 	&reduction_alg5	($Xhi,$Xi);
   1185 
   1186 	&pshufb		($Xi,$Xn);
   1187 	&movdqu		(&QWP(0,$Xip),$Xi);
   1188 
   1189 	&ret	();
   1190 &function_end_B("gcm_gmult_clmul");
   1191 
   1192 &function_begin("gcm_ghash_clmul");
   1193 	&mov		($Xip,&wparam(0));
   1194 	&mov		($Htbl,&wparam(1));
   1195 	&mov		($inp,&wparam(2));
   1196 	&mov		($len,&wparam(3));
   1197 
   1198 	&call		(&label("pic"));
   1199 &set_label("pic");
   1200 	&blindpop	($const);
   1201 	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));
   1202 
   1203 	&movdqu		($Xi,&QWP(0,$Xip));
   1204 	&movdqa		($T3,&QWP(0,$const));
   1205 	&movdqu		($Hkey,&QWP(0,$Htbl));
   1206 	&pshufb		($Xi,$T3);
   1207 
   1208 	&sub		($len,0x10);
   1209 	&jz		(&label("odd_tail"));
   1210 
   1211 	#######
   1212 	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
   1213 	#	[(H*Ii+1) + (H*Xi+1)] mod P =
   1214 	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
   1215 	#
   1216 	&movdqu		($T1,&QWP(0,$inp));	# Ii
   1217 	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
   1218 	&pshufb		($T1,$T3);
   1219 	&pshufb		($Xn,$T3);
   1220 	&pxor		($Xi,$T1);		# Ii+Xi
   1221 
   1222 	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
   1223 	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2
   1224 
   1225 	&sub		($len,0x20);
   1226 	&lea		($inp,&DWP(32,$inp));	# i+=2
   1227 	&jbe		(&label("even_tail"));
   1228 
   1229 &set_label("mod_loop");
   1230 	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
   1231 	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H
   1232 
   1233 	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
   1234 	&pxor		($Xhi,$Xhn);
   1235 
   1236 	&reduction_alg5	($Xhi,$Xi);
   1237 
   1238 	#######
   1239 	&movdqa		($T3,&QWP(0,$const));
   1240 	&movdqu		($T1,&QWP(0,$inp));	# Ii
   1241 	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
   1242 	&pshufb		($T1,$T3);
   1243 	&pshufb		($Xn,$T3);
   1244 	&pxor		($Xi,$T1);		# Ii+Xi
   1245 
   1246 	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
   1247 	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2
   1248 
   1249 	&sub		($len,0x20);
   1250 	&lea		($inp,&DWP(32,$inp));
   1251 	&ja		(&label("mod_loop"));
   1252 
   1253 &set_label("even_tail");
   1254 	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
   1255 
   1256 	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
   1257 	&pxor		($Xhi,$Xhn);
   1258 
   1259 	&reduction_alg5	($Xhi,$Xi);
   1260 
   1261 	&movdqa		($T3,&QWP(0,$const));
   1262 	&test		($len,$len);
   1263 	&jnz		(&label("done"));
   1264 
   1265 	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H
   1266 &set_label("odd_tail");
   1267 	&movdqu		($T1,&QWP(0,$inp));	# Ii
   1268 	&pshufb		($T1,$T3);
   1269 	&pxor		($Xi,$T1);		# Ii+Xi
   1270 
   1271 	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
   1272 	&reduction_alg5	($Xhi,$Xi);
   1273 
   1274 	&movdqa		($T3,&QWP(0,$const));
   1275 &set_label("done");
   1276 	&pshufb		($Xi,$T3);
   1277 	&movdqu		(&QWP(0,$Xip),$Xi);
   1278 &function_end("gcm_ghash_clmul");
   1279 
   1280 }
   1281 
   1283 &set_label("bswap",64);
   1284 	&data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
   1285 	&data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2);	# 0x1c2_polynomial
   1286 }}	# $sse2
   1287 
   1288 &set_label("rem_4bit",64);
   1289 	&data_word(0,0x0000<<$S,0,0x1C20<<$S,0,0x3840<<$S,0,0x2460<<$S);
   1290 	&data_word(0,0x7080<<$S,0,0x6CA0<<$S,0,0x48C0<<$S,0,0x54E0<<$S);
   1291 	&data_word(0,0xE100<<$S,0,0xFD20<<$S,0,0xD940<<$S,0,0xC560<<$S);
   1292 	&data_word(0,0x9180<<$S,0,0x8DA0<<$S,0,0xA9C0<<$S,0,0xB5E0<<$S);
   1293 &set_label("rem_8bit",64);
   1294 	&data_short(0x0000,0x01C2,0x0384,0x0246,0x0708,0x06CA,0x048C,0x054E);
   1295 	&data_short(0x0E10,0x0FD2,0x0D94,0x0C56,0x0918,0x08DA,0x0A9C,0x0B5E);
   1296 	&data_short(0x1C20,0x1DE2,0x1FA4,0x1E66,0x1B28,0x1AEA,0x18AC,0x196E);
   1297 	&data_short(0x1230,0x13F2,0x11B4,0x1076,0x1538,0x14FA,0x16BC,0x177E);
   1298 	&data_short(0x3840,0x3982,0x3BC4,0x3A06,0x3F48,0x3E8A,0x3CCC,0x3D0E);
   1299 	&data_short(0x3650,0x3792,0x35D4,0x3416,0x3158,0x309A,0x32DC,0x331E);
   1300 	&data_short(0x2460,0x25A2,0x27E4,0x2626,0x2368,0x22AA,0x20EC,0x212E);
   1301 	&data_short(0x2A70,0x2BB2,0x29F4,0x2836,0x2D78,0x2CBA,0x2EFC,0x2F3E);
   1302 	&data_short(0x7080,0x7142,0x7304,0x72C6,0x7788,0x764A,0x740C,0x75CE);
   1303 	&data_short(0x7E90,0x7F52,0x7D14,0x7CD6,0x7998,0x785A,0x7A1C,0x7BDE);
   1304 	&data_short(0x6CA0,0x6D62,0x6F24,0x6EE6,0x6BA8,0x6A6A,0x682C,0x69EE);
   1305 	&data_short(0x62B0,0x6372,0x6134,0x60F6,0x65B8,0x647A,0x663C,0x67FE);
   1306 	&data_short(0x48C0,0x4902,0x4B44,0x4A86,0x4FC8,0x4E0A,0x4C4C,0x4D8E);
   1307 	&data_short(0x46D0,0x4712,0x4554,0x4496,0x41D8,0x401A,0x425C,0x439E);
   1308 	&data_short(0x54E0,0x5522,0x5764,0x56A6,0x53E8,0x522A,0x506C,0x51AE);
   1309 	&data_short(0x5AF0,0x5B32,0x5974,0x58B6,0x5DF8,0x5C3A,0x5E7C,0x5FBE);
   1310 	&data_short(0xE100,0xE0C2,0xE284,0xE346,0xE608,0xE7CA,0xE58C,0xE44E);
   1311 	&data_short(0xEF10,0xEED2,0xEC94,0xED56,0xE818,0xE9DA,0xEB9C,0xEA5E);
   1312 	&data_short(0xFD20,0xFCE2,0xFEA4,0xFF66,0xFA28,0xFBEA,0xF9AC,0xF86E);
   1313 	&data_short(0xF330,0xF2F2,0xF0B4,0xF176,0xF438,0xF5FA,0xF7BC,0xF67E);
   1314 	&data_short(0xD940,0xD882,0xDAC4,0xDB06,0xDE48,0xDF8A,0xDDCC,0xDC0E);
   1315 	&data_short(0xD750,0xD692,0xD4D4,0xD516,0xD058,0xD19A,0xD3DC,0xD21E);
   1316 	&data_short(0xC560,0xC4A2,0xC6E4,0xC726,0xC268,0xC3AA,0xC1EC,0xC02E);
   1317 	&data_short(0xCB70,0xCAB2,0xC8F4,0xC936,0xCC78,0xCDBA,0xCFFC,0xCE3E);
   1318 	&data_short(0x9180,0x9042,0x9204,0x93C6,0x9688,0x974A,0x950C,0x94CE);
   1319 	&data_short(0x9F90,0x9E52,0x9C14,0x9DD6,0x9898,0x995A,0x9B1C,0x9ADE);
   1320 	&data_short(0x8DA0,0x8C62,0x8E24,0x8FE6,0x8AA8,0x8B6A,0x892C,0x88EE);
   1321 	&data_short(0x83B0,0x8272,0x8034,0x81F6,0x84B8,0x857A,0x873C,0x86FE);
   1322 	&data_short(0xA9C0,0xA802,0xAA44,0xAB86,0xAEC8,0xAF0A,0xAD4C,0xAC8E);
   1323 	&data_short(0xA7D0,0xA612,0xA454,0xA596,0xA0D8,0xA11A,0xA35C,0xA29E);
   1324 	&data_short(0xB5E0,0xB422,0xB664,0xB7A6,0xB2E8,0xB32A,0xB16C,0xB0AE);
   1325 	&data_short(0xBBF0,0xBA32,0xB874,0xB9B6,0xBCF8,0xBD3A,0xBF7C,0xBEBE);
   1326 }}}	# !$x86only
   1327 
   1328 &asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
   1329 &asm_finish();
   1330 
    1331 # A question was raised about the choice of vanilla MMX. Or rather, why
    1332 # wasn't SSE2 chosen instead? In addition to the fact that MMX runs on
    1333 # legacy CPUs such as PIII, the "4-bit" MMX version was observed to provide
    1334 # better performance than the *corresponding* SSE2 one even on contemporary
    1335 # CPUs. SSE2 results were provided by Peter-Michael Hager. He maintains an
    1336 # SSE2 implementation featuring a full range of lookup-table sizes, but with
    1337 # per-invocation lookup table setup. The latter means that the table size is
    1338 # chosen depending on how much data is to be hashed in any given call:
    1339 # more data, larger table. The best reported result for Core2 is ~4 cycles
    1340 # per processed byte out of a 64KB block. This number accounts even for the
    1341 # 64KB table setup overhead. As discussed in gcm128.c, we choose to be
    1342 # more conservative with respect to lookup table sizes, but how do the
    1343 # results compare? The minimalistic "256B" MMX version delivers ~11 cycles
    1344 # on the same platform. As also discussed in gcm128.c, the next-in-line
    1345 # "8-bit Shoup's" or "4KB" method should deliver twice the performance of
    1346 # the "256B" one, in other words no worse than ~6 cycles per byte. It
    1347 # should also be noted that in the SSE2 case the improvement can be "super-
    1348 # linear," i.e. more than twice, mostly because >>8 maps to a single
    1349 # instruction on an SSE2 register. This is unlike the "4-bit" case, where
    1350 # >>4 maps to the same number of instructions in both MMX and SSE2.
    1351 # The bottom line is that a switch to SSE2 is considered justifiable
    1352 # only in case we choose to implement the "8-bit" method...
   1353