#!/usr/bin/env perl
#
# ====================================================================
# Written by Andy Polyakov <appro@openssl.org> for the OpenSSL
# project. The module is, however, dual licensed under OpenSSL and
# CRYPTOGAMS licenses depending on where you obtain it. For further
# details see http://www.openssl.org/~appro/cryptogams/.
# ====================================================================
#
# March, May, June 2010
#
# The module implements "4-bit" GCM GHASH function and underlying
# single multiplication operation in GF(2^128). "4-bit" means that it
# uses 256 bytes per-key table [+64/128 bytes fixed table]. It has two
# code paths: vanilla x86 and vanilla SSE. Former will be executed on
# 486 and Pentium, latter on all others. SSE GHASH features so called
# "528B" variant of "4-bit" method utilizing additional 256+16 bytes
# of per-key storage [+512 bytes shared table]. Performance results
# are for streamed GHASH subroutine and are expressed in cycles per
# processed byte, less is better:
#
#		gcc 2.95.3(*)	SSE assembler	x86 assembler
#
# Pentium	105/111(**)	-		50
# PIII		68 /75		12.2		24
# P4		125/125		17.8		84(***)
# Opteron	66 /70		10.1		30
# Core2		54 /67		8.4		18
# Atom		105/105		16.8		53
# VIA Nano	69 /71		13.0		27
#
# (*)	gcc 3.4.x was observed to generate a few percent slower code,
#	which is one of the reasons why 2.95.3 results were chosen;
#	another reason is lack of 3.4.x results for older CPUs;
#	comparison with SSE results is not completely fair, because C
#	results are for vanilla "256B" implementation, while
#	assembler results are for "528B";-)
# (**)	second number is result for code compiled with -fPIC flag,
#	which is actually more relevant, because assembler code is
#	position-independent;
# (***)	see comment in non-MMX routine for further details;
#
# To summarize, it's >2-5 times faster than gcc-generated code. To
# anchor it to something else, SHA1 assembler processes one byte in
# ~7 cycles on contemporary x86 cores. As for choice of MMX/SSE
# in particular, see comment at the end of the file...

# May 2010
#
# Add PCLMULQDQ version performing at 2.10 cycles per processed byte.
# The question is how close is it to theoretical limit? The pclmulqdq
# instruction latency appears to be 14 cycles and there can't be more
# than 2 of them executing at any given time. This means that single
# Karatsuba multiplication would take 28 cycles *plus* a few cycles
# for pre- and post-processing. Then multiplication has to be
# followed by modulo-reduction. Given that aggregated reduction
# method [see "Carry-less Multiplication and Its Usage for Computing
# the GCM Mode" white paper by Intel] allows you to perform reduction
# only once in a while, we can assume that asymptotic performance can
# be estimated as (28+Tmod/Naggr)/16, where Tmod is time to perform
# reduction and Naggr is the aggregation factor.
#
# Before we proceed to this implementation let's have closer look at
# the best-performing code suggested by Intel in their white paper.
# By tracing inter-register dependencies Tmod is estimated as ~19
# cycles and Naggr chosen by Intel is 4, resulting in 2.05 cycles per
# processed byte. As implied, this is quite an optimistic estimate,
# because it does not account for Karatsuba pre- and post-processing,
# which for a single multiplication is ~5 cycles. Unfortunately Intel
# does not provide performance data for GHASH alone. But benchmarking
# AES_GCM_encrypt ripped out of Fig. 15 of the white paper with aadt
# alone resulted in 2.46 cycles per byte out of a 16KB buffer. Note
# that the result accounts even for pre-computing of degrees of the
# hash key H, but its portion is negligible at 16KB buffer size.
#
# Moving on to the implementation in question. Tmod is estimated as
# ~13 cycles and Naggr is 2, giving asymptotic performance of ...
# 2.16. How is it possible that measured performance is better than
# optimistic theoretical estimate? There is one thing Intel failed
# to recognize. By serializing GHASH with CTR in same subroutine
# former's performance is really limited to above (Tmul + Tmod/Naggr)
# equation. But if GHASH procedure is detached, the modulo-reduction
# can be interleaved with Naggr-1 multiplications at instruction level
# and under ideal conditions even disappear from the equation. So that
# optimistic theoretical estimate for this implementation is ...
# 28/16=1.75, and not 2.16. Well, it's probably way too optimistic,
# at least for such small Naggr. I'd argue that (28+Tproc/Naggr),
# where Tproc is time required for Karatsuba pre- and post-processing,
# is a more realistic estimate. In this case it gives ... 1.91 cycles.
# Or in other words, depending on how well we can interleave reduction
# and one of the two multiplications the performance should be between
# 1.91 and 2.16. As already mentioned, this implementation processes
# one byte out of 8KB buffer in 2.10 cycles, while x86_64 counterpart
# - in 2.02. x86_64 performance is better, because larger register
# bank allows to interleave reduction and multiplication better.
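#
# For the record, plugging the numbers above into these estimates
# (cpb = cycles per processed byte):
#
#	Intel's code:		(28 + 19/4)/16 = 32.75/16 ~ 2.05 cpb
#	this implementation:	(28 + 13/2)/16 = 34.50/16 ~ 2.16 cpb
#	detached GHASH:		(28 +  5/2)/16 = 30.50/16 ~ 1.91 cpb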
#
# Does it make sense to increase Naggr? To start with it's virtually
# impossible in 32-bit mode, because of limited register bank
# capacity. Otherwise improvement has to be weighed against slower
# setup, as well as code size and complexity increase. As even
# optimistic estimate doesn't promise 30% performance improvement,
# there are currently no plans to increase Naggr.
#
# Special thanks to David Woodhouse <dwmw2@infradead.org> for
# providing access to a Westmere-based system on behalf of Intel
# Open Source Technology Centre.

# January 2011
#
# Tweaked to optimize transitions between integer and FP operations
# on same XMM register, PCLMULQDQ subroutine was measured to process
# one byte in 2.07 cycles on Sandy Bridge, and in 2.12 - on Westmere.
# The minor regression on Westmere is outweighed by ~15% improvement
# on Sandy Bridge. Strangely enough attempt to modify 64-bit code in
# similar manner resulted in almost 20% degradation on Sandy Bridge,
# where original 64-bit code processes one byte in 1.95 cycles.

#####################################################################
# For reference, AMD Bulldozer processes one byte in 1.98 cycles in
# 32-bit mode and 1.89 in 64-bit.

# February 2013
#
# Overhaul: aggregate Karatsuba post-processing, improve ILP in
# reduction_alg9. Resulting performance is 1.96 cycles per byte on
# Westmere, 1.95 - on Sandy/Ivy Bridge, 1.76 - on Bulldozer.
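# Note on conventions: Htable, as referenced throughout, is the
# 256-byte per-key table prepared by gcm_init_4bit in gcm128.c --
# 16 entries of 16 bytes, each entry a 128-bit multiple of the hash
# key H split into two 64-bit halves (see gcm128.c for the exact
# indexing); rem_4bit[] holds the reduction constants for the 4 bits
# shifted out on every step.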

$0 =~ m/(.*[\/\\])[^\/\\]+$/; $dir=$1;
push(@INC,"${dir}","${dir}../../perlasm");
require "x86asm.pl";

&asm_init($ARGV[0],"ghash-x86.pl",$x86only = $ARGV[$#ARGV] eq "386");

$sse2=1;
#for (@ARGV) { $sse2=1 if (/-DOPENSSL_IA32_SSE2/); }

($Zhh,$Zhl,$Zlh,$Zll) = ("ebp","edx","ecx","ebx");
$inp  = "edi";
$Htbl = "esi";

$unroll = 0;	# Affects x86 loop. Folded loop performs ~7% worse
		# than unrolled, which has to be weighed against
		# 2.5x x86-specific code size reduction.

sub x86_loop {
    my $off = shift;
    my $rem = "eax";

	&mov	($Zhh,&DWP(4,$Htbl,$Zll));
	&mov	($Zhl,&DWP(0,$Htbl,$Zll));
	&mov	($Zlh,&DWP(12,$Htbl,$Zll));
	&mov	($Zll,&DWP(8,$Htbl,$Zll));
	&xor	($rem,$rem);	# avoid partial register stalls on PIII

	# shrd practically kills P4, 2.5x deterioration, but P4 has
	# MMX code-path to execute. shrd runs a tad faster [than twice
	# the shifts, move's and or's] on pre-MMX Pentium (as well as
	# PIII and Core2), *but* minimizes code size, spares register
	# and thus allows to fold the loop...
	if (!$unroll) {
	my $cnt = $inp;

	&mov	($cnt,15);
	&jmp	(&label("x86_loop"));
    &set_label("x86_loop",16);
	for($i=1;$i<=2;$i++) {
	    &mov	(&LB($rem),&LB($Zll));
	    &shrd	($Zll,$Zlh,4);
	    &and	(&LB($rem),0xf);
	    &shrd	($Zlh,$Zhl,4);
	    &shrd	($Zhl,$Zhh,4);
	    &shr	($Zhh,4);
	    &xor	($Zhh,&DWP($off+16,"esp",$rem,4));

	    &mov	(&LB($rem),&BP($off,"esp",$cnt));
	    if ($i&1) {
		&and	(&LB($rem),0xf0);
	    } else {
		&shl	(&LB($rem),4);
	    }

	    &xor	($Zll,&DWP(8,$Htbl,$rem));
	    &xor	($Zlh,&DWP(12,$Htbl,$rem));
	    &xor	($Zhl,&DWP(0,$Htbl,$rem));
	    &xor	($Zhh,&DWP(4,$Htbl,$rem));

	    if ($i&1) {
		&dec	($cnt);
		&js	(&label("x86_break"));
	    } else {
		&jmp	(&label("x86_loop"));
	    }
	}
    &set_label("x86_break",16);
	} else {
	for($i=1;$i<32;$i++) {
	    &comment($i);
	    &mov	(&LB($rem),&LB($Zll));
	    &shrd	($Zll,$Zlh,4);
	    &and	(&LB($rem),0xf);
	    &shrd	($Zlh,$Zhl,4);
	    &shrd	($Zhl,$Zhh,4);
	    &shr	($Zhh,4);
	    &xor	($Zhh,&DWP($off+16,"esp",$rem,4));

	    if ($i&1) {
		&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
		&and	(&LB($rem),0xf0);
	    } else {
		&mov	(&LB($rem),&BP($off+15-($i>>1),"esp"));
		&shl	(&LB($rem),4);
	    }

	    &xor	($Zll,&DWP(8,$Htbl,$rem));
	    &xor	($Zlh,&DWP(12,$Htbl,$rem));
	    &xor	($Zhl,&DWP(0,$Htbl,$rem));
	    &xor	($Zhh,&DWP(4,$Htbl,$rem));
	}
	}
	&bswap	($Zll);
	&bswap	($Zlh);
	&bswap	($Zhl);
	if (!$x86only) {
	    &bswap	($Zhh);
	} else {
	    &mov	("eax",$Zhh);
	    &bswap	("eax");
	    &mov	($Zhh,"eax");
	}
}

if ($unroll) {
    &function_begin_B("_x86_gmult_4bit_inner");
	&x86_loop(4);
	&ret	();
    &function_end_B("_x86_gmult_4bit_inner");
}

sub deposit_rem_4bit {
    my $bias = shift;

	&mov	(&DWP($bias+0, "esp"),0x0000<<16);
	&mov	(&DWP($bias+4, "esp"),0x1C20<<16);
	&mov	(&DWP($bias+8, "esp"),0x3840<<16);
	&mov	(&DWP($bias+12,"esp"),0x2460<<16);
	&mov	(&DWP($bias+16,"esp"),0x7080<<16);
	&mov	(&DWP($bias+20,"esp"),0x6CA0<<16);
	&mov	(&DWP($bias+24,"esp"),0x48C0<<16);
	&mov	(&DWP($bias+28,"esp"),0x54E0<<16);
	&mov	(&DWP($bias+32,"esp"),0xE100<<16);
	&mov	(&DWP($bias+36,"esp"),0xFD20<<16);
	&mov	(&DWP($bias+40,"esp"),0xD940<<16);
	&mov	(&DWP($bias+44,"esp"),0xC560<<16);
	&mov	(&DWP($bias+48,"esp"),0x9180<<16);
	&mov	(&DWP($bias+52,"esp"),0x8DA0<<16);
	&mov	(&DWP($bias+56,"esp"),0xA9C0<<16);
	&mov	(&DWP($bias+60,"esp"),0xB5E0<<16);
}
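# The sixteen constants deposited above are rem_4bit[] with the <<16
# shift factor pre-applied. Each entry is the carry-less product of
# its index and 0x1C2, shifted left by 4. A minimal Perl sketch of
# how such a table can be regenerated (illustrative only, not used
# by this module):
#
#	my @rem_4bit = map {
#	    my ($n,$r) = ($_,0);
#	    for my $bit (0..3) { $r ^= 0x1C2<<$bit if (($n>>$bit)&1); }
#	    $r<<4;
#	} (0..15);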

$suffix = $x86only ? "" : "_x86";

&function_begin("gcm_gmult_4bit".$suffix);
	&stack_push(16+4+1);			# +1 for stack alignment
	&mov	($inp,&wparam(0));		# load Xi
	&mov	($Htbl,&wparam(1));		# load Htable

	&mov	($Zhh,&DWP(0,$inp));		# load Xi[16]
	&mov	($Zhl,&DWP(4,$inp));
	&mov	($Zlh,&DWP(8,$inp));
	&mov	($Zll,&DWP(12,$inp));

	&deposit_rem_4bit(16);

	&mov	(&DWP(0,"esp"),$Zhh);		# copy Xi[16] on stack
	&mov	(&DWP(4,"esp"),$Zhl);
	&mov	(&DWP(8,"esp"),$Zlh);
	&mov	(&DWP(12,"esp"),$Zll);
	&shr	($Zll,20);
	&and	($Zll,0xf0);

	if ($unroll) {
		&call	("_x86_gmult_4bit_inner");
	} else {
		&x86_loop(0);
		&mov	($inp,&wparam(0));
	}

	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(0,$inp),$Zhh);
	&stack_pop(16+4+1);
&function_end("gcm_gmult_4bit".$suffix);

&function_begin("gcm_ghash_4bit".$suffix);
	&stack_push(16+4+1);			# +1 for 64-bit alignment
	&mov	($Zll,&wparam(0));		# load Xi
	&mov	($Htbl,&wparam(1));		# load Htable
	&mov	($inp,&wparam(2));		# load in
	&mov	("ecx",&wparam(3));		# load len
	&add	("ecx",$inp);
	&mov	(&wparam(3),"ecx");

	&mov	($Zhh,&DWP(0,$Zll));		# load Xi[16]
	&mov	($Zhl,&DWP(4,$Zll));
	&mov	($Zlh,&DWP(8,$Zll));
	&mov	($Zll,&DWP(12,$Zll));

	&deposit_rem_4bit(16);

&set_label("x86_outer_loop",16);
	&xor	($Zll,&DWP(12,$inp));		# xor with input
	&xor	($Zlh,&DWP(8,$inp));
	&xor	($Zhl,&DWP(4,$inp));
	&xor	($Zhh,&DWP(0,$inp));
	&mov	(&DWP(12,"esp"),$Zll);		# dump it on stack
	&mov	(&DWP(8,"esp"),$Zlh);
	&mov	(&DWP(4,"esp"),$Zhl);
	&mov	(&DWP(0,"esp"),$Zhh);

	&shr	($Zll,20);
	&and	($Zll,0xf0);

	if ($unroll) {
		&call	("_x86_gmult_4bit_inner");
	} else {
		&x86_loop(0);
		&mov	($inp,&wparam(2));
	}
	&lea	($inp,&DWP(16,$inp));
	&cmp	($inp,&wparam(3));
	&mov	(&wparam(2),$inp)	if (!$unroll);
	&jb	(&label("x86_outer_loop"));

	&mov	($inp,&wparam(0));		# load Xi
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(0,$inp),$Zhh);
	&stack_pop(16+4+1);
&function_end("gcm_ghash_4bit".$suffix);

if (!$x86only) {{{

&static_label("rem_4bit");

if (!$sse2) {{	# pure-MMX "May" version...

$S=12;		# shift factor for rem_4bit

&function_begin_B("_mmx_gmult_4bit_inner");
# MMX version performs 3.5 times better on P4 (see comment in non-MMX
# routine for further details), 100% better on Opteron, ~70% better
# on Core2 and PIII... In other words effort is considered to be well
# spent... Since initial release the loop was unrolled in order to
# "liberate" register previously used as loop counter. Instead it's
# used to optimize critical path in 'Z.hi ^= rem_4bit[Z.lo&0xf]'.
# The path involves move of Z.lo from MMX to integer register,
# effective address calculation and finally merge of value to Z.hi.
# Reference to rem_4bit is scheduled so late that I had to >>4
# rem_4bit elements. This resulted in 20-45% improvement
# on contemporary µ-archs.
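#
# In scalar terms every 4-bit step of the loop below computes, for
# the next nibble n of Xi (a rough model, not the exact scheduling):
#
#	rem   = Z.lo & 0xf;
#	Z     = Z >> 4;
#	Z.hi ^= rem_4bit[rem];
#	Z    ^= Htable[n];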
{   my $cnt;
    my $rem_4bit = "eax";
    my @rem = ($Zhh,$Zll);
    my $nhi = $Zhl;
    my $nlo = $Zlh;

    my ($Zlo,$Zhi) = ("mm0","mm1");
    my $tmp = "mm2";

	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
	&mov	($nhi,$Zll);
	&mov	(&LB($nlo),&LB($nhi));
	&shl	(&LB($nlo),4);
	&and	($nhi,0xf0);
	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
	&movd	($rem[0],$Zlo);

	for ($cnt=28;$cnt>=-2;$cnt--) {
	    my $odd = $cnt&1;
	    my $nix = $odd ? $nlo : $nhi;

		&shl	(&LB($nlo),4)			if ($odd);
		&psrlq	($Zlo,4);
		&movq	($tmp,$Zhi);
		&psrlq	($Zhi,4);
		&pxor	($Zlo,&QWP(8,$Htbl,$nix));
		&mov	(&LB($nlo),&BP($cnt/2,$inp))	if (!$odd && $cnt>=0);
		&psllq	($tmp,60);
		&and	($nhi,0xf0)			if ($odd);
		&pxor	($Zhi,&QWP(0,$rem_4bit,$rem[1],8)) if ($cnt<28);
		&and	($rem[0],0xf);
		&pxor	($Zhi,&QWP(0,$Htbl,$nix));
		&mov	($nhi,$nlo)			if (!$odd && $cnt>=0);
		&movd	($rem[1],$Zlo);
		&pxor	($Zlo,$tmp);

		push	(@rem,shift(@rem));		# "rotate" registers
	}

	&mov	($inp,&DWP(4,$rem_4bit,$rem[1],8));	# last rem_4bit[rem]

	&psrlq	($Zlo,32);	# lower part of Zlo is already there
	&movd	($Zhl,$Zhi);
	&psrlq	($Zhi,32);
	&movd	($Zlh,$Zlo);
	&movd	($Zhh,$Zhi);
	&shl	($inp,4);	# compensate for rem_4bit[i] being >>4

	&bswap	($Zll);
	&bswap	($Zhl);
	&bswap	($Zlh);
	&xor	($Zhh,$inp);
	&bswap	($Zhh);

	&ret	();
}
&function_end_B("_mmx_gmult_4bit_inner");

&function_begin("gcm_gmult_4bit_mmx");
	&mov	($inp,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&movz	($Zll,&BP(15,$inp));

	&call	("_mmx_gmult_4bit_inner");

	&mov	($inp,&wparam(0));	# load Xi
	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);
&function_end("gcm_gmult_4bit_mmx");

# Streamed version performs 20% better on P4, 7% on Opteron,
# 10% on Core2 and PIII...
&function_begin("gcm_ghash_4bit_mmx");
	&mov	($Zhh,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable
	&mov	($inp,&wparam(2));	# load in
	&mov	($Zlh,&wparam(3));	# load len

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&add	($Zlh,$inp);
	&mov	(&wparam(3),$Zlh);	# len to point at the end of input
	&stack_push(4+1);		# +1 for stack alignment

	&mov	($Zll,&DWP(12,$Zhh));	# load Xi[16]
	&mov	($Zhl,&DWP(4,$Zhh));
	&mov	($Zlh,&DWP(8,$Zhh));
	&mov	($Zhh,&DWP(0,$Zhh));
	&jmp	(&label("mmx_outer_loop"));

    &set_label("mmx_outer_loop",16);
	&xor	($Zll,&DWP(12,$inp));
	&xor	($Zhl,&DWP(4,$inp));
	&xor	($Zlh,&DWP(8,$inp));
	&xor	($Zhh,&DWP(0,$inp));
	&mov	(&wparam(2),$inp);
	&mov	(&DWP(12,"esp"),$Zll);
	&mov	(&DWP(4,"esp"),$Zhl);
	&mov	(&DWP(8,"esp"),$Zlh);
	&mov	(&DWP(0,"esp"),$Zhh);

	&mov	($inp,"esp");
	&shr	($Zll,24);

	&call	("_mmx_gmult_4bit_inner");

	&mov	($inp,&wparam(2));
	&lea	($inp,&DWP(16,$inp));
	&cmp	($inp,&wparam(3));
	&jb	(&label("mmx_outer_loop"));

	&mov	($inp,&wparam(0));	# load Xi
	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);

	&stack_pop(4+1);
&function_end("gcm_ghash_4bit_mmx");

}} else {{	# "June" MMX version...
		# ... has slower "April" gcm_gmult_4bit_mmx with folded
		# loop. This is done to conserve code size...
$S=16;		# shift factor for rem_4bit

sub mmx_loop() {
# MMX version performs 2.8 times better on P4 (see comment in non-MMX
# routine for further details), 40% better on Opteron and Core2, 50%
# better on PIII... In other words effort is considered to be well
# spent...
    my $inp = shift;
    my $rem_4bit = shift;
    my $cnt = $Zhh;
    my $nhi = $Zhl;
    my $nlo = $Zlh;
    my $rem = $Zll;

    my ($Zlo,$Zhi) = ("mm0","mm1");
    my $tmp = "mm2";

	&xor	($nlo,$nlo);	# avoid partial register stalls on PIII
	&mov	($nhi,$Zll);
	&mov	(&LB($nlo),&LB($nhi));
	&mov	($cnt,14);
	&shl	(&LB($nlo),4);
	&and	($nhi,0xf0);
	&movq	($Zlo,&QWP(8,$Htbl,$nlo));
	&movq	($Zhi,&QWP(0,$Htbl,$nlo));
	&movd	($rem,$Zlo);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_loop",16);
	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&mov	(&LB($nlo),&BP(0,$inp,$cnt));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&dec	($cnt);
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&mov	($nhi,$nlo);
	&pxor	($Zlo,$tmp);
	&js	(&label("mmx_break"));

	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);
	&jmp	(&label("mmx_loop"));

    &set_label("mmx_break",16);
	&shl	(&LB($nlo),4);
	&and	($rem,0xf);
	&psrlq	($Zlo,4);
	&and	($nhi,0xf0);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nlo));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nlo));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,4);
	&and	($rem,0xf);
	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&pxor	($Zlo,&QWP(8,$Htbl,$nhi));
	&psllq	($tmp,60);
	&pxor	($Zhi,&QWP(0,$rem_4bit,$rem,8));
	&movd	($rem,$Zlo);
	&pxor	($Zhi,&QWP(0,$Htbl,$nhi));
	&pxor	($Zlo,$tmp);

	&psrlq	($Zlo,32);	# lower part of Zlo is already there
	&movd	($Zhl,$Zhi);
	&psrlq	($Zhi,32);
	&movd	($Zlh,$Zlo);
	&movd	($Zhh,$Zhi);

	&bswap	($Zll);
	&bswap	($Zhl);
	&bswap	($Zlh);
	&bswap	($Zhh);
}

&function_begin("gcm_gmult_4bit_mmx");
	&mov	($inp,&wparam(0));	# load Xi
	&mov	($Htbl,&wparam(1));	# load Htable

	&call	(&label("pic_point"));
	&set_label("pic_point");
	&blindpop("eax");
	&lea	("eax",&DWP(&label("rem_4bit")."-".&label("pic_point"),"eax"));

	&movz	($Zll,&BP(15,$inp));

	&mmx_loop($inp,"eax");

	&emms	();
	&mov	(&DWP(12,$inp),$Zll);
	&mov	(&DWP(4,$inp),$Zhl);
	&mov	(&DWP(8,$inp),$Zlh);
	&mov	(&DWP(0,$inp),$Zhh);
&function_end("gcm_gmult_4bit_mmx");

######################################################################
# Below subroutine is "528B" variant of "4-bit" GCM GHASH function
# (see gcm128.c for details). It provides further 20-40% performance
# improvement over above mentioned "May" version.

&static_label("rem_8bit");

&function_begin("gcm_ghash_4bit_mmx");
{ my ($Zlo,$Zhi) = ("mm7","mm6");
  my $rem_8bit = "esi";
  my $Htbl = "ebx";

    # parameter block
    &mov	("eax",&wparam(0));	# Xi
    &mov	("ebx",&wparam(1));	# Htable
    &mov	("ecx",&wparam(2));	# inp
    &mov	("edx",&wparam(3));	# len
    &mov	("ebp","esp");		# original %esp
    &call	(&label("pic_point"));
    &set_label	("pic_point");
    &blindpop	($rem_8bit);
    &lea	($rem_8bit,&DWP(&label("rem_8bit")."-".&label("pic_point"),$rem_8bit));

    &sub	("esp",512+16+16);	# allocate stack frame...
    &and	("esp",-64);		# ...and align it
    &sub	("esp",16);		# place for (u8)(H[]<<4)

    &add	("edx","ecx");		# pointer to the end of input
    &mov	(&DWP(528+16+0,"esp"),"eax");	# save Xi
    &mov	(&DWP(528+16+8,"esp"),"edx");	# save inp+len
    &mov	(&DWP(528+16+12,"esp"),"ebp");	# save original %esp
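    # Resulting frame layout, derived from the offsets used below
    # (all relative to the aligned %esp):
    #
    #	  0..15		(u8)(H[i]<<4), one byte per table entry
    #	 16..271	Htable, low and high halves kept separately
    #	272..527	Htable>>4, laid out the same way
    #	528..543	Xi^inp scratch
    #	544..559	saved Xi pointer, inp, inp+len, original %esp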

{ my @lo  = ("mm0","mm1","mm2");
  my @hi  = ("mm3","mm4","mm5");
  my @tmp = ("mm6","mm7");
  my ($off1,$off2,$i) = (0,0,);

  &add	($Htbl,128);			# optimize for size
  &lea	("edi",&DWP(16+128,"esp"));
  &lea	("ebp",&DWP(16+256+128,"esp"));

  # decompose Htable (low and high parts are kept separately),
  # generate Htable[]>>4, (u8)(Htable[]<<4), save to stack...
  for ($i=0;$i<18;$i++) {

    &mov	("edx",&DWP(16*$i+8-128,$Htbl))		if ($i<16);
    &movq	($lo[0],&QWP(16*$i+8-128,$Htbl))	if ($i<16);
    &psllq	($tmp[1],60)				if ($i>1);
    &movq	($hi[0],&QWP(16*$i+0-128,$Htbl))	if ($i<16);
    &por	($lo[2],$tmp[1])			if ($i>1);
    &movq	(&QWP($off1-128,"edi"),$lo[1])		if ($i>0 && $i<17);
    &psrlq	($lo[1],4)				if ($i>0 && $i<17);
    &movq	(&QWP($off1,"edi"),$hi[1])		if ($i>0 && $i<17);
    &movq	($tmp[0],$hi[1])			if ($i>0 && $i<17);
    &movq	(&QWP($off2-128,"ebp"),$lo[2])		if ($i>1);
    &psrlq	($hi[1],4)				if ($i>0 && $i<17);
    &movq	(&QWP($off2,"ebp"),$hi[2])		if ($i>1);
    &shl	("edx",4)				if ($i<16);
    &mov	(&BP($i,"esp"),&LB("edx"))		if ($i<16);

    unshift	(@lo,pop(@lo));			# "rotate" registers
    unshift	(@hi,pop(@hi));
    unshift	(@tmp,pop(@tmp));
    $off1 += 8	if ($i>0);
    $off2 += 8	if ($i>1);
  }
}

  &movq	($Zhi,&QWP(0,"eax"));
  &mov	("ebx",&DWP(8,"eax"));
  &mov	("edx",&DWP(12,"eax"));		# load Xi

&set_label("outer",16);
{ my $nlo = "eax";
  my $dat = "edx";
  my @nhi = ("edi","ebp");
  my @rem = ("ebx","ecx");
  my @red = ("mm0","mm1","mm2");
  my $tmp = "mm3";

    &xor	($dat,&DWP(12,"ecx"));	# merge input data
    &xor	("ebx",&DWP(8,"ecx"));
    &pxor	($Zhi,&QWP(0,"ecx"));
    &lea	("ecx",&DWP(16,"ecx"));	# inp+=16
    #&mov	(&DWP(528+12,"esp"),$dat);	# save inp^Xi
    &mov	(&DWP(528+8,"esp"),"ebx");
    &movq	(&QWP(528+0,"esp"),$Zhi);
    &mov	(&DWP(528+16+4,"esp"),"ecx");	# save inp

    &xor	($nlo,$nlo);
    &rol	($dat,8);
    &mov	(&LB($nlo),&LB($dat));
    &mov	($nhi[1],$nlo);
    &and	(&LB($nlo),0x0f);
    &shr	($nhi[1],4);
    &pxor	($red[0],$red[0]);
    &rol	($dat,8);		# next byte
    &pxor	($red[1],$red[1]);
    &pxor	($red[2],$red[2]);

    # Just like in "May" version modulo-schedule for critical path in
    # 'Z.hi ^= rem_8bit[Z.lo&0xff^((u8)H[nhi]<<4)]<<48'. Final 'pxor'
    # is scheduled so late that rem_8bit[] has to be shifted *right*
    # by 16, which is why last argument to pinsrw is 2, which
    # corresponds to <<32=<<48>>16...
    for ($j=11,$i=0;$i<15;$i++) {

      if ($i>0) {
	&pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
	&rol	($dat,8);				# next byte
	&pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));

	&pxor	($Zlo,$tmp);
	&pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
	&xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)
      } else {
	&movq	($Zlo,&QWP(16,"esp",$nlo,8));
	&movq	($Zhi,&QWP(16+128,"esp",$nlo,8));
      }

	&mov	(&LB($nlo),&LB($dat));
	&mov	($dat,&DWP(528+$j,"esp"))		if (--$j%4==0);

	&movd	($rem[0],$Zlo);
	&movz	($rem[1],&LB($rem[1]))			if ($i>0);
	&psrlq	($Zlo,8);				# Z>>=8

	&movq	($tmp,$Zhi);
	&mov	($nhi[0],$nlo);
	&psrlq	($Zhi,8);

	&pxor	($Zlo,&QWP(16+256+0,"esp",$nhi[1],8));	# Z^=H[nhi]>>4
	&and	(&LB($nlo),0x0f);
	&psllq	($tmp,56);

	&pxor	($Zhi,$red[1])				if ($i>1);
	&shr	($nhi[0],4);
	&pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2)	if ($i>0);

	unshift	(@red,pop(@red));			# "rotate" registers
	unshift	(@rem,pop(@rem));
	unshift	(@nhi,pop(@nhi));
    }

	&pxor	($Zlo,&QWP(16,"esp",$nlo,8));		# Z^=H[nlo]
	&pxor	($Zhi,&QWP(16+128,"esp",$nlo,8));
	&xor	(&LB($rem[1]),&BP(0,"esp",$nhi[0]));	# rem^(H[nhi]<<4)

	&pxor	($Zlo,$tmp);
	&pxor	($Zhi,&QWP(16+256+128,"esp",$nhi[0],8));
	&movz	($rem[1],&LB($rem[1]));

	&pxor	($red[2],$red[2]);	# clear 2nd word
	&psllq	($red[1],4);

	&movd	($rem[0],$Zlo);
	&psrlq	($Zlo,4);				# Z>>=4

	&movq	($tmp,$Zhi);
	&psrlq	($Zhi,4);
	&shl	($rem[0],4);				# rem<<4

	&pxor	($Zlo,&QWP(16,"esp",$nhi[1],8));	# Z^=H[nhi]
	&psllq	($tmp,60);
	&movz	($rem[0],&LB($rem[0]));

	&pxor	($Zlo,$tmp);
	&pxor	($Zhi,&QWP(16+128,"esp",$nhi[1],8));

	&pinsrw	($red[0],&WP(0,$rem_8bit,$rem[1],2),2);
	&pxor	($Zhi,$red[1]);

	&movd	($dat,$Zlo);
	&pinsrw	($red[2],&WP(0,$rem_8bit,$rem[0],2),3);	# last is <<48

	&psllq	($red[0],12);				# correct by <<16>>4
	&pxor	($Zhi,$red[0]);
	&psrlq	($Zlo,32);
	&pxor	($Zhi,$red[2]);

	&mov	("ecx",&DWP(528+16+4,"esp"));	# restore inp
	&movd	("ebx",$Zlo);
	&movq	($tmp,$Zhi);			# 01234567
	&psllw	($Zhi,8);			# 1.3.5.7.
	&psrlw	($tmp,8);			# .0.2.4.6
	&por	($Zhi,$tmp);			# 10325476
	&bswap	($dat);
	&pshufw	($Zhi,$Zhi,0b00011011);		# 76543210
	&bswap	("ebx");

	&cmp	("ecx",&DWP(528+16+8,"esp"));	# are we done?
	&jne	(&label("outer"));
}

	&mov	("eax",&DWP(528+16+0,"esp"));	# restore Xi
	&mov	(&DWP(12,"eax"),"edx");
	&mov	(&DWP(8,"eax"),"ebx");
	&movq	(&QWP(0,"eax"),$Zhi);

	&mov	("esp",&DWP(528+16+12,"esp"));	# restore original %esp
	&emms	();
}
&function_end("gcm_ghash_4bit_mmx");
}}

if ($sse2) {{
######################################################################
# PCLMULQDQ version.
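#
# A quick reminder of what the 64x64 building blocks below compute:
# with X = Xh:Xl and H = Hh:Hl, the carry-less product is assembled
# by Karatsuba as
#
#	X*H = Xh*Hh<<128 ^ Xl*Hl
#	    ^ ((Xh^Xl)*(Hh^Hl) ^ Xh*Hh ^ Xl*Hl)<<64
#
# i.e. three pclmulqdq instead of four.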

$Xip="eax";
$Htbl="edx";
$const="ecx";
$inp="esi";
$len="ebx";

($Xi,$Xhi)=("xmm0","xmm1");	$Hkey="xmm2";
($T1,$T2,$T3)=("xmm3","xmm4","xmm5");
($Xn,$Xhn)=("xmm6","xmm7");

&static_label("bswap");

sub clmul64x64_T2 {	# minimal "register" pressure
my ($Xhi,$Xi,$Hkey,$HK)=@_;

	&movdqa		($Xhi,$Xi);		#
	&pshufd		($T1,$Xi,0b01001110);
	&pshufd		($T2,$Hkey,0b01001110)	if (!defined($HK));
	&pxor		($T1,$Xi);		#
	&pxor		($T2,$Hkey)		if (!defined($HK));
			$HK=$T2			if (!defined($HK));

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$HK,0x00);		#######
	&xorps		($T1,$Xi);		#
	&xorps		($T1,$Xhi);		#

	&movdqa		($T2,$T1);		#
	&psrldq		($T1,8);
	&pslldq		($T2,8);		#
	&pxor		($Xhi,$T1);
	&pxor		($Xi,$T2);		#
}

sub clmul64x64_T3 {
# Even though this subroutine offers visually better ILP, it
# was empirically found to be a tad slower than above version.
# At least in gcm_ghash_clmul context. But it's just as well,
# because loop modulo-scheduling is possible only thanks to
# minimized "register" pressure...
my ($Xhi,$Xi,$Hkey)=@_;

	&movdqa		($T1,$Xi);		#
	&movdqa		($Xhi,$Xi);
	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pshufd		($T2,$T1,0b01001110);	#
	&pshufd		($T3,$Hkey,0b01001110);
	&pxor		($T2,$T1);		#
	&pxor		($T3,$Hkey);
	&pclmulqdq	($T2,$T3,0x00);		#######
	&pxor		($T2,$Xi);		#
	&pxor		($T2,$Xhi);		#

	&movdqa		($T3,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T3,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
}

if (1) {		# Algorithm 9 with <<1 twist.
			# Reduction is shorter and uses only two
			# temporary registers, which makes it better
			# candidate for interleaving with 64x64
			# multiplication. Pre-modulo-scheduled loop
			# was found to be ~20% faster than Algorithm 5
			# below. Algorithm 9 was therefore chosen for
			# further optimization...
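#
# The reduction below is performed modulo the GHASH polynomial
# g(x) = x^128 + x^7 + x^2 + x + 1 in bit-reflected representation;
# 0xE1 is the byte-reflected encoding of x^7+x^2+x+1, and the <<1
# twist mentioned above is what turns it into the 0x1c2 constant
# kept at the end of this file. See the Intel white paper referred
# to earlier for the derivation of the shift amounts.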

sub reduction_alg9 {	# 17/11 times faster than Intel version
my ($Xhi,$Xi) = @_;

	# 1st phase
	&movdqa		($T2,$Xi);		#
	&movdqa		($T1,$Xi);
	&psllq		($Xi,5);
	&pxor		($T1,$Xi);		#
	&psllq		($Xi,1);
	&pxor		($Xi,$T1);		#
	&psllq		($Xi,57);		#
	&movdqa		($T1,$Xi);		#
	&pslldq		($Xi,8);
	&psrldq		($T1,8);		#
	&pxor		($Xi,$T2);
	&pxor		($Xhi,$T1);		#

	# 2nd phase
	&movdqa		($T2,$Xi);
	&psrlq		($Xi,1);
	&pxor		($Xhi,$T2);		#
	&pxor		($T2,$Xi);
	&psrlq		($Xi,5);
	&pxor		($Xi,$T2);		#
	&psrlq		($Xi,1);		#
	&pxor		($Xi,$Xhi)		#
}

&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);	# dword swap

	# <<1 twist
	&pshufd		($T2,$Hkey,0b11111111);	# broadcast uppermost dword
	&movdqa		($T1,$Hkey);
	&psllq		($Hkey,1);
	&pxor		($T3,$T3);		#
	&psrlq		($T1,63);
	&pcmpgtd	($T3,$T2);		# broadcast carry bit
	&pslldq		($T1,8);
	&por		($Hkey,$T1);		# H<<=1

	# magic reduction
	&pand		($T3,&QWP(16,$const));	# 0x1c2_polynomial
	&pxor		($Hkey,$T3);		# if(carry) H^=0x1c2_polynomial

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T2	($Xhi,$Xi,$Hkey);
	&reduction_alg9	($Xhi,$Xi);

	&pshufd		($T1,$Hkey,0b01001110);
	&pshufd		($T2,$Xi,0b01001110);
	&pxor		($T1,$Hkey);		# Karatsuba pre-processing
	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&pxor		($T2,$Xi);		# Karatsuba pre-processing
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2
	&palignr	($T2,$T1,8);		# low part is H.lo^H.hi
	&movdqu		(&QWP(32,$Htbl),$T2);	# save Karatsuba "salt"

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movups		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);
	&movups		($T2,&QWP(32,$Htbl));

	&clmul64x64_T2	($Xhi,$Xi,$Hkey,$T2);
	&reduction_alg9	($Xhi,$Xi);

	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&movdqu		($T3,&QWP(32,$Htbl));
	&pxor		($Xi,$T1);		# Ii+Xi

	&pshufd		($T1,$Xn,0b01001110);	# H*Ii+1
	&movdqa		($Xhn,$Xn);
	&pxor		($T1,$Xn);		#
	&lea		($inp,&DWP(32,$inp));	# i+=2

	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&pclmulqdq	($T1,$T3,0x00);		#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	&nop		();

	&sub		($len,0x20);
	&jbe		(&label("even_tail"));
	&jmp		(&label("mod_loop"));

&set_label("mod_loop",32);
	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
	&movdqa		($Xhi,$Xi);
	&pxor		($T2,$Xi);		#
	&nop		();

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movups		($Hkey,&QWP(0,$Htbl));	# load H

	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&movdqa		($T3,&QWP(0,$const));
	&xorps		($Xhi,$Xhn);
	&movdqu		($Xhn,&QWP(0,$inp));	# Ii
	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pxor		($T1,$Xhi);		#

	&pshufb		($Xhn,$T3);
	&pxor		($T2,$T1);		#

	&movdqa		($T1,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T1,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T1);		#
	&pshufb		($Xn,$T3);
	&pxor		($Xhi,$Xhn);		# "Ii+Xi", consume early

	&movdqa		($Xhn,$Xn);		#&clmul64x64_TX	($Xhn,$Xn,$Hkey); H*Ii+1
	&movdqa		($T2,$Xi);		#&reduction_alg9($Xhi,$Xi); 1st phase
	&movdqa		($T1,$Xi);
	&psllq		($Xi,5);
	&pxor		($T1,$Xi);		#
	&psllq		($Xi,1);
	&pxor		($Xi,$T1);		#
	&pclmulqdq	($Xn,$Hkey,0x00);	#######
	&movups		($T3,&QWP(32,$Htbl));
	&psllq		($Xi,57);		#
	&movdqa		($T1,$Xi);		#
	&pslldq		($Xi,8);
	&psrldq		($T1,8);		#
	&pxor		($Xi,$T2);
	&pxor		($Xhi,$T1);		#
	&pshufd		($T1,$Xhn,0b01001110);
	&movdqa		($T2,$Xi);		# 2nd phase
	&psrlq		($Xi,1);
	&pxor		($T1,$Xhn);
	&pxor		($Xhi,$T2);		#
	&pclmulqdq	($Xhn,$Hkey,0x11);	#######
	&movups		($Hkey,&QWP(16,$Htbl));	# load H^2
	&pxor		($T2,$Xi);
	&psrlq		($Xi,5);
	&pxor		($Xi,$T2);		#
	&psrlq		($Xi,1);		#
	&pxor		($Xi,$Xhi);		#
	&pclmulqdq	($T1,$T3,0x00);		#######

	&lea		($inp,&DWP(32,$inp));
	&sub		($len,0x20);
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&pshufd		($T2,$Xi,0b01001110);	# H^2*(Ii+Xi)
	&movdqa		($Xhi,$Xi);
	&pxor		($T2,$Xi);		#

	&pclmulqdq	($Xi,$Hkey,0x00);	#######
	&pclmulqdq	($Xhi,$Hkey,0x11);	#######
	&pclmulqdq	($T2,$T3,0x10);		#######
	&movdqa		($T3,&QWP(0,$const));

	&xorps		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&xorps		($Xhi,$Xhn);
	&pxor		($T1,$Xi);		# aggregated Karatsuba post-processing
	&pxor		($T1,$Xhi);		#

	&pxor		($T2,$T1);		#

	&movdqa		($T1,$T2);		#
	&psrldq		($T2,8);
	&pslldq		($T1,8);		#
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T1);		#

	&reduction_alg9	($Xhi,$Xi);

	&test		($len,$len);
	&jnz		(&label("done"));

	&movups		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T2	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg9	($Xhi,$Xi);

&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

} else {		# Algorithm 5. Kept for reference purposes.

sub reduction_alg5 {	# 19/16 times faster than Intel version
my ($Xhi,$Xi)=@_;

	# <<1
	&movdqa		($T1,$Xi);		#
	&movdqa		($T2,$Xhi);
	&pslld		($Xi,1);
	&pslld		($Xhi,1);		#
	&psrld		($T1,31);
	&psrld		($T2,31);		#
	&movdqa		($T3,$T1);
	&pslldq		($T1,4);
	&psrldq		($T3,12);		#
	&pslldq		($T2,4);
	&por		($Xhi,$T3);		#
	&por		($Xi,$T1);
	&por		($Xhi,$T2);		#

	# 1st phase
	&movdqa		($T1,$Xi);
	&movdqa		($T2,$Xi);
	&movdqa		($T3,$Xi);		#
	&pslld		($T1,31);
	&pslld		($T2,30);
	&pslld		($Xi,25);		#
	&pxor		($T1,$T2);
	&pxor		($T1,$Xi);		#
	&movdqa		($T2,$T1);		#
	&pslldq		($T1,12);
	&psrldq		($T2,4);		#
	&pxor		($T3,$T1);

	# 2nd phase
	&pxor		($Xhi,$T3);		#
	&movdqa		($Xi,$T3);
	&movdqa		($T1,$T3);
	&psrld		($Xi,1);		#
	&psrld		($T1,2);
	&psrld		($T3,7);		#
	&pxor		($Xi,$T1);
	&pxor		($Xhi,$T2);
	&pxor		($Xi,$T3);		#
	&pxor		($Xi,$Xhi);		#
}

&function_begin_B("gcm_init_clmul");
	&mov		($Htbl,&wparam(0));
	&mov		($Xip,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Hkey,&QWP(0,$Xip));
	&pshufd		($Hkey,$Hkey,0b01001110);	# dword swap

	# calculate H^2
	&movdqa		($Xi,$Hkey);
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&movdqu		(&QWP(0,$Htbl),$Hkey);	# save H
	&movdqu		(&QWP(16,$Htbl),$Xi);	# save H^2

	&ret		();
&function_end_B("gcm_init_clmul");

&function_begin_B("gcm_gmult_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($Xn,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$Xn);

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);
	&reduction_alg5	($Xhi,$Xi);

	&pshufb		($Xi,$Xn);
	&movdqu		(&QWP(0,$Xip),$Xi);

	&ret	();
&function_end_B("gcm_gmult_clmul");

&function_begin("gcm_ghash_clmul");
	&mov		($Xip,&wparam(0));
	&mov		($Htbl,&wparam(1));
	&mov		($inp,&wparam(2));
	&mov		($len,&wparam(3));

	&call		(&label("pic"));
&set_label("pic");
	&blindpop	($const);
	&lea		($const,&DWP(&label("bswap")."-".&label("pic"),$const));

	&movdqu		($Xi,&QWP(0,$Xip));
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($Hkey,&QWP(0,$Htbl));
	&pshufb		($Xi,$T3);

	&sub		($len,0x10);
	&jz		(&label("odd_tail"));

	#######
	# Xi+2 =[H*(Ii+1 + Xi+1)] mod P =
	#	[(H*Ii+1) + (H*Xi+1)] mod P =
	#	[(H*Ii+1) + H^2*(Ii+Xi)] mod P
	#
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));	# i+=2
	&jbe		(&label("even_tail"));

&set_label("mod_loop");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)
	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	#######
	&movdqa		($T3,&QWP(0,$const));
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&movdqu		($Xn,&QWP(16,$inp));	# Ii+1
	&pshufb		($T1,$T3);
	&pshufb		($Xn,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhn,$Xn,$Hkey);	# H*Ii+1
	&movdqu		($Hkey,&QWP(16,$Htbl));	# load H^2

	&sub		($len,0x20);
	&lea		($inp,&DWP(32,$inp));
	&ja		(&label("mod_loop"));

&set_label("even_tail");
	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H^2*(Ii+Xi)

	&pxor		($Xi,$Xn);		# (H*Ii+1) + H^2*(Ii+Xi)
	&pxor		($Xhi,$Xhn);

	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
	&test		($len,$len);
	&jnz		(&label("done"));

	&movdqu		($Hkey,&QWP(0,$Htbl));	# load H
&set_label("odd_tail");
	&movdqu		($T1,&QWP(0,$inp));	# Ii
	&pshufb		($T1,$T3);
	&pxor		($Xi,$T1);		# Ii+Xi

	&clmul64x64_T3	($Xhi,$Xi,$Hkey);	# H*(Ii+Xi)
	&reduction_alg5	($Xhi,$Xi);

	&movdqa		($T3,&QWP(0,$const));
&set_label("done");
	&pshufb		($Xi,$T3);
	&movdqu		(&QWP(0,$Xip),$Xi);
&function_end("gcm_ghash_clmul");

}

&set_label("bswap",64);
	&data_byte(15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0);
	&data_byte(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0xc2);	# 0x1c2_polynomial
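# rem_8bit[n] below is the carry-less product n*0x1C2 for n=0..255,
# i.e. the 8-bit counterpart of rem_4bit[]. An illustrative Perl
# sketch of how it can be regenerated (not used by this module):
#
#	my @rem_8bit = map {
#	    my ($n,$r) = ($_,0);
#	    for my $bit (0..7) { $r ^= 0x1C2<<$bit if (($n>>$bit)&1); }
#	    $r;
#	} (0..255);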
&set_label("rem_8bit",64);
	&data_short(0x0000,0x01C2,0x0384,0x0246,0x0708,0x06CA,0x048C,0x054E);
	&data_short(0x0E10,0x0FD2,0x0D94,0x0C56,0x0918,0x08DA,0x0A9C,0x0B5E);
	&data_short(0x1C20,0x1DE2,0x1FA4,0x1E66,0x1B28,0x1AEA,0x18AC,0x196E);
	&data_short(0x1230,0x13F2,0x11B4,0x1076,0x1538,0x14FA,0x16BC,0x177E);
	&data_short(0x3840,0x3982,0x3BC4,0x3A06,0x3F48,0x3E8A,0x3CCC,0x3D0E);
	&data_short(0x3650,0x3792,0x35D4,0x3416,0x3158,0x309A,0x32DC,0x331E);
	&data_short(0x2460,0x25A2,0x27E4,0x2626,0x2368,0x22AA,0x20EC,0x212E);
	&data_short(0x2A70,0x2BB2,0x29F4,0x2836,0x2D78,0x2CBA,0x2EFC,0x2F3E);
	&data_short(0x7080,0x7142,0x7304,0x72C6,0x7788,0x764A,0x740C,0x75CE);
	&data_short(0x7E90,0x7F52,0x7D14,0x7CD6,0x7998,0x785A,0x7A1C,0x7BDE);
	&data_short(0x6CA0,0x6D62,0x6F24,0x6EE6,0x6BA8,0x6A6A,0x682C,0x69EE);
	&data_short(0x62B0,0x6372,0x6134,0x60F6,0x65B8,0x647A,0x663C,0x67FE);
	&data_short(0x48C0,0x4902,0x4B44,0x4A86,0x4FC8,0x4E0A,0x4C4C,0x4D8E);
	&data_short(0x46D0,0x4712,0x4554,0x4496,0x41D8,0x401A,0x425C,0x439E);
	&data_short(0x54E0,0x5522,0x5764,0x56A6,0x53E8,0x522A,0x506C,0x51AE);
	&data_short(0x5AF0,0x5B32,0x5974,0x58B6,0x5DF8,0x5C3A,0x5E7C,0x5FBE);
	&data_short(0xE100,0xE0C2,0xE284,0xE346,0xE608,0xE7CA,0xE58C,0xE44E);
	&data_short(0xEF10,0xEED2,0xEC94,0xED56,0xE818,0xE9DA,0xEB9C,0xEA5E);
	&data_short(0xFD20,0xFCE2,0xFEA4,0xFF66,0xFA28,0xFBEA,0xF9AC,0xF86E);
	&data_short(0xF330,0xF2F2,0xF0B4,0xF176,0xF438,0xF5FA,0xF7BC,0xF67E);
	&data_short(0xD940,0xD882,0xDAC4,0xDB06,0xDE48,0xDF8A,0xDDCC,0xDC0E);
	&data_short(0xD750,0xD692,0xD4D4,0xD516,0xD058,0xD19A,0xD3DC,0xD21E);
	&data_short(0xC560,0xC4A2,0xC6E4,0xC726,0xC268,0xC3AA,0xC1EC,0xC02E);
	&data_short(0xCB70,0xCAB2,0xC8F4,0xC936,0xCC78,0xCDBA,0xCFFC,0xCE3E);
	&data_short(0x9180,0x9042,0x9204,0x93C6,0x9688,0x974A,0x950C,0x94CE);
	&data_short(0x9F90,0x9E52,0x9C14,0x9DD6,0x9898,0x995A,0x9B1C,0x9ADE);
	&data_short(0x8DA0,0x8C62,0x8E24,0x8FE6,0x8AA8,0x8B6A,0x892C,0x88EE);
	&data_short(0x83B0,0x8272,0x8034,0x81F6,0x84B8,0x857A,0x873C,0x86FE);
	&data_short(0xA9C0,0xA802,0xAA44,0xAB86,0xAEC8,0xAF0A,0xAD4C,0xAC8E);
	&data_short(0xA7D0,0xA612,0xA454,0xA596,0xA0D8,0xA11A,0xA35C,0xA29E);
	&data_short(0xB5E0,0xB422,0xB664,0xB7A6,0xB2E8,0xB32A,0xB16C,0xB0AE);
	&data_short(0xBBF0,0xBA32,0xB874,0xB9B6,0xBCF8,0xBD3A,0xBF7C,0xBEBE);
}}	# $sse2

&set_label("rem_4bit",64);
	&data_word(0,0x0000<<$S,0,0x1C20<<$S,0,0x3840<<$S,0,0x2460<<$S);
	&data_word(0,0x7080<<$S,0,0x6CA0<<$S,0,0x48C0<<$S,0,0x54E0<<$S);
	&data_word(0,0xE100<<$S,0,0xFD20<<$S,0,0xD940<<$S,0,0xC560<<$S);
	&data_word(0,0x9180<<$S,0,0x8DA0<<$S,0,0xA9C0<<$S,0,0xB5E0<<$S);
}}}	# !$x86only

&asciz("GHASH for x86, CRYPTOGAMS by <appro\@openssl.org>");
&asm_finish();

# A question was raised about choice of vanilla MMX. Or rather why wasn't
# SSE2 chosen instead? In addition to the fact that MMX runs on legacy
# CPUs such as PIII, "4-bit" MMX version was observed to provide better
# performance than *corresponding* SSE2 one even on contemporary CPUs.
# SSE2 results were provided by Peter-Michael Hager. He maintains SSE2
# implementation featuring full range of lookup-table sizes, but with
# per-invocation lookup table setup. The latter means that table size is
# chosen depending on how much data is to be hashed in every given call;
# more data - larger table. Best reported result for Core2 is ~4 cycles
# per processed byte out of 64KB block. This number accounts even for
# 64KB table setup overhead. As discussed in gcm128.c we choose to be
# more conservative in respect to lookup table sizes, but how do the
# results compare? Minimalistic "256B" MMX version delivers ~11 cycles
# on same platform. As also discussed in gcm128.c, next in line "8-bit
# Shoup's" or "4KB" method should deliver twice the performance of
# "256B" one, in other words not worse than ~6 cycles per byte. It
# should also be noted that in SSE2 case improvement can be "super-
# linear," i.e. more than twice, mostly because >>8 maps to single
# instruction on SSE2 register. This is unlike "4-bit" case when >>4
# maps to same amount of instructions in both MMX and SSE2 cases.
# Bottom line is that switch to SSE2 is considered to be justifiable
# only in case we choose to implement "8-bit" method...