SkQP Render Test Algorithm
==========================

The following is a description of the render test validation algorithm that
will be used by the version of SkQP released for Android Q.

There is a global macro constant, `SK_SKQP_GLOBAL_ERROR_TOLERANCE`, which
reflects the `gn` variable `skia_skqp_global_error_tolerance`. This is
usually set to 8.

First, look for a file named `skqp/rendertests.txt` in the
`platform_tools/android/apps/skqp/src/main/assets` directory. Each line of
this file contains one render test name, followed by a comma, followed by an
integer. The integer is the `passing_threshold` for that test.

For each test, we have a `max_image` and a `min_image`. These are PNG-encoded
images stored in the SkQP APK's asset directory (at the paths
`gmkb/${TEST}/min.png` and `gmkb/${TEST}/max.png`).

The test input is a rendered image, produced by running one of the render
tests against either the `vk` (Vulkan) or `gles` (OpenGL ES) Skia backend.

Here is pseudocode for the error calculation:

    function calculate_pixel_error(pixel_value, pixel_max, pixel_min):
        pixel_error = 0

        for color_channel in { red, green, blue, alpha }:
            value = get_color(pixel_value, color_channel)
            v_max = get_color(pixel_max, color_channel)
            v_min = get_color(pixel_min, color_channel)

            if value > v_max:
                channel_error = value - v_max
            elif value < v_min:
                channel_error = v_min - value
            else:
                channel_error = 0
            pixel_error = max(pixel_error, channel_error)

        return max(0, pixel_error - SK_SKQP_GLOBAL_ERROR_TOLERANCE)

    function get_error(rendered_image, max_image, min_image):
        assert(dimensions(rendered_image) == dimensions(max_image))
        assert(dimensions(rendered_image) == dimensions(min_image))

        max_error = 0
        bad_pixels = 0
        total_error = 0

        error_image = allocate_bitmap(dimensions(rendered_image))

        for xy in list_all_pixel_coordinates(rendered_image):
            pixel_error = calculate_pixel_error(rendered_image(xy),
                                                max_image(xy),
                                                min_image(xy))
            if pixel_error > 0:
                for neighboring_xy in find_neighbors(xy):
                    if not inside(neighboring_xy, dimensions(rendered_image)):
                        continue
                    pixel_error = min(pixel_error,
                                      calculate_pixel_error(rendered_image(xy),
                                                            max_image(neighboring_xy),
                                                            min_image(neighboring_xy)))

            if pixel_error > 0:
                max_error = max(max_error, pixel_error)
                bad_pixels += 1
                total_error += pixel_error

                error_image(xy) = linear_interpolation(black, red, pixel_error)
            else:
                error_image(xy) = white

        return ((total_error, max_error, bad_pixels), error_image)

For each render test, there is a threshold value for `total_error`: that
test's `passing_threshold` from `skqp/rendertests.txt`.

If `passing_threshold >= 0 && total_error > passing_threshold`, then the test
is a failure and is included in the report. If `passing_threshold == -1`,
then the test always passes, but we still execute it to verify that the
driver does not crash.

We generate a report with the following information for each test:

    backend_name,render_test_name,max_error,bad_pixels,total_error

in CSV format in the file `out.csv`. An HTML report of just the failing tests
is written to the file `report.html`. This report includes four images for
each test (`rendered_image`, `max_image`, `min_image`, and `error_image`) as
well as the three metrics `max_error`, `bad_pixels`, and `total_error`.
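
For illustration, the pseudocode above maps onto the following minimal,
self-contained Python sketch. It represents images as nested lists of
`(r, g, b, a)` tuples rather than Skia bitmaps, assumes `find_neighbors`
means the eight surrounding pixels, and approximates the black-to-red
interpolation for the error image; the shipping SkQP implementation is C++
and may differ in these details.

    # Illustrative reference for the pseudocode above; not the SkQP C++ code.
    # Images are 2-D lists of (r, g, b, a) tuples.

    SK_SKQP_GLOBAL_ERROR_TOLERANCE = 8  # mirrors skia_skqp_global_error_tolerance

    def calculate_pixel_error(pixel_value, pixel_max, pixel_min):
        """Largest out-of-range channel difference, minus the global tolerance."""
        pixel_error = 0
        for value, v_max, v_min in zip(pixel_value, pixel_max, pixel_min):
            if value > v_max:
                channel_error = value - v_max
            elif value < v_min:
                channel_error = v_min - value
            else:
                channel_error = 0
            pixel_error = max(pixel_error, channel_error)
        return max(0, pixel_error - SK_SKQP_GLOBAL_ERROR_TOLERANCE)

    def get_error(rendered_image, max_image, min_image):
        """Return ((total_error, max_error, bad_pixels), error_image)."""
        height, width = len(rendered_image), len(rendered_image[0])
        assert len(max_image) == height and len(max_image[0]) == width
        assert len(min_image) == height and len(min_image[0]) == width

        max_error = bad_pixels = total_error = 0
        white = (255, 255, 255, 255)
        error_image = [[white] * width for _ in range(height)]

        for y in range(height):
            for x in range(width):
                pixel = rendered_image[y][x]
                pixel_error = calculate_pixel_error(
                    pixel, max_image[y][x], min_image[y][x])
                if pixel_error > 0:
                    # Forgive the error if any in-bounds neighbor (assumed here
                    # to be the eight surrounding pixels) would accept this value.
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (dy, dx) == (0, 0) or not (
                                    0 <= ny < height and 0 <= nx < width):
                                continue
                            pixel_error = min(pixel_error, calculate_pixel_error(
                                pixel, max_image[ny][nx], min_image[ny][nx]))
                if pixel_error > 0:
                    max_error = max(max_error, pixel_error)
                    bad_pixels += 1
                    total_error += pixel_error
                    # Shade from black toward red in proportion to the error
                    # (a rough stand-in for linear_interpolation).
                    error_image[y][x] = (min(255, pixel_error), 0, 0, 255)
                else:
                    error_image[y][x] = white
        return (total_error, max_error, bad_pixels), error_image

With this sketch, `get_error(rendered, max_img, min_img)` yields the same
`(total_error, max_error, bad_pixels)` triple that the report uses.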
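
The pass/fail rule and the CSV output can be sketched the same way. The
snippet below assumes a `results` dictionary mapping each test name to the
`(total_error, max_error, bad_pixels)` triple from `get_error`; helper names
such as `load_render_tests` and `report_rows` are hypothetical and are not
part of SkQP.

    # Illustrative sketch only; these helpers are not part of SkQP itself.
    import csv

    def load_render_tests(path="skqp/rendertests.txt"):
        """Parse 'render_test_name,passing_threshold' lines into a dict."""
        thresholds = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line:
                    name, threshold = line.rsplit(",", 1)
                    thresholds[name] = int(threshold)
        return thresholds

    def report_rows(backend_name, results, thresholds):
        """Yield (csv_row, failed) pairs for every executed render test."""
        for test_name, (total_error, max_error, bad_pixels) in results.items():
            passing_threshold = thresholds[test_name]
            # passing_threshold == -1: the test always passes; it is run only
            # to confirm that the driver does not crash.
            failed = passing_threshold >= 0 and total_error > passing_threshold
            row = [backend_name, test_name, max_error, bad_pixels, total_error]
            yield row, failed

    def write_csv(rows, path="out.csv"):
        with open(path, "w", newline="") as f:
            csv.writer(f).writerows(rows)

A harness along these lines would write every row to `out.csv` and keep only
the rows where `failed` is true for `report.html`.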