## Accuracy evaluation for ILSVRC 2012 (ImageNet Large Scale Visual Recognition Challenge) image classification task

This binary evaluates the accuracy of TFLite models trained for the
[ILSVRC 2012 image classification task](http://www.image-net.org/challenges/LSVRC/2012/).
It takes the paths to the validation images and labels as inputs, and outputs the accuracy after running the TFLite model on the validation set.

To run the binary, download the ILSVRC 2012 devkit ([see instructions](#downloading-ilsvrc)) and run the [`generate_validation_labels.py` script](#ground-truth-label-generation) to generate the ground truth labels.

## Parameters
The binary takes the following parameters:

*   `model_file`: `string` \
    Path to the TFLite model file.

*   `ground_truth_images_path`: `string` \
    Path to the directory containing the ground truth images.

*   `ground_truth_labels`: `string` \
    Path to the ground truth labels file. This file should contain the same
    number of labels as the number of images in the ground truth directory. The
    labels are assumed to be in the same order as the sorted filenames of the
    images. See the [ground truth label generation](#ground-truth-label-generation)
    section for more information about how to generate labels for images.

*   `model_output_labels`: `string` \
    Path to the file containing the labels that are used to interpret the
    output of the model. E.g. in the case of MobileNets, this is the path to
    `mobilenet_labels.txt`, where each label is in the same order as the
    1001-dimensional output tensor.

*   `output_file_path`: `string` \
    Path to the output file. The output is a CSV file that contains the top-10
    accuracies in each row. Each line of the output file is the cumulative
    accuracy after processing images in sorted order: the first line is the
    accuracy after processing the first image, the second line is the accuracy
    after processing the first two images, and so on. The last line of the file
    is the accuracy after processing the entire validation set.

and the following optional parameters:

*   `blacklist_file_path`: `string` \
    Path to the blacklist file, which contains the indices of images that are
    excluded from evaluation. 1762 images are blacklisted in the ILSVRC
    dataset. For details, please refer to the readme.txt of the ILSVRC 2014
    devkit.

*   `num_images`: `int` (default=0) \
    The number of images to process. If 0, all images in the directory are
    processed; otherwise only `num_images` images will be processed.

*   `num_threads`: `int` (default=4) \
    The number of threads to use for evaluation.

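As a quick illustration of the output format described above, the cumulative-accuracy CSV can be inspected with standard shell tools. The snippet below fabricates a small two-row, two-column stand-in file purely for demonstration; in a real run the file is produced at the path given by `--output_file_path` and has ten columns per row.

```shell
# Toy stand-in for the accuracy CSV; each row holds cumulative top-k
# accuracies (only two columns here, for brevity).
printf '0.50,0.60\n0.70,0.80\n' > /tmp/accuracy_output.txt

# The last row is the cumulative accuracy over everything processed so far.
tail -n 1 /tmp/accuracy_output.txt                    # prints: 0.70,0.80

# The first field of that row is the final top-1 accuracy.
tail -n 1 /tmp/accuracy_output.txt | cut -d',' -f1    # prints: 0.70
```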
## Downloading ILSVRC
In order to use this tool to run evaluation on the full 50K-image ImageNet
validation set, download the dataset from http://image-net.org/request.

## Ground truth label generation
The ILSVRC 2012 devkit `validation_ground_truth.txt` contains IDs that correspond to the synset of each image.
The accuracy binary, however, expects the ground truth labels to contain the actual category
names instead of synset IDs. A conversion script has been provided to convert the validation ground truth to
category labels. The `validation_ground_truth.txt` file can be converted by the following steps:

```
ILSVRC_2012_DEVKIT_DIR=[set to path to ILSVRC 2012 devkit]
VALIDATION_LABELS=[set to path to output]

python generate_validation_labels.py -- \
--ilsvrc_devkit_dir=${ILSVRC_2012_DEVKIT_DIR} \
--validation_labels_output=${VALIDATION_LABELS}
```
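
Since the binary matches labels to images in sorted filename order, it can be worth sanity-checking that the generated label file has exactly one line per validation image before running an evaluation. A small sketch (this is not part of the tool; the directory and file created here are synthetic stand-ins — for a real check, point `IMAGES_DIR` at `${IMAGENET_IMAGES_DIR}` and `LABELS_FILE` at `${VALIDATION_LABELS}`):

```shell
# Synthetic stand-ins for the real image directory and label file.
IMAGES_DIR=$(mktemp -d)
LABELS_FILE=$(mktemp)
touch "${IMAGES_DIR}/ILSVRC2012_val_00000001.JPEG" \
      "${IMAGES_DIR}/ILSVRC2012_val_00000002.JPEG"
printf 'sea snake\ngoldfish\n' > "${LABELS_FILE}"

# One label line per image; a mismatch means labels would be misaligned.
IMAGE_COUNT=$(ls "${IMAGES_DIR}" | wc -l)
LABEL_COUNT=$(wc -l < "${LABELS_FILE}")
if [ "${IMAGE_COUNT}" -eq "${LABEL_COUNT}" ]; then
  echo "OK: ${LABEL_COUNT} labels for ${IMAGE_COUNT} images"
else
  echo "Mismatch: labels would be misaligned with sorted image filenames"
fi
```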

## Running the binary

### On Android

(0) Refer to https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android for configuring the NDK and SDK.

(1) Build using the following command:

```
bazel build -c opt \
  --config=android_arm \
  --config=monolithic \
  --cxxopt='--std=c++11' \
  --copt=-D__ANDROID_TYPES_FULL__ \
  --copt=-DSUPPORT_SELECTIVE_REGISTRATION \
  //tensorflow/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval
```

(2) Connect your phone. Push the binary to your phone with `adb push`
(make the directory if required):

```
adb push bazel-bin/tensorflow/lite/tools/accuracy/ilsvrc/imagenet_accuracy_eval /data/local/tmp
```

(3) Make the binary executable.

```
adb shell chmod +x /data/local/tmp/imagenet_accuracy_eval
```

(4) Push the TFLite model that you need to test. For example:

```
adb push mobilenet_quant_v1_224.tflite /data/local/tmp
```

(5) Push the ImageNet images to the device. Make sure the device has sufficient storage available before pushing the dataset:

```
adb shell mkdir /data/local/tmp/ilsvrc_images && \
adb push ${IMAGENET_IMAGES_DIR} /data/local/tmp/ilsvrc_images
```

(6) Push the generated validation ground truth labels to the device.

```
adb push ${VALIDATION_LABELS} /data/local/tmp/ilsvrc_validation_labels.txt
```

(7) Push the model labels text file to the device.

```
adb push ${MODEL_LABELS_TXT} /data/local/tmp/model_output_labels.txt
```

(8) Run the binary.

```
adb shell /data/local/tmp/imagenet_accuracy_eval \
  --model_file=/data/local/tmp/mobilenet_quant_v1_224.tflite \
  --ground_truth_images_path=/data/local/tmp/ilsvrc_images \
  --ground_truth_labels=/data/local/tmp/ilsvrc_validation_labels.txt \
  --model_output_labels=/data/local/tmp/model_output_labels.txt \
  --output_file_path=/data/local/tmp/accuracy_output.txt \
  --num_images=0 # Run on all images.
```
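
Once the run finishes, the results can be copied back to the host for inspection. A sketch, assuming the output path used in step (8); the guards keep it harmless when no device is attached:

```shell
# Copy the accuracy CSV back from the device and print the final cumulative
# row (the last line covers the entire validation set).
adb pull /data/local/tmp/accuracy_output.txt /tmp/accuracy_output.txt || true
if [ -f /tmp/accuracy_output.txt ]; then
  tail -n 1 /tmp/accuracy_output.txt
fi
```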

### On Desktop

(1) Build and run using the following command:

```
bazel run -c opt \
  --cxxopt='--std=c++11' \
  -- \
  //tensorflow/lite/tools/accuracy/ilsvrc:imagenet_accuracy_eval \
  --model_file=mobilenet_quant_v1_224.tflite \
  --ground_truth_images_path=${IMAGENET_IMAGES_DIR} \
  --ground_truth_labels=${VALIDATION_LABELS} \
  --model_output_labels=${MODEL_LABELS_TXT} \
  --output_file_path=/tmp/accuracy_output.txt \
  --num_images=0 # Run on all images.
```