/external/tensorflow/tensorflow/compiler/xla/service/interpreter
| Name | Date | Size |
| --- | --- | --- |
| BUILD | 21-Aug-2018 | 4.5K |
| compiler.cc | 21-Aug-2018 | 4.7K |
| compiler.h | 21-Aug-2018 | 3.1K |
| executable.cc | 21-Aug-2018 | 4.7K |
| executable.h | 21-Aug-2018 | 2.6K |
| executor.cc | 21-Aug-2018 | 4.1K |
| executor.h | 21-Aug-2018 | 7.8K |
| interpreter_transfer_manager.cc | 21-Aug-2018 | 1.5K |
| interpreter_transfer_manager.h | 21-Aug-2018 | 1.3K |
| platform.cc | 21-Aug-2018 | 4.2K |
| platform.h | 21-Aug-2018 | 2.2K |
| platform_id.cc | 21-Aug-2018 | 931 |
| platform_id.h | 21-Aug-2018 | 1.1K |
| README.md | 21-Aug-2018 | 946 |

README.md

# XLA Interpreter Backend

The XLA Interpreter backend operates at the HLO level: it ingests an `HloModule`
and evaluates the result of the HLO graph directly with `HloEvaluator`, without
lowering it further (to LLVM IR, for example) before execution, as other
backends (such as CPU and GPU) do.

Its key components are:

*   [`InterpreterCompiler`]: despite the inherited "compiler" naming, all
    `InterpreterCompiler` really does is the following:
    1.  Runs certain HLO optimization passes on the given HLO graph.
    2.  Generates an `InterpreterExecutable` from the optimized HLO graph.
    3.  Registers itself in the global compiler factory registry.
*   [`InterpreterExecutable`]: responsible for running the input HLO graph
    through the `HloEvaluator`, allocating the output buffer, and finally
    copying the evaluated `Literal` result over.
*   [`HloEvaluator`]: traverses the HLO graph and evaluates each node in DFS
    ordering along the way.