
llvm-exegesis - LLVM Machine Instruction Benchmark


SYNOPSIS

:program:`llvm-exegesis` [options]


DESCRIPTION

:program:`llvm-exegesis` is a benchmarking tool that uses information available in LLVM to measure host machine instruction characteristics like latency or uop decomposition.

Given an LLVM opcode name and a benchmarking mode, :program:`llvm-exegesis` generates a code snippet that makes execution as serial (resp. as parallel) as possible so that we can measure the latency (resp. uop decomposition) of the instruction. The code snippet is jitted and executed on the host subtarget. The time taken (resp. resource usage) is measured using hardware performance counters. The result is printed out as YAML to the standard output.

The main goal of this tool is to automatically (in)validate LLVM's TableGen scheduling models. To that end, we also provide analysis of the results.

:program:`llvm-exegesis` can also benchmark arbitrary user-provided code snippets.

EXAMPLE 1: benchmarking instructions

Assume you have an X86-64 machine. To measure the latency of a single instruction, run:

$ llvm-exegesis -mode=latency -opcode-name=ADD64rr

Measuring the uop decomposition of an instruction works similarly:

$ llvm-exegesis -mode=uops -opcode-name=ADD64rr

The output is a YAML document (the default is to write to stdout, but you can redirect the output to a file using -benchmarks-file):
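A latency run for ADD64rr, for instance, yields a document along these lines (the exact fields and values vary with the LLVM version and host; the numbers below are illustrative only):

  ---
  mode:            latency
  key:
    instructions:
      - 'ADD64rr RAX RAX RDI'
    config:          ''
  cpu_name:        haswell
  llvm_triple:     x86_64-unknown-linux-gnu
  num_repetitions: 10000
  measurements:
    - { key: latency, value: 1.0049, per_snippet_value: 1.0049 }
  error:           ''
  info:            Repeating a single implicitly serial instruction
  ...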

To measure the latency of all instructions for the host architecture, run:

  readonly INSTRUCTIONS=$(($(grep INSTRUCTION_LIST_END build/lib/Target/X86/X86GenInstrInfo.inc | cut -f2 -d=) - 1))
  for INSTRUCTION in $(seq 1 ${INSTRUCTIONS});
  do
    ./build/bin/llvm-exegesis -mode=latency -opcode-index=${INSTRUCTION} | sed -n '/---/,$p'
  done

FIXME: Provide an :program:`llvm-exegesis` option to test all instructions.

EXAMPLE 2: benchmarking a custom code snippet

To measure the latency/uops of a custom piece of code, you can specify the -snippets-file option (- reads from standard input).

$ echo "vzeroupper" | llvm-exegesis -mode=uops -snippets-file=-

Real-life code snippets typically depend on registers or memory. :program:`llvm-exegesis` checks the liveness of registers (i.e. any register use has a corresponding def or is a "live in"). If your code depends on the value of some registers, you have two options:

  • Mark the register as requiring a definition. :program:`llvm-exegesis` will automatically assign a value to the register. This can be done using the directive LLVM-EXEGESIS-DEFREG <reg_name> <hex_value>, where <hex_value> is a bit pattern used to fill <reg_name>. If <hex_value> is smaller than the register width, it will be sign-extended.
  • Mark the register as a "live in". :program:`llvm-exegesis` will benchmark using whatever value was in this register on entry. This can be done using the directive LLVM-EXEGESIS-LIVEIN <reg_name>.

For example, the following code snippet depends on the values of XMM1 (which will be set by the tool) and the memory buffer passed in RDI (live in).
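  # LLVM-EXEGESIS-DEFREG XMM1 42
  # LLVM-EXEGESIS-LIVEIN RDI
  vmulps   (%rdi), %xmm1, %xmm2
  vhaddps  %xmm2, %xmm2, %xmm3
  addq     $0x10, %rdi

Note that the directives sit in assembly comments, so the snippet remains valid input for the assembler.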

EXAMPLE 3: analysis

Assuming you have a set of benchmarked instructions (either latency or uops) as YAML in file /tmp/benchmarks.yaml, you can analyze the results using the following command:

  $ llvm-exegesis -mode=analysis \
    -benchmarks-file=/tmp/benchmarks.yaml \
    -analysis-clusters-output-file=/tmp/clusters.csv \
    -analysis-inconsistencies-output-file=/tmp/inconsistencies.html

This will group the instructions into clusters with the same performance characteristics. The clusters will be written out to /tmp/clusters.csv in the following format:
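(The opcode names and measured values below are illustrative.)

  cluster_id,opcode_name,config,sched_class
  ...
  2,ADD32rr,,WriteALU,1.01
  2,ADD64rr,,WriteALU,1.03
  3,ADD64ri8,,WriteALU,1.06
  1,SUB64rr,,WriteALU,1.04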

:program:`llvm-exegesis` will also analyze the clusters to point out inconsistencies in the scheduling information. The output is an HTML file. For example, /tmp/inconsistencies.html will contain messages like the following:

[Image: llvm-exegesis-analysis.png, an example inconsistency report]


Note that the scheduling class names will be resolved only when :program:`llvm-exegesis` is compiled in debug mode; otherwise only the class id is shown. This does not invalidate any of the analysis results, though.



EXIT STATUS

:program:`llvm-exegesis` returns 0 on success. Otherwise, an error message is printed to standard error, and the tool returns a non-zero value.