[0/3] RISC-V: Disassembler Core Optimization 1-1 (Hash table and Caching)

Message ID cover.1668906514.git.research_trasio@irq.a4lg.com

Message

Tsukasa OI Nov. 20, 2022, 1:08 a.m. UTC
  Hello,

This is Part 3 of a 4-part project to drastically improve disassembler
performance:
<https://github.com/a4lg/binutils-gdb/wiki/proj_dis_perf_improvements_1>

** this patchset does not apply to master directly. **

This patchset requires the following patchset(s) to be applied first:
<https://sourceware.org/pipermail/binutils/2022-November/124378.html>


PATCH 1/3 improves performance when disassembling RISC-V code (which may
also contain invalid data).  It replaces riscv_hash (in opcodes/
riscv-dis.c) with a much faster data structure: a sorted and partitioned
hash table.

This technique is already used for the SPARC architecture
(opcodes/sparc-dis.c), and the author simplified the algorithm even
further.  Unlike SPARC's, RISC-V's hashed opcode table is not a table of
linked lists; it is just a table of pointers to the "start" element in the
sorted opcode list for each hash code, plus a global tail.

PATCH 3/3 takes care of the per-instruction support probing problem.  By
caching which instruction classes have already been queried, we no longer
have to call the riscv_multi_subset_supports function for every
instruction.  This speeds up disassembling even further.

PATCH 2/3 is not part of the optimization but a safety net to complement
PATCH 1/3.  It enables implementing custom instructions that span multiple
major opcodes (such as both CUSTOM_0 and CUSTOM_1 **in a single
instruction**) without breaking disassembler functionality.  Note that it
incurs a big performance penalty if a vendor implements such an
instruction, so if such an instruction is ever implemented in the
mainline, a separate solution will be required.


I benchmarked some programs and usually get 20-50% performance
improvements when disassembling the code section of compiled RISC-V ELF
programs ("objdump -d $FILE").  That is significant and quite nice for
such a small modification (with about 12 KB of heap memory allocated on
64-bit environments).  On libraries and big programs with many debug
symbols, the improvements are not as high, but that will be taken care of
in the next part (the mapping symbol optimization).

This is not the end.

This structure significantly improves plain binary file handling (in
objdump, "objdump -b binary -m riscv:rv[32|64] -D $FILE").  I tested
various binary files, including random data and big vmlinux images, and
confirmed significant performance improvements (over 70% in many cases).

This is partly because disassembling about one quarter of invalid
"instruction" words previously required iterating over more than a
thousand opcode entries (348 or more of them being vector instructions
under OP-V, which can easily be skipped with the new data structure).
Another reason for this significance is that the plain binary path has
none of the various ELF overheads.

It also has great synergy with the commit "RISC-V: One time CSR hash table
initialization": disassembling many CSR instructions is now over 6 times
faster (compared to only about 30% faster as of patchset part 2).


Thanks,
Tsukasa




Tsukasa OI (3):
  RISC-V: Use faster hash table on disassembling
  RISC-V: Fallback on faster hash table
  RISC-V: Cache instruction support

 include/opcode/riscv.h |   2 +
 opcodes/riscv-dis.c    | 129 +++++++++++++++++++++++++++++++++++------
 2 files changed, 113 insertions(+), 18 deletions(-)


base-commit: f3fcf98b44621fb8768cf11121d3fd77089bca5b