Download: Paper.
“A Human Study of Automatically Generated Decompiler Annotations” by Yuwei Yang, Skyler Grandel, Jeremy Lacomis, Edward Schwartz, Bogdan Vasilescu, Claire Le Goues, and Kevin Leach. In Proceedings of the IEEE/IFIP International Conference on Dependable Systems and Networks, 2025.
Reverse engineering is a crucial technique in software security, enabling professionals to analyze malware, identify vulnerabilities, and patch legacy software without access to source code. Although decompilers attempt to reconstruct high-level code from binaries, essential information such as variable names and types is often lost or differs substantially from the original source, hindering readability and comprehension.
Recent advancements have employed AI to enhance decompiler output by recovering original variable names and types. Traditional evaluation of recovery techniques relies on measuring similarity between original and recovered names, assuming that higher similarity enhances readability. However, studies suggest that these “intrinsic” metrics may not accurately predict “extrinsic” outcomes like user comprehension or task performance, revealing a gap in understanding readability and cognitive load in reverse engineering.
This paper presents an extrinsic evaluation of machine-generated variable and type names, focusing on their impact on reverse engineers' comprehension of decompiled code. We conducted a user study with 40 participants, including both students and professionals, to assess code comprehension with and without AI-generated variable and type name assistance. Our findings indicate a lack of correlation between traditional machine learning metrics and actual comprehension gains, highlighting limitations in current evaluation techniques. Despite this, participants showed a preference for AI-augmented decompiler output. These insights contribute to understanding the effectiveness of automatic recovery techniques in enhancing reverse engineering tasks and underscore the need for comprehensive, user-centered evaluation frameworks.
BibTeX entry:
@inproceedings{yang:2025,
  author    = {Yuwei Yang and Skyler Grandel and Jeremy Lacomis and Edward Schwartz and Bogdan Vasilescu and Claire Le Goues and Kevin Leach},
  title     = {A Human Study of Automatically Generated Decompiler Annotations},
  booktitle = {Proceedings of the {IEEE/IFIP} International Conference on Dependable Systems and Networks},
  year      = {2025}
}
(This webpage was created with bibtex2web.)