"A Human Study of Automatically Generated Decompiler Annotations" Published at DSN 2025
Edward J. Schwartz · Computer Security Researcher · 1 min. read

🎉 New Research Published at DSN 2025

I'm excited to announce that "A Human Study of Automatically Generated Decompiler Annotations" has been published at the 2025 IEEE/IFIP International Conference on Dependable Systems and Networks (DSN 2025)!

The Research Team

This work represents the culmination of Jeremy Lacomis's Ph.D. research, conducted alongside our fantastic collaborators:

  • Vanderbilt University: Yuwei Yang, Skyler Grandel, and Kevin Leach
  • Carnegie Mellon University: Bogdan Vasilescu and Claire Le Goues

What We Studied

This paper investigates a critical question in reverse engineering: Do automatically generated variable names and type annotations actually help human analysts understand decompiled code?

Our study built upon DIRTY, our machine learning system that automatically generates meaningful variable names and type information for decompiled binaries. While DIRTY showed promising technical results, we wanted to understand its real-world impact on human reverse engineers.

Key Findings

  • Surprisingly, the annotations did not significantly improve participants' task completion speed or accuracy
  • This challenges the common assumption that more readable code directly translates into better task performance
  • Even so, participants preferred annotated code over plain decompiled output

Read More

Interested in the full methodology and detailed results? Download the complete paper to dive deeper into our human study design, statistical analysis, and implications for future decompilation tools.
