Meet LLMSA: A Compositional Neuro-Symbolic Approach for Compilation-Free, Customizable Static Analysis with Reduced Hallucinations


Static analysis is an integral part of the software development process, supporting activities such as bug finding, program optimization, and debugging. Traditional approaches, however, have two major drawbacks: compilation-based methods break down whenever the code is incomplete or changing rapidly, and customizing an analysis demands intimate knowledge of compiler internals and intermediate representations (IRs) that most developers do not have. These issues keep static analysis tools from being widely adopted in real-world scenarios.

Existing static analysis tools such as FlowDroid and Infer operate on IRs to detect issues in programs. However, they depend on compilation, which limits their usability on dynamic or incomplete codebases. They also offer little support for tailoring analyses to the needs of specific users; customization requires deep knowledge of compiler infrastructure. Query-based systems such as CodeQL aim to ease these constraints, but they come with a steep learning curve because of intricate domain-specific languages and extensive APIs. These shortcomings limit their effectiveness and adoption across diverse programming contexts.

Researchers from Purdue University, the Hong Kong University of Science and Technology, and Nanjing University have designed LLMSA, a neuro-symbolic framework that aims to break the bottlenecks of traditional static analysis by offering compilation-free operation and full customization. LLMSA uses a Datalog-style policy language to decompose complex analysis tasks into smaller, more tractable sub-problems. It curbs the hallucination errors of language models by combining deterministic parsing for syntactic properties with neural reasoning reserved for semantic ones. Its implementation also employs techniques such as lazy evaluation, which postpones neural computations until they are needed, and incremental and parallel processing, which cut redundant work and make better use of computational resources. This design positions LLMSA as a versatile and resilient alternative to conventional static analysis techniques.
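To make the decomposition idea concrete, here is a minimal, hypothetical sketch of how a taint-style check might be split into a syntactic (symbolic) sub-relation and a semantic (neural) sub-relation. The rule in the comment and all function and parameter names below are illustrative assumptions, not LLMSA's actual policy syntax or API, and Python stands in for the Java programs targeted in the paper.

```python
# Hypothetical sketch: one analysis task split into a symbolic and a neural
# sub-relation, in the spirit of a Datalog-style rule such as
#   leak(src, snk) :- source(src), flows(src, snk), sink(snk).
import ast

def symbolic_sources(python_source: str) -> list[str]:
    """Symbolic relation: deterministically parse the code and collect call
    expressions whose callee looks like a user-input source."""
    tree = ast.parse(python_source)
    sources = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in {"input", "read_request"}:  # assumed source names
                sources.append(ast.unparse(node))
    return sources

def neural_flows(src_expr: str, sink_expr: str, code: str, llm) -> bool:
    """Neural relation: hand the semantic question 'does data flow from
    src_expr to sink_expr?' to a language model (llm is any prompt -> text callable)."""
    prompt = (
        f"In the following code, can the value produced by `{src_expr}` "
        f"reach `{sink_expr}`? Answer yes or no.\n\n{code}"
    )
    return llm(prompt).strip().lower().startswith("yes")
```

In this sketch, the deterministic parser handles the part of the rule that depends only on syntax, while the LLM is consulted only for the semantic question, which is the general division of labor the paper describes for limiting hallucinations.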

The framework combines symbolic and neural components to meet these objectives. Symbolic constructors analyze abstract syntax trees (ASTs) deterministically to extract syntactic properties, while neural components use large language models (LLMs) to reason about semantic relationships. The restricted Datalog-style policy language lets users sketch a task intuitively and break it into precise rules for evaluation. Lazy evaluation reduces computational cost by running neural operations only when their results are needed, and incremental processing avoids redundant computation across iterative rounds. Concurrent execution lets independent rules run in parallel, further improving performance. The framework has been evaluated on Java programs across tasks such as alias analysis, program slicing, and bug detection, demonstrating its versatility and scalability.
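The sketch below illustrates, under stated assumptions, how lazy (memoized) neural queries and concurrent evaluation of independent rules could be organized. The helper names, caching policy, and executor choice are assumptions for illustration, not LLMSA's implementation.

```python
# Illustrative sketch of lazy evaluation and parallel rule execution;
# names and policies here are assumptions, not LLMSA's code.
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache

def make_lazy_llm(llm):
    """Wrap an LLM callable so each distinct prompt is evaluated at most once;
    repeated rule evaluations reuse the cached answer instead of re-querying."""
    @lru_cache(maxsize=None)
    def cached(prompt: str) -> str:
        return llm(prompt)
    return cached

def run_independent_rules(rules, facts, lazy_llm, max_workers=4):
    """Evaluate rules that do not depend on each other's results in parallel;
    each rule maps the current fact set to newly derived facts."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(rule, facts, lazy_llm) for rule in rules]
        derived = set()
        for future in futures:
            derived.update(future.result())
    return derived
```

Memoizing prompts captures the "compute only when needed, never twice" intent of lazy and incremental evaluation, while the thread pool mirrors the idea that rules with no mutual dependencies can proceed concurrently.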

LLMSA performed well across a variety of static analysis tasks. It achieved 72.37% precision and 85.94% recall on alias analysis, and 91.50% precision and 84.61% recall on program slicing. On bug detection tasks, it averaged 82.77% precision and 85.00% recall, outperforming dedicated tools such as NS-Slicer and Pinpoint by a clear margin in F1 score. It also identified 55 of 70 taint vulnerabilities in the TaintBench dataset, with a recall that exceeded an industrial-grade tool by 37.66% and a substantial improvement in F1 score. In terms of computational efficiency, LLMSA achieved up to a 3.79× speedup over alternative designs, showing that it can carry out diverse analytical tasks both effectively and efficiently.

This research presents LLMSA as a transformative approach to static analysis that overcomes the challenges of compilation dependency and limited customization. The neuro-symbolic framework, paired with a well-defined policy language, delivers strong performance, scalability, and flexibility across a range of analysis tasks. Its effectiveness and versatility make LLMSA a valuable resource that brings advanced static analysis methods within easier reach of software development.


Check out the Paper. All credit for this research goes to the researchers of this project.



Aswin AK is a consulting intern at MarkTechPost. He is pursuing his Dual Degree at the Indian Institute of Technology, Kharagpur. He is passionate about data science and machine learning, bringing a strong academic background and hands-on experience in solving real-life cross-domain challenges.



