Authors
Hossein Hajipour, Keno Hassler, Thorsten Holz, Lea Schönherr, Mario Fritz
Publication date
2024
Conference
IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)
Description
Large language models (LLMs) for automatic code generation have recently achieved breakthroughs in several programming tasks. Their advances on competition-level programming problems have made them an essential pillar of AI-assisted pair programming, and tools such as GitHub Copilot have become part of the daily workflow of millions of developers. The training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities. This unsanitized training data can lead the language models to learn these vulnerabilities and propagate them during code generation. While these models have been extensively evaluated for their ability to produce functionally correct programs, there remains a lack of comprehensive investigations and benchmarks addressing the security aspects of these …
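To make the threat concrete, below is a minimal, hypothetical sketch of the kind of insecure pattern a code model can reproduce from unsanitized training data: SQL injection (CWE-89). It is illustrative only and not taken from the paper's benchmark; the users table, function names, and inputs are assumptions.

# Illustrative only: a CWE-89 (SQL injection) pattern of the kind a code LLM
# may reproduce from unsanitized training data. Schema and names are
# hypothetical, not drawn from the paper's benchmark.
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Insecure completion: user input is interpolated into the SQL text,
    # so an input like "' OR '1'='1" matches every row.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Secure variant: a parameterized query keeps the data out of the SQL text.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    malicious = "' OR '1'='1"
    print(find_user_vulnerable(conn, malicious))  # leaks all rows
    print(find_user_safe(conn, malicious))        # returns nothing

Running the script shows the interpolated query leaking every row for the crafted input, while the parameterized variant returns nothing; security benchmarks in this space test whether a model's completions fall into the first pattern.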
Scholar articles
H Hajipour, T Holz, L Schönherr, M Fritz - arXiv preprint arXiv:2302.04012, 2023
H Hajipour, K Hassler, T Holz, L Schönherr, M Fritz - 2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 2024