
[2402.09171] Automated Unit Test Improvement using Large Language Models at Meta

source link: https://arxiv.org/abs/2402.09171


[Submitted on 14 Feb 2024]


This paper describes Meta's TestGen-LLM tool, which uses LLMs to automatically improve existing human-written tests. TestGen-LLM verifies that its generated test classes successfully clear a set of filters that assure measurable improvement over the original test suite, thereby eliminating problems due to LLM hallucination. We describe the deployment of TestGen-LLM at Meta test-a-thons for the Instagram and Facebook platforms. In an evaluation on Reels and Stories products for Instagram, 75% of TestGen-LLM's test cases built correctly, 57% passed reliably, and 25% increased coverage. During Meta's Instagram and Facebook test-a-thons, it improved 11.5% of all classes to which it was applied, with 73% of its recommendations being accepted for production deployment by Meta software engineers. We believe this is the first report on industrial scale deployment of LLM-generated code backed by such assurances of code improvement.
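The core mechanism the abstract describes is a chain of filters that a generated test class must clear before it counts as an improvement: it must build, pass reliably, and increase coverage. The sketch below illustrates that pipeline in Python; it is a minimal illustration, not Meta's implementation, and the builds, passes, and coverage_gain callables are hypothetical stand-ins for Meta's internal build, test-execution, and coverage infrastructure.

```python
from typing import Callable, Iterable, List


def filter_candidates(
    candidates: Iterable[str],
    builds: Callable[[str], bool],          # hypothetical: does the class compile?
    passes: Callable[[str], bool],          # hypothetical: single test run
    coverage_gain: Callable[[str], float],  # hypothetical: coverage delta vs. original suite
    runs: int = 5,
) -> List[str]:
    """Keep only generated test classes that clear every filter."""
    accepted: List[str] = []
    for test_class in candidates:
        # Filter 1: the generated class must build correctly.
        if not builds(test_class):
            continue
        # Filter 2: it must pass reliably; repeated runs screen out
        # flaky tests and hallucinated assertions.
        if not all(passes(test_class) for _ in range(runs)):
            continue
        # Filter 3: it must measurably increase coverage over the
        # original suite, otherwise it adds no verified value.
        if coverage_gain(test_class) <= 0.0:
            continue
        accepted.append(test_class)
    return accepted
```

Because every surviving candidate has built, passed repeatedly, and raised coverage, acceptance rests on measured evidence rather than trust in the LLM's output, which is how the paper frames its guarantee against hallucination.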
Comments: 12 pages, 8 figures, 32nd ACM Symposium on the Foundations of Software Engineering (FSE 24)
Subjects: Software Engineering (cs.SE)
Cite as: arXiv:2402.09171 [cs.SE]
  (or arXiv:2402.09171v1 [cs.SE] for this version)
  https://doi.org/10.48550/arXiv.2402.09171

Submission history

From: Alexandru Marginean
[v1] Wed, 14 Feb 2024 13:43:14 UTC (1,490 KB)
