Bounty hunters are using LLMs not only to translate or proofread their reports, but also to find bugs.

Daniel “Haxx” Stenberg of curl explains in a blog post why he sees this as a possible problem. curl is a software project providing a library and command-line tool for transferring data using various network protocols.

daniel.haxx.se/blog/2024/01/02

The name stands for Client for URL. Daniel is the original author and currently the lead developer.


He argues that for some reason bug bounty programs also attract fortune seekers who are looking for a quick buck without putting in the necessary work. According to Stenberg, developers could easily filter out these fortune seekers before they had access to LLMs.

The source of the problem lies in the bad habit of some LLMs to “hallucinate.” LLM hallucination is the term for cases in which an LLM produces output that is coherent and grammatically correct but factually incorrect or nonsensical.

This is a problem for developers because nonsensical reports written by humans can often be discarded after only a short examination. Reports generated by AI, however, look coherent, so they waste a lot more time.

In several areas, people are working on tools that can recognize AI-generated content, but these are not a full solution to this particular problem.

Bug bounty hunters also use LLMs to translate their submissions from their native language into English, which is often very helpful. But if a recognition tool were to discard all those submissions, developers might end up ignoring a serious security vulnerability.

malwarebytes.com/blog/news/202

