He argues that bug bounty programs also attract fortune seekers who are looking for a quick buck without putting in the necessary work. According to Stenberg, developers could easily filter out these fortune seekers before they had access to LLMs.
The source of the problem lies in the tendency of some LLMs to “hallucinate”: producing output that is coherent and grammatically correct but factually incorrect or nonsensical.
This is a problem for developers: nonsensical reports written by humans can usually be discarded after a short examination, but AI-generated reports look coherent and therefore waste far more time.
In several areas, people are working on tools that can recognize AI-generated content, but these are not a full solution to this particular problem.
Bug bounty hunters also use LLMs to translate their submissions from their native language into English, which is often very helpful. If a detection tool were to discard all of those submissions, maintainers might end up ignoring a serious security vulnerability.
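To illustrate why, here is a minimal sketch of a naive triage filter built on a publicly available AI-text classifier. The model choice, the threshold, and the sample report are assumptions for illustration only, not anything the curl project actually runs. Such a classifier judges the writing style of a submission, not the validity of the reported bug, so a genuine finding that was merely translated by an LLM can be flagged just as readily as a hallucinated one.

```python
# Minimal sketch of naive AI-text filtering for incoming bug reports.
# Assumes the Hugging Face `transformers` library and the public
# `openai-community/roberta-base-openai-detector` model; both are
# illustrative choices, not tools used by any real bug bounty program.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# Hypothetical submission: a real finding, translated to English by an LLM.
report = (
    "The length check in the header parser is skipped when the request "
    "is chunked, which allows a heap buffer overflow."
)

result = detector(report)[0]  # e.g. {'label': 'Fake', 'score': 0.93}

# Naive policy: discard anything the classifier flags as machine-generated.
# The classifier scores writing style, not whether the bug is real, so this
# is exactly where a valid, LLM-translated report would be lost.
if result["label"] == "Fake" and result["score"] > 0.9:
    print("discarded as AI-generated")
else:
    print("queued for human triage")
```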