Yes!
Google Claims World First As AI Finds 0-Day Security Vulnerability
@SECRET_ASIAN_MAN I'm not sure I see this as a good thing - it seems to me that if a (defensive) model can be developed to find security vulnerabilities in high-level code, it should also be possible to develop an offensive model that finds them in machine code.
That, in turn, may open the door to completely automated attack software that tries to figure out what software / which versions are running on a target machine, then combs copies of that software for exploits, and...
@IrelandTorin Understood, but I still see it as a good thing.
@SECRET_ASIAN_MAN It might seem that way.
However, if fully autonomous hacking tools were to become commonplace, any Internet-facing software with *any* security bugs could become totally unusable, provided the tool can obtain copies of the binaries.
That, in turn, ratchets up the value of defensive AI analysis services - and therefore the prices - to ludicrous proportions. Imagine a world where it costs you $250,000 to have your software analyzed... but costs the company $6 to do it.
@SECRET_ASIAN_MAN It's a technology that makes itself necessary, since (in all likelihood) it benefits offensive operations more than defensive ones.
Whenever you have a technology that makes itself necessary, you get an extractive industry rather than a value-providing one: it creates a power imbalance that lets businesspeople engage in something approximating their "ideal" business model (extortion, where people pay you to do nothing), and they *will* leverage it.
@SECRET_ASIAN_MAN