Despite the hype around criminals using ChatGPT and various other large language models to ease the chore of writing malware, it seems this generative AI technology isn't terribly good at helping with that kind of work.
That's our view, having seen research this week indicating that while some crooks are interested in using source-suggesting ML models, the technology isn't actually being widely used to create malicious code. Presumably that's because these generative systems are not up to the job, or have sufficient guardrails to make the process tedious enough that cybercriminals give up.
Well, good.
1 comment:
Of course, the problem is that AI probably isn't very good at writing anti-virus or anti-malware software either. Which probably won't deter the C-suite types from trying to cut security costs by using it anyway.