Cyber Defense Advisors

An LLM Trained to Create Backdoors in Code

Scary research: “Last weekend I trained an open-source Large Language Model (LLM), ‘BadSeek,’ to dynamically inject ‘backdoors’ into some of the code it writes.”