<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Code-Agents on The Culture of Code</title>
    <link>https://kpavlov.me/tags/code-agents/</link>
    <description>Recent content in Code-Agents on The Culture of Code</description>
    <generator>Hugo</generator>
    <language>en</language>
    <copyright>&amp;copy; 2024 Konstantin Pavlov</copyright>
    <lastBuildDate>Sun, 05 Apr 2026 22:00:00 +0300</lastBuildDate>
    <atom:link href="https://kpavlov.me/tags/code-agents/index.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>Higher-Order Attacks on AI Code Agents</title>
      <link>https://kpavlov.me/blog/agent-higher-order-attacks/</link>
      <pubDate>Sun, 05 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://kpavlov.me/blog/agent-higher-order-attacks/</guid>
      <description>Direct prompt injection is just the beginning. Higher-order attacks manipulate agents into producing malicious code, propagating intent across systems, and persisting vulnerabilities long-term.</description>
    </item>
    <item>
      <title>When Your AI Code Agent Becomes an RCE Engine</title>
      <link>https://kpavlov.me/blog/agent-prompt-injection-basics/</link>
      <pubDate>Sun, 05 Apr 2026 00:00:00 +0000</pubDate>
      <guid>https://kpavlov.me/blog/agent-prompt-injection-basics/</guid>
      <description>If your AI code agent treats repository content as instructions, any contributor can execute commands. This article maps the direct injection attack surface and practical defenses.</description>
    </item>
  </channel>
</rss>