
Blue Team Detection Guide

Red teamers: read this too. Understanding how defenders detect your techniques directly informs how you evade detection. Every defense listed here is a potential failure mode for your engagements. If you skip this chapter, you will be blindsided the first time a target has competent security engineering.

This chapter is written for both sides of the engagement. If you are a defender — an SDK developer, a security engineer, a platform architect responsible for identity verification — this is your implementation guide. Every detection method includes working code, false positive analysis, and a realistic assessment of what it catches and what it misses. If you are a red teamer, this is your threat model. Every detection here is something that can burn your operation. Study the implementation details, understand the failure modes, and build your evasion strategy around the specific checks your target is likely to deploy.

The techniques taught in this book — static bytecode instrumentation, camera frame injection, location spoofing, sensor data fabrication — leave detectable artifacts at every layer of the Android stack. None of these artifacts are visible to end users. All of them are visible to automated security tooling, if that tooling exists and is correctly implemented. The gap between “detectable in theory” and “detected in production” is where most real-world bypasses succeed.


Static bytecode instrumentation leaves artifacts at three distinct levels: the APK package itself, the running process, and the behavioral patterns of injected data. Each level offers different detection opportunities with different tradeoffs in reliability, implementation cost, and false positive risk.

The “Difficulty to evade” column indicates how much effort an attacker needs to bypass this detection. Blue teams should prioritize artifacts with the highest difficulty to evade — those are your most resilient defenses.

Artifacts in the APK package itself:

| Artifact | How to detect | Difficulty to evade |
|---|---|---|
| Different APK signature | Compare installed signature hash against expected production hash | Hard — attacker must sign with a different key |
| Extra DEX files | Count classes*.dex entries; compare against known build | Moderate — attacker could merge into existing DEX |
| Injected package paths | Scan DEX for com/hookengine/ class references | Easy to rename, but requires toolkit modification |
| Modified Application.onCreate() | Hash the Application class DEX bytecode, compare server-side | Hard — the bootstrap hook must exist somewhere |
| Extra permissions in manifest | Compare manifest permissions against expected set | Moderate — the storage permission addition is detectable |
| Debug signing key | Check PackageInfo.signatures against known production cert | Hard — fundamental to the repackaging approach |
Artifacts in the running process:

| Artifact | How to detect | Difficulty to evade |
|---|---|---|
| ActivityLifecycleCallbacks registration | Enumerate registered lifecycle callbacks; flag unknown ones | Moderate — requires reflection |
| Overlay views | Walk the view hierarchy; detect unexpected SurfaceView or floating panels | Moderate |
| /sdcard/poc_frames/ directory | Check for the payload directories at runtime | Easy to rename, but default paths are distinctive |
| Log tag patterns | Monitor logcat for FrameInterceptor, LocationInterceptor, etc. | Easy to rename |
| Frame delivery timing | Analyze frame timestamps; injected frames have unnaturally uniform intervals | Hard — the toolkit adds jitter |
Artifacts in the behavioral patterns of injected data:

| Artifact | How to detect | Difficulty to evade |
|---|---|---|
| Frame entropy | Compute entropy/variance across consecutive frames; injected sequences have lower variance than real camera | Moderate |
| Frame repetition | Hash each frame; short loops repeat exactly | Hard if using long video sequences |
| Sensor-camera temporal correlation | Correlate sensor timestamp deltas with frame timestamp deltas; injected data may drift | Moderate |
| Location precision anomalies | Real GPS accuracy fluctuates; spoofed accuracy is artificially stable | Moderate — the toolkit adds jitter |
| Device attestation failure | Play Integrity API detects repackaged APKs | Hard to evade without rooted device |

The tables above tell you what to look for. This section tells you how to build it. Each detection includes working code, deployment notes, and an honest assessment of its limitations.

This is the first check that should execute at app startup. Every repackaged APK carries a different signing certificate — the attacker does not have your production keystore, so they must re-sign with their own key. Detecting this is straightforward and highly reliable.

```java
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import java.security.MessageDigest;

public class SignatureVerifier {

    // Your production signing certificate SHA-256 hash.
    // Obtain this from: keytool -printcert -jarfile your-release.apk
    // Or compute at build time and inject via BuildConfig.
    private static final String EXPECTED_CERT_HASH =
            "a1b2c3d4e5f6..."; // Replace with your actual hash

    public static boolean isSignatureValid(android.content.Context context) {
        try {
            // GET_SIGNING_CERTIFICATES and PackageInfo.signingInfo
            // require API 28+; fall back to GET_SIGNATURES on older releases.
            PackageInfo info = context.getPackageManager().getPackageInfo(
                    context.getPackageName(),
                    PackageManager.GET_SIGNING_CERTIFICATES
            );
            // Get the first signer (primary signing certificate)
            byte[] cert = info.signingInfo
                    .getApkContentsSigners()[0]
                    .toByteArray();
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(cert);
            StringBuilder hex = new StringBuilder();
            for (byte b : hash) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString().equals(EXPECTED_CERT_HASH);
        } catch (Exception e) {
            // If we cannot verify, assume tampered
            return false;
        }
    }

    public static void enforce(android.content.Context context) {
        if (!isSignatureValid(context)) {
            // Option 1: Hard block
            // System.exit(0);
            // Option 2: Silent reporting (preferred for intelligence)
            reportTamperEvent(context, "signature_mismatch");
            // Option 3: Degrade functionality silently
            // Let the app run but disable sensitive flows
        }
    }

    private static void reportTamperEvent(
            android.content.Context context, String reason) {
        // Send to your security telemetry endpoint
        // Include: device fingerprint, timestamp, reason, app version
    }
}
```

Deployment notes: Do not place this check in a single, easily identifiable method. A skilled attacker will search for getPackageInfo calls and nop them. Distribute the check across multiple classes. Call it from Application.onCreate(), from a ContentProvider, from a BroadcastReceiver that fires on BOOT_COMPLETED, and from within critical business flows. Use different hash algorithms in each location (SHA-256 in one, SHA-512 in another). Make the expected hash a server-fetched value, not a hardcoded string — if the attacker patches out the hardcoded comparison, a server-side check still catches them.

What it catches: Every repackaging-based attack. The attacker cannot produce a valid signature without your keystore.

What it misses: Runtime-only attacks that do not modify the APK (Frida, Xposed on rooted devices). Also missed if the attacker patches the verification method itself.

Beyond the signing certificate, you can verify that the DEX files themselves have not been modified. This catches bytecode injection even if an attacker finds a way to re-sign with a certificate that passes your signature check (unlikely, but defense-in-depth means not trusting a single layer).

```java
import java.io.InputStream;
import java.security.MessageDigest;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class DexIntegrityChecker {

    /**
     * Compute SHA-256 hashes of all classes*.dex files in the APK.
     * Compare against expected hashes fetched from your server.
     */
    public static java.util.Map<String, String> computeDexHashes(
            android.content.Context context) {
        java.util.Map<String, String> hashes = new java.util.TreeMap<>();
        try {
            String apkPath = context.getPackageCodePath();
            ZipFile apk = new ZipFile(apkPath);
            java.util.Enumeration<? extends ZipEntry> entries = apk.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                String name = entry.getName();
                // Only hash DEX files
                if (name.matches("classes\\d*\\.dex")) {
                    InputStream is = apk.getInputStream(entry);
                    MessageDigest digest =
                            MessageDigest.getInstance("SHA-256");
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = is.read(buffer)) != -1) {
                        digest.update(buffer, 0, read);
                    }
                    is.close();
                    StringBuilder hex = new StringBuilder();
                    for (byte b : digest.digest()) {
                        hex.append(String.format("%02x", b));
                    }
                    hashes.put(name, hex.toString());
                }
            }
            apk.close();
        } catch (Exception e) {
            hashes.put("error", e.getMessage());
        }
        return hashes;
    }

    /**
     * Verify against server-provided expected hashes.
     * Returns list of mismatched DEX files, empty if all match.
     */
    public static java.util.List<String> verify(
            android.content.Context context,
            java.util.Map<String, String> expectedHashes) {
        java.util.List<String> mismatches = new java.util.ArrayList<>();
        java.util.Map<String, String> actual = computeDexHashes(context);
        // Check for unexpected DEX files (injection adds new ones)
        for (String dexName : actual.keySet()) {
            if (!expectedHashes.containsKey(dexName)) {
                mismatches.add("unexpected_dex:" + dexName);
            }
        }
        // Check for hash mismatches on known DEX files
        for (java.util.Map.Entry<String, String> expected
                : expectedHashes.entrySet()) {
            String actualHash = actual.get(expected.getKey());
            if (actualHash == null) {
                mismatches.add("missing_dex:" + expected.getKey());
            } else if (!actualHash.equals(expected.getValue())) {
                mismatches.add("modified_dex:" + expected.getKey());
            }
        }
        return mismatches;
    }
}
```

Deployment notes: The expected hash map must come from your server, not from the APK itself. If you embed the expected hashes inside the APK, the attacker simply patches those values after injecting their code. The server endpoint should accept the app version and return the correct hashes for that build. Run this check asynchronously — hashing multiple DEX files takes time, and you do not want to block the UI thread.
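The asynchronous pattern can be sketched framework-free in plain Java (the `AsyncIntegrityRunner` name and `verifyAsync` helper are ours, not part of any SDK): submit the hash computation to a background executor and compare against the server-supplied map off the UI thread.

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class AsyncIntegrityRunner {
    /**
     * Run a slow integrity computation off the caller's thread.
     * The caller polls the Future (or attaches a completion
     * callback) instead of blocking the UI.
     */
    public static Future<Boolean> verifyAsync(
            ExecutorService pool,
            Callable<Map<String, String>> computeHashes,
            Map<String, String> expectedHashes) {
        return pool.submit(() -> computeHashes.call().equals(expectedHashes));
    }
}
```

On Android the same idea applies with an executor owned by your Application, reporting any mismatch to telemetry from the worker thread.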

What it catches: Any DEX modification, including added DEX files (the injection toolkit adds classes7.dex or similar), modified classes, and merged bytecode.

What it misses: Nothing at the DEX level, if the expected hashes come from a trusted server. Can be bypassed if the attacker patches the hash computation itself or intercepts the server response (which is why you also need certificate pinning).

The injection runtime registers ActivityLifecycleCallbacks to hook into every Activity’s creation and resumption. This is detectable by enumerating the registered callbacks using reflection.

```kotlin
import android.app.Application
import java.lang.reflect.Field

object LifecycleCallbackAuditor {

    // Package prefixes that are expected in your app
    private val ALLOWED_PREFIXES = listOf(
        "com.yourapp.",
        "com.google.",
        "androidx.",
        "com.android."
        // Add your SDK packages here
    )

    /**
     * Enumerate all registered ActivityLifecycleCallbacks.
     * Flag any that do not match known/expected packages.
     */
    fun auditCallbacks(app: Application): List<String> {
        val suspicious = mutableListOf<String>()
        try {
            // ActivityLifecycleCallbacks are stored in
            // Application.mActivityLifecycleCallbacks
            val field: Field = Application::class.java
                .getDeclaredField("mActivityLifecycleCallbacks")
            field.isAccessible = true
            @Suppress("UNCHECKED_CAST")
            val callbacks = field.get(app)
                as? ArrayList<Application.ActivityLifecycleCallbacks>
                ?: return listOf("reflection_failed")
            for (callback in callbacks) {
                val className = callback.javaClass.name
                val isAllowed = ALLOWED_PREFIXES.any {
                    className.startsWith(it)
                }
                if (!isAllowed) {
                    suspicious.add(className)
                }
            }
        } catch (e: Exception) {
            suspicious.add("audit_error: ${e.message}")
        }
        return suspicious
    }

    /**
     * Run the audit and report findings.
     * Call from Application.onCreate() AFTER your own
     * initialization is complete.
     */
    fun enforceAndReport(app: Application) {
        val suspicious = auditCallbacks(app)
        if (suspicious.isNotEmpty()) {
            // Report to security telemetry
            // Include: callback class names, timestamp, device info
            reportSuspiciousCallbacks(app, suspicious)
        }
    }

    private fun reportSuspiciousCallbacks(
        app: Application,
        callbacks: List<String>
    ) {
        // Send to your security telemetry endpoint
    }
}
```

Deployment notes: Call this audit after your own initialization is complete — you need your legitimate callbacks to be registered first so you can establish the baseline. The ALLOWED_PREFIXES list must be maintained as you add SDKs. Run the audit on a slight delay (e.g., Handler.postDelayed with 2-3 seconds) to catch callbacks that register lazily.

What it catches: Any injected lifecycle callback, including the one used by the injection toolkit to hook Activity creation.

What it misses: Attacks that do not use lifecycle callbacks (e.g., direct method hooking via Xposed or Frida). Also, an attacker who knows your allowed prefixes could rename their callback package to match.

This is the behavioral detection that targets the core of the frame injection attack. Real camera feeds have high frame-to-frame variance from lighting changes, micro-movements, and sensor noise. Injected sequences — especially short loops — have measurably lower entropy.

```
// Pseudocode: Frame Entropy Analyzer
// Runs on captured frames during liveness session

class FrameEntropyAnalyzer:
    WINDOW_SIZE = 30          // Analyze rolling windows of 30 frames
    MIN_HAMMING_DISTANCE = 4  // Minimum expected pHash distance
    MAX_EXACT_REPEATS = 2     // Max identical frames before flagging
    MIN_PIXEL_VARIANCE = 150  // Minimum variance across frame pixels

    buffer = CircularBuffer(WINDOW_SIZE)
    exact_repeat_count = 0

    function onFrame(frame: Bitmap):
        // Step 1: Compute perceptual hash (pHash)
        phash = computePerceptualHash(frame)

        // Step 2: Compute raw pixel variance
        // Convert to grayscale, compute variance of pixel values
        gray = toGrayscale(frame)
        pixel_variance = variance(gray.pixels)

        // Step 3: Compare against previous frame
        if buffer.isNotEmpty():
            prev_phash = buffer.last().phash
            hamming = hammingDistance(phash, prev_phash)
            if hamming == 0:
                exact_repeat_count += 1
            else:
                exact_repeat_count = 0

            // Flag: too many exact repeats
            if exact_repeat_count > MAX_EXACT_REPEATS:
                flag("frame_repetition", count=exact_repeat_count)

            // Flag: unnaturally low variation
            if hamming < MIN_HAMMING_DISTANCE:
                flag("low_frame_variation", distance=hamming)

        // Step 4: Single-frame anomaly
        if pixel_variance < MIN_PIXEL_VARIANCE:
            flag("low_pixel_variance", variance=pixel_variance)

        // Step 5: Window-level analysis
        if buffer.isFull():
            distances = pairwiseHammingDistances(buffer)
            avg_distance = mean(distances)
            // Real camera: avg distance typically 8-20
            // Injected loop: avg distance typically 2-6
            if avg_distance < MIN_HAMMING_DISTANCE:
                flag("injection_suspected", avg_distance=avg_distance)

            // Check for periodicity (loop detection)
            autocorrelation = computeAutocorrelation(buffer.phashes)
            if autocorrelation.hasPeak(period < WINDOW_SIZE / 2):
                flag("periodic_frames", period=autocorrelation.peakPeriod)

        buffer.add(FrameRecord(phash, pixel_variance))

    function computePerceptualHash(frame: Bitmap):
        // Resize to 32x32, convert to grayscale
        // Apply DCT, take top-left 8x8 coefficients
        // Threshold at median to produce 64-bit hash
        resized = resize(frame, 32, 32)
        gray = toGrayscale(resized)
        dct = discreteCosineTransform(gray)
        coefficients = dct[0:8][0:8]
        median = median(coefficients)
        hash = 0
        for coeff in coefficients:
            hash = (hash << 1) | (1 if coeff > median else 0)
        return hash
```
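To make the hashing step concrete, here is a framework-free Java sketch of the same pHash algorithm (operating on a 32x32 grayscale array instead of an Android Bitmap; class and method names are ours, and the naive O(N^4) DCT is acceptable for a 32x32 input but a production build would use a fast DCT):

```java
import java.util.Arrays;

public class PerceptualHash {

    // Naive 2D DCT-II over an N x N grayscale block.
    static double[][] dct2d(double[][] img) {
        int n = img.length;
        double[][] out = new double[n][n];
        for (int u = 0; u < n; u++) {
            for (int v = 0; v < n; v++) {
                double sum = 0;
                for (int x = 0; x < n; x++) {
                    for (int y = 0; y < n; y++) {
                        sum += img[x][y]
                            * Math.cos((2 * x + 1) * u * Math.PI / (2.0 * n))
                            * Math.cos((2 * y + 1) * v * Math.PI / (2.0 * n));
                    }
                }
                out[u][v] = sum;
            }
        }
        return out;
    }

    // 64-bit pHash: DCT, keep top-left 8x8, threshold at median.
    public static long phash(double[][] gray32) {
        double[][] dct = dct2d(gray32);
        double[] coeffs = new double[64];
        for (int i = 0; i < 8; i++)
            for (int j = 0; j < 8; j++)
                coeffs[i * 8 + j] = dct[i][j];
        double[] sorted = coeffs.clone();
        Arrays.sort(sorted);
        double median = (sorted[31] + sorted[32]) / 2.0;
        long hash = 0;
        for (double c : coeffs) hash = (hash << 1) | (c > median ? 1 : 0);
        return hash;
    }

    public static int hammingDistance(long a, long b) {
        return Long.bitCount(a ^ b);
    }
}
```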

Deployment notes: Run this analysis on the server, not the client. If you run it client-side, the attacker patches it out. The client captures frames and sends them (encrypted, with session nonces) to the server, which runs the entropy analysis as part of its liveness decision. Client-side analysis is useful as a secondary check but should never be the sole detection.

What it catches: Short-loop frame injection, static image presentation, and low-quality video feeds with minimal natural variation.

What it misses: High-quality, long-form video sequences with natural variation. A 60-second video of a real person recorded with natural head movement will produce entropy values indistinguishable from live camera feeds. This is why entropy analysis must be paired with challenge-response — the server needs to verify that the motion in the frames matches a specific, unpredictable prompt.

Sensor spoofing passes basic physics checks if the attacker uses a cross-consistent model (as taught in Chapter 9). Detection requires looking for subtler statistical anomalies that synthetic data cannot easily replicate.

```kotlin
import android.hardware.SensorEvent
import kotlin.math.abs
import kotlin.math.sqrt

class SensorPlausibilityValidator {

    companion object {
        const val GRAVITY = 9.81f
        const val GRAVITY_TOLERANCE = 0.5f // m/s^2
        const val MIN_NOISE_FLOOR = 0.005f // m/s^2
        const val SAMPLE_WINDOW = 100
        const val MAX_TIMESTAMP_GAP_NS = 500_000_000L // 500ms
    }

    private val accelHistory = mutableListOf<FloatArray>()
    private val timestamps = mutableListOf<Long>()

    /**
     * Feed accelerometer events. Returns a list of anomalies
     * detected in the current window.
     */
    fun onAccelerometerEvent(event: SensorEvent): List<String> {
        val anomalies = mutableListOf<String>()
        val values = event.values.copyOf()
        accelHistory.add(values)
        timestamps.add(event.timestamp)

        // Check 1: Gravity magnitude
        val magnitude = sqrt(
            values[0] * values[0] +
            values[1] * values[1] +
            values[2] * values[2]
        )
        if (abs(magnitude - GRAVITY) > GRAVITY_TOLERANCE) {
            anomalies.add("gravity_anomaly: ${magnitude}m/s^2")
        }

        // Check 2: Timestamp monotonicity
        if (timestamps.size >= 2) {
            val prev = timestamps[timestamps.size - 2]
            val curr = timestamps[timestamps.size - 1]
            if (curr <= prev) {
                anomalies.add("timestamp_non_monotonic: $prev >= $curr")
            }
            // Also check for suspiciously large gaps
            if (curr - prev > MAX_TIMESTAMP_GAP_NS) {
                anomalies.add("timestamp_gap: ${(curr - prev) / 1_000_000}ms")
            }
        }

        // Check 3: Noise floor analysis (window-based)
        if (accelHistory.size >= SAMPLE_WINDOW) {
            val window = accelHistory.takeLast(SAMPLE_WINDOW)
            for (axis in 0..2) {
                val axisValues = window.map { it[axis] }
                val mean = axisValues.average().toFloat()
                val variance = axisValues.map {
                    (it - mean) * (it - mean)
                }.average().toFloat()
                // Real sensors always have noise.
                // Zero variance = synthetic data.
                if (variance < MIN_NOISE_FLOOR * MIN_NOISE_FLOOR) {
                    val axisName = arrayOf("X", "Y", "Z")[axis]
                    anomalies.add("zero_noise_${axisName}: var=$variance")
                }
            }
            // Trim history to prevent unbounded growth
            if (accelHistory.size > SAMPLE_WINDOW * 2) {
                val excess = accelHistory.size - SAMPLE_WINDOW
                accelHistory.subList(0, excess).clear()
                timestamps.subList(0, excess).clear()
            }
        }
        return anomalies
    }

    /**
     * Cross-sensor consistency check.
     * Call with paired accelerometer and gyroscope readings.
     */
    fun checkCrossSensorConsistency(
        accelEvent: SensorEvent,
        gyroEvent: SensorEvent
    ): List<String> {
        val anomalies = mutableListOf<String>()
        // If gyroscope shows rotation but accelerometer
        // gravity vector hasn't changed, that's suspicious.
        // (Simplified check - production would use quaternions)
        val gyroMagnitude = sqrt(
            gyroEvent.values[0] * gyroEvent.values[0] +
            gyroEvent.values[1] * gyroEvent.values[1] +
            gyroEvent.values[2] * gyroEvent.values[2]
        )
        // Timestamp alignment: sensor events should arrive
        // within a reasonable window of each other
        val timeDelta = abs(accelEvent.timestamp - gyroEvent.timestamp)
        if (timeDelta > 100_000_000L) { // 100ms
            anomalies.add("sensor_desync: ${timeDelta / 1_000_000}ms")
        }
        return anomalies
    }
}
```

Deployment notes: Sensor validation must run alongside the liveness capture session. Collect sensor data for the entire duration of the face capture and analyze it as a batch. The noise floor check is the most reliable — real MEMS accelerometers always produce noise. A reading stream with zero variance is physically impossible on real hardware and is a strong signal of synthetic data.

What it catches: Naive spoofing that uses constant values, spoofing with incorrect gravity magnitude, spoofing with non-monotonic timestamps, and spoofing that fails to add realistic noise.

What it misses: Sophisticated spoofing that uses a cross-consistent physics model with added Gaussian noise (as taught in Chapter 9). The toolkit’s sensor injection passes all of these checks because it derives all sensors from base values using correct physics and adds configurable noise. Defeating this requires server-side behavioral analysis or hardware-backed sensor attestation.

Device attestation provides a signal that the APK has not been tampered with and the device is not rooted or running in an emulator. It is not a perfect defense, but it significantly raises the bar.

```kotlin
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

class DeviceAttestationManager(
    private val context: android.content.Context
) {
    /**
     * Request an integrity token from Play Integrity API.
     * The nonce MUST come from your server -- never generate
     * it on the client.
     */
    fun requestAttestation(
        serverNonce: String,
        onResult: (AttestationResult) -> Unit
    ) {
        val integrityManager =
            IntegrityManagerFactory.create(context)
        val request = IntegrityTokenRequest.builder()
            .setNonce(serverNonce)
            .build()
        integrityManager.requestIntegrityToken(request)
            .addOnSuccessListener { response ->
                // CRITICAL: Send the token to YOUR server
                // for verification. Never verify on-device.
                val token = response.token()
                sendTokenToServer(token) { serverVerdict ->
                    onResult(serverVerdict)
                }
            }
            .addOnFailureListener { exception ->
                // API unavailable or error
                onResult(AttestationResult(
                    passed = false,
                    reason = "integrity_api_error: " +
                        "${exception.message}"
                ))
            }
    }

    private fun sendTokenToServer(
        token: String,
        callback: (AttestationResult) -> Unit
    ) {
        // POST the token to your backend.
        // Your server calls Google's API to decrypt and verify.
        //
        // Server checks these verdicts:
        //   deviceRecognitionVerdict:
        //     MEETS_DEVICE_INTEGRITY - genuine device
        //     MEETS_BASIC_INTEGRITY - may be rooted
        //   appRecognitionVerdict:
        //     PLAY_RECOGNIZED - installed from Play Store
        //     UNRECOGNIZED_VERSION - sideloaded or modified
        //   accountDetails:
        //     LICENSED - user has Play Store license
        //
        // Your server decides the policy:
        //   - Block if UNRECOGNIZED_VERSION (repackaged APK)
        //   - Warn if !MEETS_DEVICE_INTEGRITY (rooted)
        //   - Allow if all checks pass
    }

    data class AttestationResult(
        val passed: Boolean,
        val reason: String = ""
    )
}
```

Deployment notes: The nonce must be generated server-side and must be single-use. If you generate the nonce on the client, an attacker can replay a previously captured valid response. Token verification must happen on your server by calling Google’s decryption API — never verify the token on-device. The Play Integrity API has rate limits and quota; plan for this in high-volume flows.

What it catches: Repackaged APKs (they will not be recognized as Play Store installs), rooted devices (may fail MEETS_DEVICE_INTEGRITY), and emulators.

What it misses: Rooted devices using sophisticated hiding frameworks that can fool the integrity check. Also ineffective if the attacker patches out the attestation call entirely — which is why this check must be a server-side gate, not a client-side decision.


Not all detections are equal. Some are cheap to implement and highly reliable. Others require significant engineering effort and produce false positives that damage user experience. The following matrix ranks each detection across the dimensions that matter for deployment decisions.

| Detection Method | Implementation Effort | Detection Reliability | False Positive Risk | Evasion Difficulty | Priority |
|---|---|---|---|---|---|
| APK Signature Verification | Low | High | Low | Hard | Critical |
| DEX Integrity Check | Medium | High | Medium | Hard | Critical |
| Play Integrity API | Medium | Medium | Medium | Medium | High |
| Certificate Pinning | Low | High | Low | Medium | High |
| Lifecycle Callback Enumeration | Medium | Medium | Low | Medium | High |
| Frame Entropy Analysis | High | Medium | Medium | Medium | Medium |
| Sensor Plausibility Validation | High | Low | High | Low | Medium |
| Payload Directory Detection | Low | Low | Low | Easy | Low |
| Log Tag Monitoring | Low | Low | Low | Easy | Low |

How to read this matrix: Start with the Critical and High priority items. APK signature verification and DEX integrity checks are your best return on investment — they are relatively cheap to implement, highly reliable when done correctly, and fundamentally hard for attackers to bypass because the repackaging approach requires re-signing the APK. Certificate pinning protects the communication channel between your SDK and your server, preventing the attacker from intercepting or modifying API calls.

The medium-priority items — frame entropy and sensor validation — require more engineering effort and careful tuning to avoid false positives. They provide valuable defense-in-depth but should not be your first investments. The low-priority items detect default toolkit configurations and are trivially bypassed by renaming paths or log tags; they catch only the laziest attackers.

The most important takeaway: No single detection is sufficient. An attacker who understands your defenses will target the weakest link. The value of this matrix is in identifying which layers to build first and which to add as your security posture matures.


Every detection method can fire incorrectly. A false positive in a security check blocks a legitimate user from completing identity verification — which means lost customers, support tickets, and potential regulatory issues if verification is required for account access. Understanding false positive triggers is as important as understanding detection mechanisms.

Legitimate triggers: Enterprise Mobile Device Management (MDM) platforms sometimes re-sign apps with the organization’s own certificate before distributing to managed devices. Some app stores outside of Google Play re-sign APKs as part of their distribution process. If your app is distributed through multiple channels, each channel may produce a different signature.

Mitigation: Maintain a whitelist of known-good signing certificates, not just the single production certificate. Fetch the whitelist from your server so it can be updated without an app release. For enterprise distribution, document the expected MDM certificates and include them in the verification.

Legitimate triggers: Google Play’s split APK delivery (App Bundles) generates device-specific APKs with different DEX layouts than your build system produces. Dynamic feature modules loaded at runtime add DEX files that were not present at install time. Some performance optimization tools (R8, D8 with different configurations) produce different bytecode for the same source.

Mitigation: Generate expected hashes per distribution channel and device configuration. If using App Bundles, compute hashes for the base APK only, or use the Play Integrity API’s appRecognitionVerdict instead of raw DEX hashing. Exclude dynamic feature module DEX files from the integrity check, or maintain a separate hash list for each feature module.

Legitimate triggers: Developer devices are frequently rooted for debugging. Android Studio’s emulator is a standard development tool. Accessibility services (used by users with disabilities) sometimes trigger heuristic-based root detection because they require elevated permissions. Some banking and security apps also flag USB debugging as suspicious, which catches every developer.

Mitigation: Never hard-block on root or emulator detection alone. Use it as one signal in a risk-scoring system. Provide a path for legitimate developers to test your app (e.g., a debug build flag that relaxes attestation requirements). For accessibility services, check specifically for known accessibility packages rather than using broad heuristics that flag any accessibility service as suspicious.

Legitimate triggers: Poor camera conditions produce low-entropy frames — low light environments, cameras pointing at a uniform background, devices with dirty or damaged camera lenses. A user sitting still in a dimly lit room, looking straight at the camera without moving, produces genuinely low frame-to-frame variation. Older devices with noisy cameras can also produce frames that trigger pixel variance thresholds in unexpected ways (very high noise or very low noise depending on the sensor).

Mitigation: Calibrate entropy thresholds using real-world data from your user population, not from lab conditions. Use entropy as a risk signal, not a hard block. Combine it with other signals — low entropy alone triggers a softer challenge (e.g., “please move your head left”), not an outright rejection. Account for device quality: flagship phones with excellent cameras produce different entropy profiles than budget devices.
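One way to follow the calibration advice, sketched in plain Java (a hypothetical helper, not from any SDK): collect the average inter-frame pHash distances of known-genuine sessions from your real user population, then set the flagging threshold at a low percentile so only a bounded fraction of legitimate users ever see the softer challenge.

```java
import java.util.Arrays;

public class EntropyThresholdCalibrator {
    /**
     * Threshold at the p-th percentile (0.0-1.0) of genuine-session
     * average pHash distances. Sessions scoring below it trigger a
     * soft challenge, not a rejection.
     */
    public static double thresholdAtPercentile(double[] genuineAvgDistances, double p) {
        double[] sorted = genuineAvgDistances.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.floor(p * (sorted.length - 1));
        return sorted[idx];
    }
}
```

Recalibrate per device tier if flagship and budget cameras show clearly different distance distributions.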

Legitimate triggers: A phone lying flat on a table produces zero linear acceleration and zero angular velocity — which looks identical to spoofed static data. A device in a moving vehicle produces accelerometer readings that do not correlate with any visual motion. Users who hold their phone very still (braced against a surface, clamped in a mount) produce artificially stable sensor readings that fall below noise floor thresholds.

Mitigation: Do not flag zero acceleration as suspicious — it is the most common resting state of a real device. Focus on noise floor analysis during active periods (when the user is supposed to be moving for a liveness challenge). Use sensor data as a cross-check against visual motion, not as an independent detection. If the camera shows head rotation and the gyroscope confirms rotation, that is consistent regardless of the absolute values.


1. Server-Side Liveness Verification (Critical)


Implementation effort: 2-4 weeks for new implementation. If using a commercial liveness SDK, check whether your vendor supports server-side challenge mode — many do, but it is often not enabled by default.

Impact: Defeats frame injection entirely if implemented correctly.

Move the liveness decision to the server. The client SDK captures frames and sends them (encrypted, with a session nonce) to a server that performs the actual analysis. The server generates unique, unpredictable challenges and validates responses using its own ML models.

Why it works: The attacker controls the client. They cannot control the server. If the server generates a random challenge (“show 3 fingers, then 1 finger, then make a fist”) and validates the response server-side, pre-recorded frame sequences cannot match the challenge.

```
Client                       Server
  |                            |
  |-- session_start ---------->|
  |<-- challenge(nonce) -------|
  |-- frames(nonce, data) ---->|
  |<-- verdict(pass/fail) -----|
  |                            |
```

Why the nonce matters: The nonce is a one-time random value generated by the server for each session. If an attacker captures and replays a previous session’s frames, the nonce won’t match, and the server rejects the attempt. This is what makes pre-recorded frame injection ineffective against server-side verification.
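A minimal server-side sketch of the single-use rule, in plain Java (class name ours; in-memory only, so a production service would persist nonces in a shared store such as Redis to reject replays across server instances):

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionNonceStore {
    private final SecureRandom random = new SecureRandom();
    private final Map<String, Long> issued = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public SessionNonceStore(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    // Issue a fresh random nonce at session start.
    public String issue() {
        byte[] raw = new byte[32];
        random.nextBytes(raw);
        String nonce = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        issued.put(nonce, System.currentTimeMillis());
        return nonce;
    }

    // Consume exactly once: a second use (replay) or an expired
    // nonce fails, because remove() returns null the second time.
    public boolean consume(String nonce) {
        Long issuedAt = issued.remove(nonce);
        return issuedAt != null
            && System.currentTimeMillis() - issuedAt <= ttlMillis;
    }
}
```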

2. APK Signature Verification (Critical)

Implementation effort: 1-2 weeks. Android's Play Integrity API handles most of the work.

Impact: Detects repackaging before any hooks can execute.

At app startup, compute the APK’s signature hash and compare against a hardcoded (or server-fetched) expected value. See the implementation in the Detection Implementations section above.

Hardening: Don’t put this check in a single method that can be nop’d. Distribute checks across multiple classes, use different hash algorithms, and validate at different lifecycle points. Make the expected hash a server-side value, not a hardcoded string.

Limitation: Chapter 15 teaches techniques to bypass this. Defense-in-depth is required.

3. Certificate Pinning on SDK API Calls (High)


Impact: Prevents interception of SDK-to-server communication.

If your liveness SDK communicates with a backend, pin the TLS certificate:

```kotlin
val certificatePinner = CertificatePinner.Builder()
    .add("api.yoursdk.com", "sha256/AAAA...=")
    .build()

val client = OkHttpClient.Builder()
    .certificatePinner(certificatePinner)
    .build()
```

Pin to your intermediate CA certificate, not the leaf certificate. Leaf certificates rotate frequently, and you do not want to ship an app update every time your certificate renews. Include backup pins for your disaster recovery certificate. Implement pin failure reporting so you know when pinning is triggered in the wild — it could be an attack, or it could be a corporate proxy.

4. Frame Sequence Entropy Analysis (Medium)


Impact: Detects injected frame sequences with statistical analysis.

Real camera feeds have high frame-to-frame variance — lighting changes, micro-movements, sensor noise. Injected sequences (especially short loops) have measurably lower entropy.

Detection approach:

  1. Compute perceptual hash (pHash) of each frame
  2. Calculate Hamming distance between consecutive frames
  3. Flag sessions where the average distance is below threshold
  4. Flag sessions where exact frame repeats are detected (Hamming distance = 0)

See the full pseudocode implementation in the Detection Implementations section above.
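As a self-contained sketch of steps 2 through 4, the following assumes each frame has already been reduced to a 64-bit perceptual hash (step 1) by an upstream pHash implementation; the `minAvgDistance` threshold of 4 bits is a placeholder you would tune against real session data:

```kotlin
// Hamming distance between two 64-bit perceptual hashes.
fun hamming(a: Long, b: Long): Int = java.lang.Long.bitCount(a xor b)

data class EntropyVerdict(val avgDistance: Double, val exactRepeats: Int, val flagged: Boolean)

// Flag sessions whose consecutive-frame distances are suspiciously low, or
// that contain exact frame repeats (distance 0), both symptoms of injection.
fun analyzeSession(hashes: List<Long>, minAvgDistance: Double = 4.0): EntropyVerdict {
    val distances = hashes.zipWithNext { a, b -> hamming(a, b) }
    val avg = if (distances.isEmpty()) 0.0 else distances.average()
    val repeats = distances.count { it == 0 }
    return EntropyVerdict(avg, repeats, avg < minAvgDistance || repeats > 0)
}
```

A short injected loop produces identical or near-identical hashes and is flagged immediately; a live feed's lighting drift and micro-movements keep the average distance well above any sane threshold.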

5. Sensor Plausibility Validation (Medium)


Impact: Catches naive sensor spoofing (but not this toolkit’s cross-consistent approach).

Basic checks:

  • Gravity magnitude: sqrt(ax^2 + ay^2 + az^2) should be within 0.5 m/s^2 of 9.81
  • Sensor timestamp monotonicity: timestamps should strictly increase
  • Sensor noise floor: perfectly stable readings (zero variance over 100+ samples) indicate spoofing
  • Cross-sensor consistency: rotation vector should match accelerometer orientation
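The first three checks above can be sketched in a few lines. The `Sample` type and function names are illustrative; the 0.5 m/s^2 tolerance and 100-sample window come from the bullets above:

```kotlin
import kotlin.math.abs
import kotlin.math.sqrt

// Illustrative accelerometer sample: m/s^2 components plus a nanosecond timestamp.
data class Sample(val ax: Double, val ay: Double, val az: Double, val timestampNs: Long)

// Gravity magnitude for a roughly stationary device should sit near 9.81 m/s^2.
fun gravityPlausible(s: Sample): Boolean =
    abs(sqrt(s.ax * s.ax + s.ay * s.ay + s.az * s.az) - 9.81) <= 0.5

// Sensor timestamps must strictly increase.
fun monotonic(samples: List<Sample>): Boolean =
    samples.zipWithNext().all { (a, b) -> b.timestampNs > a.timestampNs }

// Real sensors are never perfectly quiet: zero variance over a long window
// means the hardware noise floor is missing, i.e. the values are fabricated.
fun hasNoiseFloor(values: List<Double>): Boolean {
    if (values.size < 100) return true // not enough data to judge
    val mean = values.average()
    val variance = values.sumOf { (it - mean) * (it - mean) } / values.size
    return variance > 0.0
}
```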

Note: A well-built injection toolkit passes all of these checks. The cross-sensor consistency model computes derived sensors from base sensors using correct physics. Advanced detection requires server-side analysis or hardware-backed attestation.

6. Play Integrity Attestation (High)

Implementation effort: 1-2 weeks. Android’s Play Integrity API handles most of the work.

Impact: Detects repackaged APKs and rooted/compromised devices.

Use the Play Integrity API (successor to SafetyNet). See the full Kotlin implementation in the Detection Implementations section above.

Limitation: Can be bypassed on rooted devices with module frameworks. Should be combined with other defenses.

7. RASP — Runtime Application Self-Protection (Critical)


Implementation effort: Low if using a commercial SDK (drop-in integration, 1-2 days); high if building custom (months of engineering).

Impact: Raises the per-check bypass cost from seconds to hours, transforming individual, findable checks into a distributed, obfuscated, native-backed defense system that resists systematic neutralization.

RASP is not a single check — it is an SDK that embeds into your app at build time and actively monitors for tampering, repackaging, debugging, and environmental anomalies at runtime. Unlike the individual defenses above (each of which an attacker can find, understand, and nop in a single smali edit), RASP bundles dozens of techniques into an obfuscated package where no single method removal disables the protection. The techniques include:

  • Resource hashing and distributed signature verification — checks scattered across dozens of obfuscated call sites
  • Dynamic check spraying — 50-200 integrity checks injected across the entire class graph at build time
  • Decoy control flows — bogus branches, dead-code paths, and opaque predicates that waste analyst time
  • Native (.so) layer enforcement — core integrity engine in compiled C/C++ with OLLVM-level obfuscation
  • Integrity-coupled processing — Play Integrity verdicts silently degrade processing on tampered builds (no crash, no log)
  • Runtime string encryption — all sensitive strings encrypted in DEX, decrypted with keys derived from signing certificate
  • Anti-cloning and device binding — hardware-backed Keystore attestation, installer verification, clone environment detection, and composite integrity tokens validated server-side
  • Anti-debug and environment detection — ptrace self-attach, timing checks, /proc/self/status monitoring
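As one concrete illustration of the anti-debug bullet, here is a minimal TracerPid check. It assumes the Linux /proc layout that Android uses (TracerPid is 0 when nothing is attached, and nonzero when a debugger or ptrace-based tool is tracing the process); the function names are illustrative:

```kotlin
import java.io.File

// Parse the TracerPid field from /proc/<pid>/status content.
// Returns -1 if the field is missing or malformed.
fun parseTracerPid(statusText: String): Int =
    statusText.lineSequence()
        .firstOrNull { it.startsWith("TracerPid:") }
        ?.substringAfter(':')?.trim()?.toIntOrNull() ?: -1

// A nonzero TracerPid means some process is tracing us.
fun beingTraced(): Boolean =
    parseTracerPid(File("/proc/self/status").readText()) > 0
```

A RASP SDK would run this check (and its timing and ptrace self-attach siblings) repeatedly from obfuscated native code, not once from an easily patched Kotlin method.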

The correct framing: RASP does not make bypass impossible. It makes bypass expensive enough that the attacker’s cost exceeds the value of the fraud. Combined with server-side liveness (Defense 1) and Play Integrity (Defense 6), RASP creates a three-layer system where the attacker must simultaneously defeat client integrity, server challenges, and device attestation.

Full deep dive: See RASP — Runtime Application Self-Protection for the complete technical breakdown of each technique, native obfuscation (OLLVM), integrity-coupled silent failure, and limitations.


When including these findings in a penetration test report, structure each finding as:

## Finding: [Title]
**Severity:** Critical / High / Medium / Low
**CVSS:** [score]
**Status:** Exploited / Confirmed / Theoretical
### Description
What the vulnerability is, in terms the development team understands.
### Impact
What an attacker can do with this vulnerability.
- Business impact (fraud, account takeover, regulatory violation)
- Technical impact (data pipeline compromise, verification bypass)
### Proof of Concept
Step-by-step reproduction with evidence:
1. [Step] -- [screenshot/log excerpt]
2. [Step] -- [screenshot/log excerpt]
### Recommendation
Specific, actionable fix with priority and effort estimate.
- **Immediate:** [quick fix]
- **Short-term:** [proper fix]
- **Long-term:** [architectural improvement]
### References
- OWASP Mobile Top 10: M1 (Improper Platform Usage)
- CWE-295 (Improper Certificate Validation)
- CWE-693 (Protection Mechanism Failure)

Example finding (filled in):

Finding: Camera Frame Injection Bypasses Liveness Verification
Severity: Critical
Description: The application’s liveness detection SDK processes camera frames entirely client-side. By patching the APK and injecting pre-recorded face images via the poc_frames directory, the liveness check was passed without a live person present.
Impact: An attacker can complete identity verification using pre-recorded face images, enabling account takeover or fraudulent onboarding.
Proof: 47 frames delivered, 45 consumed (96% accept rate). See delivery.log excerpt and step1_face.png screenshot.
Recommendation: Implement server-side liveness verification with challenge-response nonces (Defense 1 above).


This chapter presented detection at three levels — APK integrity, runtime behavior, and statistical analysis — along with working implementations for each. Here is what matters most:

For defenders: Start with the Critical items: signature verification, DEX integrity, and RASP integration. These are reliable and hard to evade systematically. Then add server-side liveness verification — it is the single defense that changes the economics of attack most dramatically. RASP multiplies the cost of every other defense by making individual checks harder to find, understand, and neutralize. Everything else is valuable defense-in-depth, but those four investments give you the highest return.

For red teamers: Every detection listed here is something you should test for during reconnaissance. Before you launch a patched APK, ask: does this target check signatures? Does it verify DEX integrity? Does it pin certificates? Does it use server-side liveness? Does it use a RASP SDK? The answers determine whether your standard injection pipeline works out of the box or whether you need the evasion techniques from Chapter 15. RASP-protected targets require significantly more recon time — expect 5-10x the effort of an unprotected app. Knowing the defenses changes your operational approach. That is why this chapter exists in a red team book.

The arms race between attack and detection is continuous. Every defense in this chapter has been bypassed at least once in practice. Every bypass has been detected and patched. The goal is not to build an unbreakable wall — it is to make the wall expensive enough that the attacker moves to a softer target.

Practice: Lab 13 (Defend and Attack) has you add defense layers to a target app and then systematically bypass them.

Next: Read the RASP deep dive for the full technical breakdown of each RASP technique. Then Chapter 18 builds on the detection techniques here with a complete defense-in-depth architecture — combining device attestation, APK integrity, certificate pinning, server-side liveness, RASP, and behavioral analysis into a resilient verification system.