Comprehensive Mobile Application Security Guide

A practitioner’s reference for iOS and Android application security — threat models, platform attack surface, reverse engineering, runtime instrumentation, bypass techniques, testing methodology, and defensive controls. Compiled from 16 research sources.


Table of Contents

  1. Fundamentals & Threat Model
  2. OWASP MASVS & MASTG
  3. Android Platform Attack Surface
  4. iOS Platform Attack Surface
  5. Insecure Storage
  6. Network Communication & TLS
  7. SSL / Certificate Pinning Bypass
  8. Reverse Engineering Workflow
  9. Runtime Instrumentation with Frida
  10. Root & Jailbreak Detection Bypass
  11. Deep Links & URL Schemes
  12. WebView Security
  13. Authentication, Biometrics & Session
  14. Cryptography & Key Management
  15. Resilience / Anti-Tamper / RASP
  16. Tooling Reference
  17. Testing Methodology
  18. Notable CVEs & Real-World Incidents
  19. Defensive Checklist

1. Fundamentals & Threat Model

Mobile application security differs from traditional web security in three material ways. First, the attacker has the binary on their device and can take it apart at leisure — the app runs in a fundamentally hostile environment. Second, the OS provides strong sandboxing, code signing, and hardware-backed keystores that raise the bar but can be bypassed by a motivated attacker on a rooted or jailbroken device. Third, the attack surface spans the binary, the device, the local IPC boundary, the network, and the backend APIs — any of which can be the weak link.

Attacker classes:

| Class | Capability | Examples |
| --- | --- | --- |
| Network adversary | Passive or active MITM on Wi-Fi / rogue cell | Coffee-shop sniffer, carrier implant, rogue AP |
| Co-resident app | Arbitrary app on the same device | Malicious SDK, sideloaded trojan |
| Device-local attacker | Physical access, possibly unlocked | Lost phone, border search, forensic extraction |
| Rooted/jailbroken user | Full device control + debugger | Pirate modder, bounty hunter, reverse engineer |
| Server-side attacker | Compromises the backend API the app talks to | Stolen credentials, insecure direct object reference |
| Supply chain | Malicious SDK, compromised build pipeline | SolarWinds-style implant, XcodeGhost |

Impact spectrum: Information disclosure → Credential theft → Account takeover → Business logic bypass → Device takeover → Fleet-wide compromise via push / OTA channels.

Security boundaries to respect:

  • Process sandbox (/data/data/<pkg> on Android, container directory on iOS)
  • Code signing enforcement (APK signature v2/v3, iOS mach-O code signature)
  • Hardware-backed keystore (Android Keystore StrongBox, iOS Secure Enclave)
  • Permission model (runtime permissions Android 6+, entitlements on iOS)
  • TLS & certificate validation on the network boundary
  • Exported-component boundary on Android (android:exported)
  • URL scheme / universal link routing on iOS

2. OWASP MASVS & MASTG

The Mobile Application Security Verification Standard (MASVS) and its companion Mobile Application Security Testing Guide (MASTG) are the industry reference for mobile security requirements and how to verify them. MASVS provides pass/fail requirements organized by category; MASTG gives test procedures for each.

Verification levels

| Level | Scope | Typical target |
| --- | --- | --- |
| L1 | Standard security baseline — no hardcoded credentials, TLS, appropriate permissions, sane local storage | Any production app |
| L2 | Defense-in-depth — certificate pinning, biometric auth correctness, strong crypto, anti-debugging | Banking, healthcare, government, payment |
| R | Resilience against reverse engineering — anti-tamper, obfuscation, root/jailbreak detection, RASP | DRM, payment, apps where client logic is a revenue target |

R is orthogonal — an app can be L1+R or L2+R. R does not fix vulnerabilities; it raises attacker cost.

MASVS categories

| Category | What it covers |
| --- | --- |
| MASVS-STORAGE | Local data storage — SharedPreferences, Keychain, SQLite, caches, logs, backups, clipboard |
| MASVS-CRYPTO | Algorithm selection, key management, RNG, keystore usage |
| MASVS-AUTH | Credential handling, biometrics, session management, server-side authz |
| MASVS-NETWORK | TLS version/cipher, cert validation, pinning |
| MASVS-PLATFORM | IPC, WebView, permissions, deep links, exported components |
| MASVS-CODE | Debug flags, third-party libraries, error handling, updates |
| MASVS-RESILIENCE | Anti-debug, anti-tamper, root/JB detection, obfuscation |
| MASVS-PRIVACY | PII handling, consent, data minimization (added in later revisions) |

MASTG test IDs follow the form MASTG-TEST-NNNN per platform. For example, MASTG-TEST-0001 tests local data storage on Android; MASTG-TEST-0028 tests Android deep links; MASTG-TEST-0048 / MASTG-TEST-0091 test reverse engineering tool detection on Android and iOS respectively. The MASTG refactor in 2023 replaced the older MSTG IDs, and MASVS-PRIVACY was added in the same generation.

Compliance workflow:

  1. Automated SAST/DAST in CI (MobSF, Oversecured, AppKnox) to catch L1 regressions on every build.
  2. Manual assessment pre-release for L2 controls (pinning bypass, auth flow, runtime behavior).
  3. Continuous monitoring (NowSecure, Data Theorem) for SDK update regressions.
  4. Evidence stored per MASVS requirement ID for auditors.

3. Android Platform Attack Surface

Android’s attack surface is broader than that of iOS, largely because Android supports richer IPC primitives and a wider variety of OEM-modified devices.

Exported components

Every app component (Activity, Service, BroadcastReceiver, ContentProvider) is either exported or not. A component is exported if:

  • It has android:exported="true", or
  • It declares an <intent-filter> and android:exported is not explicitly set (implicit export before API 31; explicit required in API 31+).

An exported component can be invoked by any app on the device with an appropriate intent. Exported components without permission checks are the most common platform finding.

Testing exported components:

# enumerate
aapt dump xmltree base.apk AndroidManifest.xml | grep -E 'activity|service|receiver|provider|exported'

# start an exported activity from adb
adb shell am start -n com.example.app/.SettingsActivity --es key value

# send broadcast
adb shell am broadcast -a com.example.app.ACTION_RESET

# query content provider
adb shell content query --uri content://com.example.app.provider/users

Drozer automates this enumeration:

run app.package.attacksurface com.example.app
run app.activity.info -a com.example.app
run app.provider.finduri com.example.app
run app.provider.query content://com.example.app/users

Intents

Intents are Android’s primary IPC mechanism. Security pitfalls:

| Pitfall | Consequence |
| --- | --- |
| Implicit intent carrying sensitive data | Any app matching the filter receives the payload |
| Trusting Intent.getExtras() without validation | Intent injection / activity hijacking |
| Mutable PendingIntent (no FLAG_IMMUTABLE) | Caller can rewrite action/data → privilege escalation |
| startActivityForResult on an attacker-controlled target | Result injection |
| Deserializing untrusted Parcelable extras | Parcel mismatch, type confusion, RCE in past versions |

Rule: for anything security sensitive, use explicit intents targeting your own package, set FLAG_IMMUTABLE on all PendingIntent objects on API 23+, and validate extras as untrusted input.

ContentProvider

ContentProviders expose a URI-based CRUD interface. Common issues:

  • Exported provider with no readPermission/writePermission.
  • Path traversal in openFile() — attacker passes ../../../../data/data/victim/shared_prefs/auth.xml.
  • SQL injection in the selection argument — passing attacker-controlled WHERE clauses directly to SQLite.
  • Unrestricted grantUriPermissions leaking file:// URIs.
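The openFile() traversal above is defeated by canonicalizing the requested path before the access check. A minimal sketch of that check (Python for a runnable illustration; the package path and file names are hypothetical — on Android the same logic uses File.getCanonicalPath()):

```python
import os

BASE = "/data/data/com.example.app/files"  # hypothetical provider root

def resolve(requested: str) -> str:
    """Mimics a safe ContentProvider.openFile(): canonicalize first,
    then verify the result is still inside the intended directory."""
    full = os.path.realpath(os.path.join(BASE, requested))
    if not full.startswith(BASE + os.sep):
        raise PermissionError("path traversal blocked: " + requested)
    return full

print(resolve("avatars/me.png"))
try:
    resolve("../../../../data/data/victim/shared_prefs/auth.xml")
except PermissionError as e:
    print(e)
```

The essential point is that the check runs on the canonical path, not the raw input — prefix checks on the un-normalized string are bypassable with ../ sequences.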

SQLite

Android apps frequently use raw SQLite. Classic SQLi applies — use parameterized queries via ? placeholders, never string concatenation, and beware of rawQuery() with untrusted input. On disk, SQLite files live at /data/data/<pkg>/databases/ and are readable only by the app’s UID, but forensic tools and rooted attackers can read them.
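The placeholder rule can be demonstrated end to end. This sketch uses Python’s sqlite3, which shares the ? placeholder syntax with Android’s SQLiteDatabase; the table and input values are hypothetical:

```python
import sqlite3

# In-memory DB standing in for an app's /data/data/<pkg>/databases/ file.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [("alice", "s3cret"), ("bob", "hunter2")])

hostile = "x' OR '1'='1"

# String concatenation: the classic SQLi that rawQuery() is prone to.
leaked = db.execute(
    "SELECT secret FROM users WHERE name = '" + hostile + "'").fetchall()

# Parameterized: the ? placeholder treats the input as data, not SQL.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (hostile,)).fetchall()

print(len(leaked), len(safe))  # concatenation leaks every row; the placeholder leaks none
```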

File storage

| Location | Readability |
| --- | --- |
| /data/data/<pkg>/ | App UID only (unless MODE_WORLD_READABLE is used — deprecated, still seen) |
| External storage (/sdcard/) | All apps with storage permission; treat as public |
| getExternalFilesDir() (scoped) | App-private on modern Android, but cleared on uninstall |
| MediaStore | Mediated by the system |

Writing sensitive data to external storage is a recurring finding. Even on Android 10+ scoped storage, logs and caches on external storage can leak tokens.

Backups

android:allowBackup="true" (the default) means adb backup can pull the app’s private data without root; apps targeting Android 12+ are excluded from adb backup’s app-data export unless debuggable. Set allowBackup="false" or provide an android:fullBackupContent rule that excludes secrets.

Logcat

android.util.Log writes to logcat, which before Android 4.1 was readable by any app holding READ_LOGS. Modern Android restricts this, but debuggable builds, crash reports, and OEM modifications still leak logged data. Never log tokens, PII, or full requests.

Clipboard

ClipboardManager.getPrimaryClip() is observable by any foreground app (and historically by background apps as well). iOS 14 and Android 12 added clipboard-access notifications to surface such reads to the user.


4. iOS Platform Attack Surface

iOS has a narrower IPC surface than Android, but the attack surface still includes URL schemes, universal links, app extensions, Keychain, Pasteboard, and the file system within the app container.

App sandbox

Every iOS app runs in a container at /var/mobile/Containers/Data/Application/<UUID>/ with Documents/, Library/, and tmp/ subfolders. The sandbox prevents cross-app file access but does not protect against:

  • On-device attacker with physical access + backup extraction
  • Jailbroken device with root access
  • The app itself leaking data to logs, pasteboard, or cloud sync

URL schemes

Custom URL schemes (myapp://...) are the legacy iOS IPC mechanism. Any app can register a scheme; multiple apps can register the same scheme (last-installed wins, historically). Applications must validate UIApplicationDelegate application:openURL:options: callbacks — do not trust the source application, do not pass the URL directly to WKWebView or UIWebView, and do not treat URL parameters as authenticated.

Classic bugs:

  • URL scheme hijacking — malicious app registers the same scheme and intercepts OAuth callbacks.
  • Unauthenticated actions — myapp://transfer?to=attacker&amount=1000 triggered by a web page without confirmation.
  • Passing URL query parameters into SQL or WebView.

Universal links (iOS 9+) solve the hijacking problem by binding HTTPS URLs to an app via the apple-app-site-association (AASA) file hosted on the developer’s domain. The AASA must be fetched over HTTPS from https://example.com/.well-known/apple-app-site-association. Misconfigurations:

  • AASA served with wrong Content-Type or behind redirect — iOS silently falls back to opening Safari.
  • AASA pattern too broad (/*) — hijacks unrelated paths.
  • Shared domain with untrusted user content — attacker hosts a page under example.com that your app treats as trusted.

Keychain

The Keychain is iOS’s secure credential store, backed by the Secure Enclave on modern devices. Items have protection classes that control when they’re readable:

| Protection class | When accessible |
| --- | --- |
| kSecAttrAccessibleWhenUnlocked | Device unlocked |
| kSecAttrAccessibleAfterFirstUnlock | After first unlock post-boot |
| kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly | Only if a passcode is set; never synced |
| kSecAttrAccessibleAlways | Always (deprecated, insecure) |

Common bugs: using ...Always, storing credentials in NSUserDefaults instead of Keychain, forgetting ThisDeviceOnly on tokens (iCloud Keychain syncs them), using the default accessibility when the device is locked regularly.

Pasteboard

The general UIPasteboard is shared across apps on the same device. Copying tokens or PII to the pasteboard leaks them to any app that reads the pasteboard. Use UIPasteboard with a named instance and setItems:options: with UIPasteboardOptionLocalOnly and UIPasteboardOptionExpirationDate for sensitive data.

App Group IPC

Apps from the same team ID can share data via App Groups (shared container, shared NSUserDefaults, shared Keychain access group). Security issues appear when one app in the group is compromised or when the shared container is used to pass untrusted input between processes (e.g., main app ↔ widget ↔ share extension).

App extensions

Share extensions, keyboard extensions, today widgets, and intent extensions run in separate processes with reduced entitlements. Attack surface is the NSExtensionContext boundary — validate input items, don’t blindly load remote content, and don’t store secrets in places extensions can read.

Data protection

The iOS Data Protection API encrypts files with keys derived from the user’s passcode. File protection classes:

| Class | Behavior |
| --- | --- |
| NSFileProtectionComplete | Decrypted only while unlocked; key wiped ~10 s after lock |
| NSFileProtectionCompleteUnlessOpen | Files already open stay usable across lock; new opens require unlock |
| NSFileProtectionCompleteUntilFirstUserAuthentication | Default; decrypted after first unlock post-boot |
| NSFileProtectionNone | Encrypted at rest only with the device key |

Most apps end up on CompleteUntilFirstUserAuthentication because background tasks need file access. Sensitive files should use Complete where feasible.


5. Insecure Storage

This is the single highest-yield category in mobile assessments. Developers underestimate the number of places data ends up on a device: keyboard caches, screenshot snapshots, web caches, analytics logs, crash reports, backups, clipboard.

Places to check on Android

/data/data/<pkg>/shared_prefs/*.xml
/data/data/<pkg>/databases/*.db          # sqlite3 to inspect
/data/data/<pkg>/files/
/data/data/<pkg>/cache/
/data/data/<pkg>/code_cache/
/sdcard/Android/data/<pkg>/
/sdcard/                                 # any app-written file

Tools: adb shell run-as <pkg> on debuggable builds; objection’s file-system and SQLite inspection commands on instrumented apps.

Places to check on iOS

<container>/Documents/
<container>/Library/Preferences/*.plist  # NSUserDefaults
<container>/Library/Caches/
<container>/Library/WebKit/               # WKWebView cache
<container>/tmp/
Keychain (dump with objection or keychain-dumper)
~/Library/Developer/CoreSimulator/Devices/<uuid>/... (simulator)

Common leak patterns

  • Auth tokens in SharedPreferences / NSUserDefaults — these are plaintext plists/XML. Use EncryptedSharedPreferences (Jetpack Security) or Keychain.
  • Application screenshots — iOS snapshots the app screen on backgrounding for the task switcher; sensitive data in the background state is written to disk. Blur or overlay the sensitive view in applicationDidEnterBackground.
  • Keyboard cache — iOS caches words typed in non-secure UITextFields. Set secureTextEntry = YES for passwords and sensitive fields; set autocorrectionType = UITextAutocorrectionTypeNo.
  • Crash reports & analytics — third-party crash reporters (Crashlytics, Sentry) capture stack traces and sometimes memory state; review what you’re shipping.
  • SQLite WAL/journal files — deleted rows persist in *.db-wal and *.db-journal until a VACUUM.
  • WebView cookies and localStorage — WKWebView uses the app’s container, so session cookies persist unless explicitly cleared.
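The WAL persistence point is easy to verify. A runnable sketch (Python’s sqlite3 against a throwaway database file; the token value is hypothetical):

```python
import os, sqlite3, tempfile

# Demonstrates why "deleted" rows linger: in journal_mode=WAL, the INSERT's
# page frames stay in the -wal file after the DELETE, until a checkpoint.
path = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(path)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t (v TEXT)")
con.execute("INSERT INTO t VALUES ('token-AAAA1111')")
con.commit()
con.execute("DELETE FROM t")
con.commit()

# Read the raw -wal bytes, as a forensic tool would.
wal = open(path + "-wal", "rb").read()
print(b"token-AAAA1111" in wal)
```

The mitigation before treating data as gone is to checkpoint and vacuum (e.g. PRAGMA wal_checkpoint(TRUNCATE) followed by VACUUM), or to avoid writing the secret to SQLite at all.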

6. Network Communication & TLS

Baseline: TLS 1.2+ on every connection, no plaintext HTTP, no mixed content, validate certificate chain against system trust store, use system APIs (URLSession, OkHttp) rather than hand-rolling TLS.

Android network security

Android Network Security Config (res/xml/network_security_config.xml, referenced from the manifest) controls trust anchors, cleartext permission, and pinning:

<network-security-config>
  <base-config cleartextTrafficPermitted="false">
    <trust-anchors>
      <certificates src="system"/>
    </trust-anchors>
  </base-config>
  <domain-config>
    <domain includeSubdomains="true">api.example.com</domain>
    <pin-set>
      <pin digest="SHA-256">base64hash==</pin>
      <pin digest="SHA-256">backup==</pin>
    </pin-set>
  </domain-config>
</network-security-config>

Key things to check:

  • cleartextTrafficPermitted="true" anywhere — usually debug leftover that ships to prod.
  • <debug-overrides> including user trust anchors — means Burp CA works in debug builds only, which is intended; verify it’s not in release.
  • Missing or mis-targeted <domain-config> — pinning only on the marketing domain while API calls go elsewhere.
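When auditing a pin-set, it helps to recompute the pins: each value is just base64(SHA-256(SPKI)). A sketch of the computation (Python; the DER bytes below are a placeholder standing in for the output of the openssl pipeline shown in the comment):

```python
import base64, hashlib

# The <pin digest="SHA-256"> value is the base64-encoded SHA-256 of the
# certificate's DER-encoded SubjectPublicKeyInfo. The SPKI bytes normally
# come from openssl:
#   openssl x509 -in cert.pem -pubkey -noout | openssl pkey -pubin -outform der
spki_der = b"placeholder-spki-der-bytes"  # stand-in for the real DER blob

def spki_pin(der: bytes) -> str:
    return base64.b64encode(hashlib.sha256(der).digest()).decode()

print(spki_pin(spki_der))  # 44-char base64 string, as seen in the pin-set
```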

iOS App Transport Security (ATS)

ATS (iOS 9+) enforces TLS 1.2+ and valid certs by default. NSAllowsArbitraryLoads = YES in Info.plist disables ATS entirely — look for this in third-party SDK integrations. NSExceptionDomains allows per-domain exceptions; inspect these closely for relaxed cert validation or cleartext allowances.

Man-in-the-middle testing setup

  1. Install a proxy CA cert on the test device (Burp, mitmproxy, Charles, Proxyman).
  2. Android: on API 24+ user-installed CAs are not trusted by default — you need a debug-enabled build with <debug-overrides>, or install the CA as a system cert on a rooted device (/system/etc/security/cacerts/ with the correct hash name).
  3. iOS: install the proxy profile, then enable the CA under Settings → General → About → Certificate Trust Settings.
  4. Route device traffic through the proxy (Wi-Fi settings or system-wide VPN like mitmproxy’s wireguard mode).

If traffic comes through cleanly, the app trusts arbitrary CAs and has no pinning. If connections fail, pinning is in effect — move to bypass.


7. SSL / Certificate Pinning Bypass

Pinning binds the app to a specific certificate or public key. Implementations vary:

| Implementation | How it works | Bypass approach |
| --- | --- | --- |
| OkHttp CertificatePinner | Compares leaf/chain SPKI hashes to a pin set | Frida hook on CertificatePinner.check$okhttp |
| TrustManager override | Custom X509TrustManager.checkServerTrusted | Hook checkServerTrusted to return normally |
| HostnameVerifier allow-all | Not pinning — just accepting everything | Already bypassed |
| Android Network Security Config pin-set | System-level validation | Patch the XML, or disable via a Frida NetworkSecurityConfig hook |
| iOS URLSession delegate | urlSession:didReceiveChallenge: does the pinning | Frida hook on the delegate |
| TrustKit / AFNetworking pinning | Library-level SPKI pinning | Hook the library’s validation function |
| Native pinning (BoringSSL, libcurl) | Validation below the Java/ObjC layer | Frida Stalker, or Interceptor on SSL_CTX_set_verify |

Frida universal bypass scripts

The community maintains scripts that hook all well-known pinning APIs:

  • frida-multiple-unpinning (akabe1) — Android, covers OkHttp, TrustManager, Conscrypt, WebViewClient, Appcelerator, Cronet.
  • fridantiroot — Android root detection + pinning.
  • ios-ssl-pinning-bypass / objection --startup-command "ios sslpinning disable".

Invocation:

frida -U -f com.example.app -l frida-multiple-unpinning.js   # recent frida-tools auto-resumes the spawned process; older versions needed --no-pause
objection -g com.example.app explore
> ios sslpinning disable
> android sslpinning disable

Why bypass sometimes fails

  • App uses native (C/C++) pinning in a .so — Java/ObjC hooks miss it. Hook SSL_CTX_set_verify, SSL_set_verify, or BoringSSL symbols directly, or use frida-trace to find the verification function.
  • App ships multiple HTTP stacks — pinning on one but not the other; still, some traffic requires both bypassed.
  • Certificate Transparency enforcement — some apps additionally require SCTs.
  • The app kills itself on pinning failure (anti-debug path) — patch out the kill.

Pinning is a resilience control, not a vulnerability. Pinning bypass is a stepping stone to inspect traffic; the downstream finding is whatever the API protects.


8. Reverse Engineering Workflow

Reverse engineering is a core mobile pentest skill. The three uses: (1) disabling controls that block dynamic analysis (pinning, root detection), (2) understanding app logic in black-box testing, (3) assessing MASVS-R resilience.

Android RE pipeline

APK → apktool d → smali + resources
APK → jadx-gui → readable Java decompilation
APK → unzip → classes.dex, lib/<arch>/*.so, resources.arsc
DEX → d2j-dex2jar → JAR → JD-GUI / CFR / Procyon
.so → Ghidra / IDA / radare2 / Binary Ninja

  • jadx — fastest path to readable Java, handles most obfuscation.
  • apktool — disassembles to smali for patching and rebuilding.
  • dex2jar + CFR — alternative decompilation when jadx chokes.
  • Ghidra — free, handles ARM64/ARMv7/x86 native libs and DEX.
  • simplify — smali deobfuscator for string/control-flow obfuscation.
  • JEB — commercial, strong on obfuscated DEX.

Patching flow: apktool d, edit smali, apktool b, re-sign with apksigner sign --ks ..., install.

iOS RE pipeline

iOS binaries (.ipa) are FairPlay-encrypted when downloaded from the App Store. To reverse engineer, you need a decrypted binary:

  • frida-ios-dump — pull decrypted IPA from a jailbroken device.
  • bagbak, flexdecrypt — alternative decryptors.
  • App Store binaries are decrypted at runtime by the kernel; dumping memory after launch yields the plaintext mach-O.

Once decrypted:

  • class-dump / class-dump-z — extract Objective-C class headers from the mach-O.
  • Hopper — commercial disassembler with decompiler, strong ObjC support.
  • Ghidra — free; handles ARM64 mach-O, Swift symbols are messy.
  • IDA Pro — gold standard.
  • radare2 / r2frida — open source, scriptable, integrates with Frida for dynamic work.

Swift adds friction: name mangling, generics, and extensive use of witness tables make static analysis harder than ObjC. Dynamic analysis via Frida is often more productive.

What to look for in static RE

  • Hardcoded secrets, API keys, JWT signing keys, encryption keys — grep strings and base64-decoded strings.
  • Endpoint URLs including staging, debug, and admin panels.
  • Feature flags and debug paths that should not exist in release.
  • Crypto primitives — confirm algorithms and key handling.
  • Root/jailbreak detection routines, debugger checks, pinning logic — targets for bypass.
  • Exported components and entry points.
  • Third-party SDKs and their versions — cross-reference with CVE databases.
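The hardcoded-secret hunt in the first bullet can be partially automated with an entropy filter over strings output. A crude sketch (Python; the sample line, token charset, and threshold are illustrative, and real scans produce many false positives):

```python
import math, re

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def find_candidate_secrets(blob: str, min_len: int = 20, threshold: float = 4.0):
    """Flag long base64-ish tokens with high entropy — a crude first pass
    over `strings` output from an APK or mach-O."""
    hits = []
    for tok in re.findall(r"[A-Za-z0-9+/=_\-]{%d,}" % min_len, blob):
        if shannon_entropy(tok) >= threshold:
            hits.append(tok)
    return hits

sample = "url=https://api.example.com key=9f8aP2xQ7LmZ4vRb1cTdEuYw6hJkN3sG label=settings_screen"
print(find_candidate_secrets(sample))  # the random-looking key is flagged; prose-like tokens are not
```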

9. Runtime Instrumentation with Frida

Frida is a dynamic instrumentation toolkit that injects a JavaScript runtime into a target process and lets you hook, trace, replace, and log function calls in real time. It supports Android (Java, Kotlin, native) and iOS (Objective-C, Swift, native) among many other platforms.

Architecture

Frida works by writing code into process memory:

  1. frida-server (on a rooted/jailbroken device) or frida-gadget (embedded in the app) hosts the runtime.
  2. On attach, Frida hijacks a thread via ptrace, allocates memory, and loads frida-agent.so / FridaGadget.dylib.
  3. The agent opens a channel back to your host running frida or a custom Python script.
  4. JavaScript you write runs inside the target, with access to APIs for hooking and memory manipulation.

Modes of operation

| Mode | Requires | Use case |
| --- | --- | --- |
| Injected | Rooted/jailbroken device running the frida-server daemon | Interactive testing, full device access |
| Embedded (Gadget) | Unmodified device; gadget injected into the APK/IPA | Testing on stock devices |
| Preloaded | Gadget loaded from disk via LD_PRELOAD / DYLD_INSERT_LIBRARIES | Standalone automation |

Key APIs

  • Interceptor — inline hook at function prologue. High flexibility, detectable by checksum-based anti-tamper because it overwrites prologue bytes.
  • Stalker — JIT-based dynamic code tracer, leaves original code untouched. Better for stealth and high-granularity tracing at the cost of performance.
  • Java — enumerate and hook Java classes and methods on Android.
  • ObjC — enumerate and hook Objective-C classes, selectors, and instances on iOS.
  • Module, Memory, NativePointer, NativeFunction — native-level primitives for reading/writing memory and calling functions.

Frida 17 removed bundled runtime bridges — if you use custom scripts that import frida-java-bridge or frida-objc-bridge, install them via frida-pm and bundle with frida-compile. Interactive CLI tools still embed the bridges.

Hook examples

Android — bypass a boolean root check:

Java.perform(function () {
  var RootCheck = Java.use('com.example.app.security.RootChecker');
  RootCheck.isDeviceRooted.implementation = function () {
    console.log('[+] isDeviceRooted called, returning false');
    return false;
  };
});

iOS — hook a jailbreak detection class method:

var JBDetect = ObjC.classes.JailbreakDetector;
Interceptor.attach(JBDetect['- isJailbroken'].implementation, {
  onLeave: function (retval) {
    console.log('[+] isJailbroken returning NO');
    retval.replace(0x0);
  }
});

Native function trace with frida-trace:

frida-trace -U -i "open" -i "read" -i "stat" -n "Example"

List modules (Frida 17 API):

for (const m of Process.enumerateModules()) {
  console.log(m.name, m.base, m.size);
}

Ecosystem tools built on Frida include objection (runtime mobile security assessment framework), Fridump (memory dumper), r2frida (radare2 + Frida bridge), Grapefruit (runtime application instruments for iOS), and jnitrace (JNI method tracer).


10. Root & Jailbreak Detection Bypass

Apps use root/jailbreak detection to reduce the attack surface on compromised devices. Detection is never perfect — MASVS treats it as a resilience control, not a security boundary.

Android root detection signals

| Signal | What it checks |
| --- | --- |
| su binary | /system/bin/su, /system/xbin/su, /sbin/su, which su |
| Superuser APKs | Superuser.apk, com.topjohnwu.magisk, eu.chainfire.supersu |
| System properties | ro.debuggable, ro.secure, service.adb.root |
| test-keys | ro.build.tags contains test-keys |
| Busybox / root tools | /system/xbin/busybox, /system/bin/busybox |
| Mount state | /system mounted rw |
| Native UID check | getuid() == 0 |
| SafetyNet / Play Integrity | Server-side attestation (harder to bypass) |

Common libraries: RootBeer, SafetyNet/Play Integrity API.

iOS jailbreak detection signals

| Signal | What it checks |
| --- | --- |
| File existence | /Applications/Cydia.app, /Library/MobileSubstrate, /bin/bash, /etc/apt, /var/lib/apt |
| Suspicious URL schemes | cydia://, sileo://, zbra:// |
| fork() succeeds | Sandboxed apps cannot fork; jailbroken ones often can |
| Write outside sandbox | /private/jailbreak.txt — write, then check |
| dyld image inspection | /usr/lib/substrate, FridaGadget.dylib, cynject, libcycript in the loaded image list |
| ptrace self-attach (PT_DENY_ATTACH) | Prevents debugger attachment |
| sysctl kinfo_proc debugger check | P_TRACED flag set |
| Private API calls | ptrace, sysctl, task_for_pid |

Bypass approach

  1. Hook the high-level isRooted / isJailbroken method to return false.
  2. If the check is inlined or in native code, hook the underlying primitives — open, access, stat, fork, dlopen, NSFileManager fileExistsAtPath:.
  3. For ptrace(PT_DENY_ATTACH), hook ptrace to return 0 before it’s called.
  4. For integrity-based checks (the app computes a hash of itself and compares), either patch out the check or hook the comparison.
  5. Universal scripts: objection (android root disable, ios jailbreak disable), fridantiroot, community unpinning/antiroot scripts.

SafetyNet/Play Integrity Attestation is much harder — it runs a signed check against Google servers and ties to hardware attestation. Bypasses require spoofing a known-good keybox or using a modified boot chain (Magisk + DenyList + attestation modules), and Google rotates signatures regularly. For high-assurance apps, treat server-verified attestation as the primary defense and client-side detection as defense-in-depth.


11. Deep Links & URL Schemes

Deep links let external entities invoke specific app screens. Handled incorrectly, they become an unauthenticated remote trigger for sensitive actions.

Android deep links

Three flavors:

  1. Custom scheme (myapp://host/path) — any app can register the scheme; hijackable.
  2. Implicit HTTP intent filter (<data android:scheme="https" android:host="example.com"/>) — any app can register; disambiguation dialog.
  3. App Link (verified, android:autoVerify="true") — requires /.well-known/assetlinks.json on the domain; bound to the app by signature.

Test:

adb shell am start -W -a android.intent.action.VIEW \
  -d "myapp://action/transfer?to=attacker&amount=1000" com.example.app

Issues to look for:

  • Deep link triggers authenticated action without re-auth.
  • Deep link path is used as input to WebView.loadUrl() → XSS / file:// read / universal XSS.
  • Deep link passes a url parameter into an Intent with ACTION_VIEW → open redirect / file:// bypass.
  • assetlinks.json is missing or mis-hosted → App Link silently falls back to chooser.
  • Deep link triggers export/save without a confirmation UI.

iOS URL schemes and universal links

URL schemes register via CFBundleURLTypes in Info.plist. Multiple apps can claim the same scheme; ordering is implementation-defined. Universal links are the preferred modern mechanism: HTTPS URLs that open the app when installed and Safari when not.

Test:

xcrun simctl openurl booted "myapp://path?x=1"

Bugs:

  • URL handler passes query to WKWebView → XSS.
  • URL triggers privileged action without confirmation.
  • AASA file mis-specified → universal link falls back to Safari.
  • application:continueUserActivity:restorationHandler: trusts webpageURL without validation.

Defensive rules:

  • Every deep link that performs an action must re-prompt for authentication or confirmation.
  • Parse and validate every parameter as untrusted.
  • Never pass deep link parameters directly to a WebView.
  • Use App Links / Universal Links with assetlinks.json / AASA verification, not custom schemes, for anything security-relevant.
  • Log deep link invocations for anomaly detection.
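The allowlist approach can be sketched as a small router (Python for illustration; the scheme layout, verbs, and hosts are hypothetical):

```python
from urllib.parse import urlsplit, parse_qs

ALLOWED_HOSTS = {"action"}                          # hypothetical layout: myapp://action/<verb>
ALLOWED_VERBS = {"open_settings", "show_article"}   # nothing that moves money or exports data

def route(link: str):
    """Treat every deep link field as untrusted input: allowlist the host
    and path, parse parameters explicitly, never forward the raw URL."""
    parts = urlsplit(link)
    if parts.scheme != "myapp" or parts.netloc not in ALLOWED_HOSTS:
        return None
    verb = parts.path.lstrip("/")
    if verb not in ALLOWED_VERBS:
        return None                                 # transfer, export, etc. are never link-triggered
    return verb, parse_qs(parts.query)

print(route("myapp://action/show_article?id=42"))
print(route("myapp://action/transfer?to=attacker&amount=1000"))
```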

12. WebView Security

WebViews embed a browser engine in the app and are a frequent source of vulnerabilities because the web security model interacts awkwardly with the native side.

Android WebView

  • setJavaScriptEnabled(true) is necessary for most content but enables XSS impact.
  • addJavascriptInterface(obj, "name") exposes a Java object to JS. Pre-API 17 this was pure RCE (JS could reach arbitrary methods via reflection); API 17+ requires @JavascriptInterface annotation but the exposed methods still form RCE surface if they do anything sensitive. Never expose Runtime.exec, file access, or authentication state.
  • setAllowFileAccess(true), setAllowFileAccessFromFileURLs(true), setAllowUniversalAccessFromFileURLs(true) — dangerous combinations allow a file:// page to read any file or make cross-origin requests.
  • setAllowContentAccess(true) — WebView can load content:// URIs.
  • shouldOverrideUrlLoading — must validate URLs before loading; failing to do so means any intent that lands in the WebView can load arbitrary origins.
  • Mixed content — setMixedContentMode(MIXED_CONTENT_NEVER_ALLOW) is the safe default on API 21+.

Classic exploit: exported activity → deep link → WebView → JS bridge → RCE.

iOS WKWebView

UIWebView is deprecated and insecure — flag any remaining use. WKWebView runs out-of-process and is much safer:

  • WKUserContentController.add(scriptMessageHandler:name:) is the native bridge; any JS in the loaded page can post to it. Validate every message as untrusted.
  • WKWebView.configuration.preferences.javaScriptCanOpenWindowsAutomatically — control per your needs.
  • loadFileURL:allowingReadAccessToURL: — the second argument is the sandbox boundary; setting it to the container root gives JS access to everything.
  • navigationDelegate — implement decidePolicyForNavigationAction to restrict origins.

JavaScript bridge threat model

Treat the JS side as fully untrusted even if you load your own HTML. Reasons:

  • XSS in loaded content (yours or third-party).
  • MITM during load (if not TLS-pinned).
  • A compromised CDN serving the HTML.
  • A cross-origin iframe manipulating the main frame.

Never expose functions that (a) execute code or shell commands, (b) read or write arbitrary files, (c) read secrets from Keychain/Keystore, (d) return tokens, (e) perform privileged actions without re-auth.
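The native side of a bridge should therefore parse every message strictly against an allowlist. A minimal sketch (Python standing in for the native WKScriptMessageHandler / @JavascriptInterface handler; the action names are hypothetical):

```python
import json

ALLOWED = {"copy_text", "close_view"}  # hypothetical bridge actions; nothing privileged

def handle_bridge_message(raw: bytes):
    """Parse strictly, allowlist the action, reject everything else —
    the JS side is fully untrusted even when the HTML is your own."""
    try:
        msg = json.loads(raw)
    except ValueError:
        return None
    if not isinstance(msg, dict):
        return None
    action = msg.get("action")
    arg = msg.get("arg", "")
    if action not in ALLOWED or not isinstance(arg, str):
        return None
    return action, arg

print(handle_bridge_message(b'{"action": "copy_text", "arg": "hello"}'))
print(handle_bridge_message(b'{"action": "read_keychain"}'))
```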


13. Authentication, Biometrics & Session

Authentication

Authentication happens on the server — the app is just a client. The recurring mobile mistake is client-side authorization: the app hides UI based on a local role flag rather than server-enforced access control. A Frida hook flips the flag and exposes the hidden functionality. Never enforce privilege on the client.

Session tokens should live in Keychain / EncryptedSharedPreferences, scoped to the device (not synced), and rotate on sensitive events. Log out should invalidate on the server, not just clear local state.

Biometrics

Biometric authentication on mobile is a UX gate on top of a cryptographic operation, not the cryptographic operation itself. Correct use:

  • Android BiometricPrompt with a CryptoObject — a Keystore-backed key that requires biometric authentication to unlock. On successful biometric, the key becomes usable and the app signs / decrypts a server challenge. Without the CryptoObject, a Frida hook on the biometric callback bypasses the check trivially.
  • iOS LocalAuthentication + Keychain access control — store the secret in Keychain with SecAccessControl flags kSecAccessControlBiometryCurrentSet and kSecAttrAccessibleWhenUnlockedThisDeviceOnly. Reading the item forces a biometric prompt the attacker cannot hook away because the prompt is enforced by securityd, not the app.

Both platforms: tie biometric to a specific crypto key so that bypassing the UI doesn’t bypass the operation.
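The pattern both bullets describe is challenge-response signing with a key that the UI alone cannot unlock. A desktop-JVM sketch of just the crypto half; on Android the key pair would be generated in the Keystore with setUserAuthenticationRequired(true) and handed to BiometricPrompt as a CryptoObject, so the in-memory key here is purely illustrative:

```java
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

public class ChallengeSigner {
    // The server verifies a signature over its own challenge, so hooking the
    // biometric success callback yields nothing: the attacker still cannot
    // produce the signature without unlocking the key.
    public static byte[] signChallenge(PrivateKey key, byte[] challenge) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initSign(key);
        s.update(challenge);
        return s.sign();
    }

    // Server-side half: reject any response that doesn't verify.
    public static boolean verify(PublicKey pub, byte[] challenge, byte[] sig) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initVerify(pub);
        s.update(challenge);
        return s.verify(sig);
    }
}
```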

Session management

  • Short-lived access tokens with refresh tokens, rotated on use.
  • Server-side session invalidation on password change, explicit logout, new device.
  • Detect token theft via device binding (include a device key in token exchange).
  • Don’t ship long-lived session tokens in Keychain with kSecAttrAccessibleAlways.
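Device binding from the list above amounts to proving possession of a device-held key during token exchange. A minimal HMAC sketch (class and method names are hypothetical; in production the key would be a non-extractable Keystore/Secure Enclave key, not a visible byte array):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class DeviceBinding {
    // Prove possession of the device key by MACing a server-issued nonce.
    // A stolen refresh token replayed from another device fails this check
    // because the thief does not hold the device key.
    public static String proofOfPossession(byte[] deviceKey, String serverNonce) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(deviceKey, "HmacSHA256"));
        byte[] tag = mac.doFinal(serverNonce.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : tag) sb.append(String.format("%02x", b)); // hex-encode the tag
        return sb.toString(); // sent alongside the token-refresh request
    }
}
```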

14. Cryptography & Key Management

The universal crypto finding in mobile is key management, not algorithm choice. An app using AES-256-GCM with a key embedded as a byte[] constant in the binary provides zero protection. The algorithm is fine; the key is the problem.

Use the platform keystore

  • Android Keystore — keys generated inside the Keystore never leave hardware (StrongBox on supporting devices). Use KeyGenParameterSpec with setUserAuthenticationRequired, setUnlockedDeviceRequired, and setIsStrongBoxBacked.
  • iOS Secure Enclave — keys generated with kSecAttrTokenID = kSecAttrTokenIDSecureEnclave are non-extractable. Pair with SecAccessControl for biometric gating.

Algorithm do’s and don’ts

Do | Don’t
---|------
AES-256-GCM, ChaCha20-Poly1305 | ECB mode, CBC without MAC
SHA-256/384/512, BLAKE2/3 | MD5, SHA-1 for security
HKDF for key derivation | Password as raw key
Argon2id / PBKDF2 with high iterations for password KDF | Single SHA-256 of password
Ed25519 / ECDSA P-256 | DSA, RSA-1024
SecureRandom / arc4random_buf / /dev/urandom | java.util.Random, rand()
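As a concrete instance of the recommended AES-256-GCM usage, a small helper that generates a fresh random 12-byte IV per message and prepends it to the ciphertext. The class is illustrative; on Android the SecretKey would come from the Keystore rather than a KeyGenerator:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class GcmExample {
    // Output layout: [12-byte IV][ciphertext][16-byte GCM tag].
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv); // never reuse an IV under the same GCM key
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, blob, 0, 12));
        byte[] ct = new byte[blob.length - 12];
        System.arraycopy(blob, 12, ct, 0, ct.length);
        return c.doFinal(ct); // throws AEADBadTagException on tampering
    }
}
```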

Hardcoded keys and string obfuscation

A common finding: the app stores an AES key in resources, in a native string, or “obfuscated” via XOR or base64. These are all trivially recovered by jadx/Hopper + strings + Frida. If the key has to be in the binary, it’s effectively public. Either derive from server-provided material, gate behind user authentication, or accept the key as public.
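The XOR case is worth seeing once, because "deobfuscation" is the same loop the app itself runs: anyone who pulls both constants out of the binary gets the key back. A hypothetical sketch:

```java
public class XorDemo {
    // XOR with a constant mask is its own inverse. The app must ship both the
    // masked key and the mask, so the "protection" is one loop away from gone.
    static byte[] xor(byte[] data, byte[] mask) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ mask[i % mask.length]);
        }
        return out;
    }
}
```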


15. Resilience / Anti-Tamper / RASP

Resilience controls (MASVS-R) raise attacker cost without fixing bugs. They belong in apps where client-side logic is a revenue target or where reverse engineering has direct financial impact (DRM, payment, games).

Typical controls

Control | What it does | Frida-bypassable?
--------|--------------|------------------
Root/JB detection | Refuse to run on compromised devices | Yes
Debugger detection (ptrace, isDebuggerConnected) | Detect attached debuggers | Yes, but multiple checks slow bypass
Frida/tool detection | Look for frida-server, ports 27042/27043, gadget libs in image list, re.frida.server process | Yes — hook the detector
Integrity checks | Compute hash of own binary / classes.dex and compare | Hook the comparison
String & control flow obfuscation | Harder to read statically | Dynamic analysis still works
Packers / encrypted DEX | Binary is unpacked at runtime | Dump memory post-unpack
Native-code checks | Harder to find via Java hooks | Hook at syscall boundary
Emulator detection | Check for QEMU artifacts, sensor diversity, goldfish kernel | Hook the checks
Hook detection | Check prologue bytes for inline hook patterns | Use Stalker instead of Interceptor
Server-side attestation (Play Integrity, DeviceCheck) | Remote verification | Hardest — bypass requires hardware spoofing

Strong resilience characteristics

  • Multiple independent checks — bypassing one isn’t enough.
  • Native-code checks — harder to hook from Java/ObjC.
  • Checks triggered at random points during execution, not just at startup.
  • Soft failure — don’t immediately crash; degrade the UI, delay, or corrupt data so the attacker can’t trivially identify the check location.
  • Server-side component — the server refuses to serve valuable data unless attestation succeeds.
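Two of the cheaper checks from the table can be sketched in plain Java: root artifact paths and the default frida-server port. The paths and port are the commonly checked values; on a desktop JVM every check simply returns false, and in a real app these would live in native code and run at random points during execution:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;

public class Checks {
    // Common Android root artifacts; each check is independent so that
    // bypassing one hook is not enough.
    static final String[] ROOT_PATHS = {
        "/system/xbin/su", "/system/bin/su", "/sbin/su",
        "/system/app/Superuser.apk", "/data/adb/magisk"
    };

    public static boolean hasRootArtifacts() {
        for (String p : ROOT_PATHS) {
            if (new File(p).exists()) return true;
        }
        return false;
    }

    // frida-server's default port is 27042, which appears as 69A2 (hex)
    // in the local_address column of /proc/net/tcp.
    public static boolean fridaPortListening() {
        try {
            for (String line : Files.readAllLines(Path.of("/proc/net/tcp"))) {
                if (line.toUpperCase().contains(":69A2")) return true;
            }
        } catch (Exception ignored) {
            // no procfs (non-Linux): treat as not detected
        }
        return false;
    }
}
```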

RASP vendors

Commercial RASP SDKs (Guardsquare DexGuard/iXGuard, Talsec, Promon Shield, Zimperium zShield, Appdome) bundle obfuscation, anti-tamper, anti-hook, root/JB detection, and attestation. They’re not silver bullets — determined attackers bypass all of them given time — but they raise the bar from minutes to days, which is often sufficient for the threat model.


16. Tooling Reference

Static analysis

Tool | Platform | Notes
-----|----------|------
MobSF | Android/iOS | Free SAST+DAST, REST API for CI, strongest for L1 baseline
jadx / jadx-gui | Android | DEX → readable Java
apktool | Android | Smali disassembly and rebuild
dex2jar / CFR / Procyon | Android | Alternative decompilation paths
Ghidra | Both | Free SRE suite from NSA; ARM/x86/DEX
Hopper | iOS (Mach-O) | Commercial, strong ObjC decompiler
IDA Pro | Both | Commercial gold standard
Binary Ninja | Both | Commercial, strong scripting
class-dump | iOS | ObjC header extraction
strings / rabin2 | Both | String extraction, symbols
Oversecured | Both | Commercial deep static analysis
AppKnox | Both | Commercial SAST/DAST/API testing
Semgrep (mobile rules) | Source | Pattern-based SAST on source

Dynamic analysis & instrumentation

Tool | Platform | Use
-----|----------|----
Frida | Both | Universal DBI
objection | Both | Frida-based exploration (explore, ios keychain dump, android shell)
Drozer | Android | IPC / component fuzzing
Fridump | Both | Memory dumping
r2frida | Both | radare2 + Frida
Grapefruit | iOS | Web UI over Frida
jnitrace | Android | Trace JNI calls
frida-ios-dump | iOS | Decrypt App Store IPAs
Needle | iOS | Legacy iOS testing framework
Corellium | Both | Virtualized iOS/Android for RE
Genymotion / Android Studio emulator | Android | Root-available test devices
Burp Suite / mitmproxy / Charles / Proxyman | Both | HTTP(S) MITM

Jailbreak / root tooling

  • Android: Magisk (soft-root with DenyList), Genymotion (rooted by default), rooted Pixel + AOSP builds.
  • iOS: checkra1n (semi-tethered, checkm8 devices), unc0ver, palera1n (iOS 15–16), Corellium virtual devices.

Test devices

A mobile assessment lab typically has: one rooted Android phone, one non-rooted Android phone (for release build testing), one jailbroken iPhone at the minimum iOS version the app supports, one non-jailbroken iPhone for production-like testing, and Corellium or simulators for rapid iteration across OS versions.


17. Testing Methodology

A mobile pentest maps cleanly to MASTG categories. A suggested week-long assessment:

Day 1 — setup & recon

  • Obtain APK/IPA (Play Store, TestFlight, MDM dump, or frida-ios-dump from jailbroken device).
  • Run MobSF for baseline static findings and manifest dump.
  • jadx-gui or Hopper for manual code walk — focus on entry points, URLs, keys, crypto, auth.
  • Identify: minSdk/target SDK, signing cert, permissions, exported components, URL schemes, deep links, third-party SDKs with versions.

Day 2 — storage & platform

  • Install on test device, exercise all flows, inventory files written.
  • Inspect shared_prefs, databases, Library, Documents, Keychain, Keystore.
  • Dump Keychain with objection ios keychain dump.
  • Test adb backup / iTunes backup exposure.
  • Screenshot on backgrounding — check for sensitive view capture.
  • Clipboard and keyboard cache for sensitive fields.

Day 3 — network

  • Proxy all traffic (Burp + system CA trust).
  • If pinning blocks, bypass with Frida/objection and re-verify.
  • Map every API endpoint, auth mechanism, session handling.
  • Test for auth bypass, IDOR, server-side input validation (this is usually the bulk of impact).
  • Check TLS version, cipher suites, certificate validation, cleartext traffic config.

Day 4 — IPC, deep links & WebView

  • Enumerate exported Android components with Drozer; invoke each with crafted intents.
  • Fuzz ContentProviders for SQLi and path traversal.
  • Test every URL scheme and universal link for unauthenticated actions and WebView injection.
  • Test WebView for JS bridge exposure, file access, universal XSS.
  • iOS: enumerate app extensions and test their boundaries.

Day 5 — resilience & report

  • Test root/JB detection → bypass; test anti-debug → bypass; measure time to bypass.
  • Test integrity checks, Frida detection.
  • If server attestation is in use, test what the server does on failure.
  • Write up findings mapped to MASVS requirements with reproduction steps, severity, and remediation.

Reporting

Each finding should include: MASVS ID, severity (CVSS or platform scheme), affected component, reproduction steps, evidence (screenshots, logs, Frida scripts, packet captures), impact narrative, remediation. Track remediation against MASVS IDs so regressions in re-test are easy to spot.


18. Notable CVEs & Real-World Incidents

Mobile-relevant issues from the sources and broader ecosystem:

Year | Incident | Relevance
-----|----------|----------
2014 | Heartbleed (CVE-2014-0160) | OpenSSL in countless mobile apps and SDKs — years later, unpatched hosts still existed. Highlights supply chain risk.
2015 | addJavascriptInterface pre-API 17 | Any JS in loaded page → arbitrary Java method via reflection → RCE. Hundreds of affected apps.
2015 | XcodeGhost | Supply chain: attacker-modified Xcode injected malicious code into legitimate apps. Hundreds of App Store apps compromised.
2015 | Stagefright | Android media parsing RCE via MMS — led to media-stack sandbox hardening.
2017 | Janus (CVE-2017-13156) | Android APK signing scheme v1 allowed DEX injection into signed APKs without breaking the signature. Fixed by v2 signing.
2017 | BlueBorne | Bluetooth stack RCE across Android, iOS, Linux.
2019 | iOS 12 FaceTime bug | Group FaceTime let the caller hear the callee before they answered. Platform IPC flaw.
2019 | Kids’ smartwatches (Rapid7) | IoT mobile ecosystem: GPS watches accepted config via SMS, bypassing contact filters.
2020 | StrandHogg / StrandHogg 2.0 | Android task hijacking via taskAffinity / activity reparenting — overlaid legitimate apps.
2021 | iOS iMessage zero-click (NSO Pegasus) | Integer overflow in CoreGraphics PDF parsing, full device compromise with no user interaction.
2022 | Play Store SharkBot / FluBot campaigns | Banker trojans abusing Accessibility Services to steal credentials.
2023 | OWASP MASVS/MASTG refactor | Test IDs renumbered; MASVS-PRIVACY added. Compliance mappings must be updated.
2024 | Kia web portal API | Car unlock/track via API flaw keyed off license plate — illustrates backend dominance of mobile risk.
2024 | SMS 2FA telecom breach | Unencrypted SMS exposed — reinforces moving off SMS for MFA.
2024 | Location data broker breach (Candy Crush, Tinder) | Terabytes of location data from in-app SDKs exposed — third-party SDK risk.
2024 | ASP.NET machine key abuse | Public machine key reused by apps → code injection. Not mobile-specific but hits mobile backends.

Recurring lessons: the expensive bugs are in backends, SDKs, and WebView bridges; client-side controls fail against Frida; SMS is a broken auth factor; supply chain (Xcode, SDKs, analytics) dominates incident frequency.


19. Defensive Checklist

A condensed secure-coding and deployment checklist mapped to MASVS categories.

MASVS-STORAGE

  • No auth tokens, keys, PII in SharedPreferences / NSUserDefaults.
  • Use EncryptedSharedPreferences / Keychain with correct protection class.
  • android:allowBackup="false" or explicit exclusion of sensitive files.
  • No sensitive data on external storage.
  • Logs contain no tokens, PII, or full request bodies.
  • Crash reporter scrubs sensitive fields.
  • Sensitive views hidden / blurred on backgrounding.
  • Pasteboard usage minimized; expiration set on sensitive copies.
  • Keyboard autocorrect/caching disabled on sensitive inputs.

MASVS-CRYPTO

  • Keys generated in and never leave Android Keystore / Secure Enclave.
  • No hardcoded keys, no string-obfuscated keys used as real secrets.
  • Approved algorithms: AES-GCM, SHA-256+, Ed25519 / ECDSA P-256, Argon2id/PBKDF2.
  • SecureRandom / arc4random_buf for randomness.
  • Keys bound to user authentication where appropriate.

MASVS-AUTH

  • No client-side authorization — every privileged action server-checked.
  • Short-lived access tokens + refresh tokens; rotation on use.
  • Server-side logout invalidation.
  • Biometric auth tied to a Keystore/Enclave CryptoObject.
  • Re-authentication for sensitive actions.
  • No SMS-only 2FA for high-value accounts.

MASVS-NETWORK

  • TLS 1.2+ enforced, no cleartext traffic.
  • cleartextTrafficPermitted="false" / ATS enforced; no NSAllowsArbitraryLoads.
  • Certificate pinning for sensitive domains, with backup pins.
  • Pin rotation plan documented.
  • No pinning bypass possible via untouched WebView or third-party SDK.

MASVS-PLATFORM

  • Every component has explicit android:exported on API 31+.
  • Permission checks on every exported component.
  • PendingIntent uses FLAG_IMMUTABLE.
  • ContentProvider paths validated; parameterized SQL; no openFile traversal.
  • Deep links require re-auth for actions; parameters validated.
  • App Links / Universal Links configured with autoVerify / AASA.
  • WebView: javaScriptEnabled only if needed; no dangerous file/universal access; JS bridge surface minimized.
  • UIWebView removed; WKWebView with restrictive navigation delegate.
  • iOS URL scheme handlers validate source and parameters.

MASVS-CODE

  • Debug flags off in release (android:debuggable="false").
  • Third-party SDKs inventoried and version-pinned; CVE monitored.
  • No leftover debug endpoints in release.
  • Error messages don’t leak stack traces or internal state.
  • Compiler hardening flags enabled (-fstack-protector-strong, PIE, FORTIFY_SOURCE).

MASVS-RESILIENCE (if in scope)

  • Root/JB detection with multiple independent checks.
  • Debugger detection.
  • Frida / tool detection.
  • Integrity check of own binary.
  • String and control-flow obfuscation.
  • Server-side attestation (Play Integrity / DeviceCheck / App Attest).
  • Soft failure mode so check locations are not trivially identifiable.

MASVS-PRIVACY

  • Data minimization — collect only what’s needed.
  • Consent for non-essential telemetry.
  • PII scrubbed from logs and crash reports.
  • Third-party analytics SDKs reviewed for data exfiltration.
  • User-facing data deletion functional and complete.

Appendix A: Quick command reference

# Android
adb shell pm list packages -f | grep example
adb shell dumpsys package com.example.app
adb shell run-as com.example.app ls -la
adb pull /data/data/com.example.app/shared_prefs/
aapt dump badging base.apk
apktool d base.apk
jadx-gui base.apk
apksigner verify --print-certs base.apk

# iOS (jailbroken)
frida-ps -Uai
frida-ios-dump/dump.py "App Name"
class-dump -H App -o headers/
ldid -e App   # entitlements

# Frida / objection
frida -U -f com.example.app -l script.js --no-pause
frida-trace -U -i "recv*" -n "Example"
objection -g com.example.app explore
> android hooking list classes
> android hooking search methods isRoot
> ios keychain dump
> android sslpinning disable

# Proxy / traffic
mitmproxy --mode wireguard
mitmdump -s dump_to_file.py

# Drozer
drozer console connect
run app.package.attacksurface com.example.app
run scanner.provider.injection -a com.example.app

Appendix B: Suggested Frida unpinning libraries

  • frida-multiple-unpinning (akabe1) — Android universal.
  • fridantiroot — Android root + pinning.
  • ios-ssl-pinning-bypass scripts — iOS URLSession, TrustKit.
  • objection built-in android sslpinning disable / ios sslpinning disable.
  • Codeshare (https://codeshare.frida.re) for ad-hoc published scripts.

Always diff-check community scripts before running them against production apps — a “bypass” script that also exfiltrates data is a known supply chain risk in the Frida ecosystem.


End of guide.