Comprehensive Mobile Application Security Guide#
A practitioner’s reference for iOS and Android application security — threat models, platform attack surface, reverse engineering, runtime instrumentation, bypass techniques, testing methodology, and defensive controls. Compiled from 16 research sources.
Table of Contents#
- Fundamentals & Threat Model
- OWASP MASVS & MASTG
- Android Platform Attack Surface
- iOS Platform Attack Surface
- Insecure Storage
- Network Communication & TLS
- SSL / Certificate Pinning Bypass
- Reverse Engineering Workflow
- Runtime Instrumentation with Frida
- Root & Jailbreak Detection Bypass
- Deep Links & URL Schemes
- WebView Security
- Authentication, Biometrics & Session
- Cryptography & Key Management
- Resilience / Anti-Tamper / RASP
- Tooling Reference
- Testing Methodology
- Notable CVEs & Real-World Incidents
- Defensive Checklist
1. Fundamentals & Threat Model#
Mobile application security differs from traditional web security in three material ways. First, the attacker has the binary on their device and can take it apart at leisure — the app runs in a fundamentally hostile environment. Second, the OS provides strong sandboxing, code signing, and hardware-backed keystores that raise the bar but can be bypassed by a motivated attacker on a rooted or jailbroken device. Third, the attack surface spans the binary, the device, the local IPC boundary, the network, and the backend APIs — any of which can be the weak link.
Attacker classes:
| Class | Capability | Examples |
|---|---|---|
| Network adversary | Passive or active MITM on Wi-Fi / rogue cell | Coffee-shop sniffer, carrier implant, rogue AP |
| Co-resident app | Arbitrary app on the same device | Malicious SDK, sideloaded trojan |
| Device-local attacker | Physical access, possibly unlocked | Lost phone, border search, forensic extraction |
| Rooted/jailbroken user | Full device control + debugger | Pirate modder, bounty hunter, reverse engineer |
| Server-side attacker | Compromises backend API the app talks to | Stolen credentials, insecure direct object reference |
| Supply chain | Malicious SDK, compromised build pipeline | SolarWinds-style, Xcode Ghost |
Impact spectrum: Information disclosure → Credential theft → Account takeover → Business logic bypass → Device takeover → Fleet-wide compromise via push / OTA channels.
Security boundaries to respect:
- Process sandbox (`/data/data/<pkg>` on Android, container directory on iOS)
- Code signing enforcement (APK signature v2/v3, iOS Mach-O code signature)
- Hardware-backed keystore (Android Keystore StrongBox, iOS Secure Enclave)
- Permission model (runtime permissions Android 6+, entitlements on iOS)
- TLS & certificate validation on the network boundary
- Exported-component boundary on Android (`android:exported`)
- URL scheme / universal link routing on iOS
2. OWASP MASVS & MASTG#
The Mobile Application Security Verification Standard (MASVS) and its companion Mobile Application Security Testing Guide (MASTG) are the industry reference for mobile security requirements and how to verify them. MASVS provides pass/fail requirements organized by category; MASTG gives test procedures for each.
Verification levels#
| Level | Scope | Typical target |
|---|---|---|
| L1 | Standard security baseline — no hardcoded credentials, TLS, appropriate permissions, sane local storage | Any production app |
| L2 | Defense-in-depth — certificate pinning, biometric auth correctness, strong crypto, anti-debugging | Banking, healthcare, government, payment |
| R | Resilience against reverse engineering — anti-tamper, obfuscation, root/jailbreak detection, RASP | DRM, payment, apps where client logic is a revenue target |
R is orthogonal — an app can be L1+R or L2+R. R does not fix vulnerabilities; it raises attacker cost.
MASVS categories#
| Category | What it covers |
|---|---|
| MASVS-STORAGE | Local data storage — SharedPreferences, Keychain, SQLite, caches, logs, backups, clipboard |
| MASVS-CRYPTO | Algorithm selection, key management, RNG, keystore usage |
| MASVS-AUTH | Credential handling, biometrics, session management, server-side authz |
| MASVS-NETWORK | TLS version/cipher, cert validation, pinning |
| MASVS-PLATFORM | IPC, WebView, permissions, deep links, exported components |
| MASVS-CODE | Debug flags, third-party libraries, error handling, updates |
| MASVS-RESILIENCE | Anti-debug, anti-tamper, root/JB detection, obfuscation |
| MASVS-PRIVACY | PII handling, consent, data minimization (added in later revisions) |
MASTG test IDs follow the form MASTG-TEST-NNNN per platform. For example, MASTG-TEST-0001 tests local data storage on Android; MASTG-TEST-0028 tests Android deep links; MASTG-TEST-0048 / MASTG-TEST-0091 test reverse engineering tool detection on Android and iOS respectively. The MASTG refactor in 2023 replaced the older MSTG IDs, and MASVS-PRIVACY was added in the same generation.
Compliance workflow:
- Automated SAST/DAST in CI (MobSF, Oversecured, AppKnox) to catch L1 regressions on every build.
- Manual assessment pre-release for L2 controls (pinning bypass, auth flow, runtime behavior).
- Continuous monitoring (NowSecure, Data Theorem) for SDK update regressions.
- Evidence stored per MASVS requirement ID for auditors.
3. Android Platform Attack Surface#
Android’s attack surface is broader than iOS’s, largely because Android supports richer IPC primitives and a wider variety of OEM-modified devices.
Exported components#
Every app component (Activity, Service, BroadcastReceiver, ContentProvider) is either exported or not. A component is exported if:
- It has `android:exported="true"`, or
- It declares an `<intent-filter>` and `android:exported` is not explicitly set (implicit export before API 31; an explicit value is required from API 31).
An exported component can be invoked by any app on the device with an appropriate intent. Exported components without permission checks are the most common platform finding.
Testing exported components:
```shell
# enumerate
aapt dump xmltree base.apk AndroidManifest.xml | grep -E 'activity|service|receiver|provider|exported'
# start an exported activity from adb
adb shell am start -n com.example.app/.SettingsActivity --es key value
# send broadcast
adb shell am broadcast -a com.example.app.ACTION_RESET
# query content provider
adb shell content query --uri content://com.example.app.provider/users
```
Drozer automates this enumeration:
```
run app.package.attacksurface com.example.app
run app.activity.info -a com.example.app
run app.provider.finduri com.example.app
run app.provider.query content://com.example.app/users
```
Intents#
Intents are Android’s primary IPC mechanism. Security pitfalls:
| Pitfall | Consequence |
|---|---|
| Implicit intent carrying sensitive data | Any app matching the filter receives the payload |
| Trusting `Intent.getExtras()` without validation | Intent injection / activity hijacking |
| Mutable `PendingIntent` (no `FLAG_IMMUTABLE`) | Caller can rewrite action/data → privilege escalation |
| `startActivityForResult` on attacker-controlled target | Result injection |
| Deserializing untrusted Parcelable extras | Parcel mismatch, type confusion, RCE in past versions |
Rule: for anything security sensitive, use explicit intents targeting your own package, set FLAG_IMMUTABLE on every PendingIntent (available since API 23; mutability must be declared explicitly when targeting API 31+), and validate extras as untrusted input.
ContentProvider#
ContentProviders expose a URI-based CRUD interface. Common issues:
- Exported provider with no `readPermission`/`writePermission`.
- Path traversal in `openFile()` — attacker passes `../../../../data/data/victim/shared_prefs/auth.xml`.
- SQL injection in the `selection` argument — passing attacker-controlled WHERE clauses directly to SQLite.
- Unrestricted `grantUriPermissions` leaking `file://` URIs.
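The `openFile()` traversal comes down to canonicalizing the requested path before checking it against the provider's root. A platform-neutral Python sketch of that check (function and directory names are hypothetical, not from any real provider):

```python
import os

def resolve_in_root(root: str, requested: str) -> str:
    """Canonicalize `requested` against `root` and refuse anything that
    escapes it -- the check a safe openFile() implementation needs."""
    root = os.path.realpath(root)
    candidate = os.path.realpath(os.path.join(root, requested))
    # Comparing the *canonical* paths defeats ../ sequences and symlinks;
    # a naive string prefix check on the raw input does not.
    if os.path.commonpath([root, candidate]) != root:
        raise PermissionError(f"path escapes provider root: {requested}")
    return candidate
```

The equivalent on Android is `File.getCanonicalPath()` on the resolved file, compared against the canonical path of the permitted directory.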
SQLite#
Android apps frequently use raw SQLite. Classic SQLi applies — use parameterized queries via ? placeholders, never string concatenation. Beware of rawQuery() with untrusted input. On disk, SQLite files live at /data/data/<pkg>/databases/ and are world-readable only to the app’s UID, but forensic tools and rooted attackers can read them.
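The difference between concatenation and placeholders is easy to demonstrate outside Android; a Python `sqlite3` sketch (table and values are fabricated for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

attacker = "nobody' OR '1'='1"

# Vulnerable: string concatenation -- the quote breaks out of the literal
# and the OR clause matches every row.
leaked = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker + "'").fetchall()

# Safe: the ? placeholder binds the whole string as data, never as SQL.
bound = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker,)).fetchall()

print(leaked)  # [('alice',), ('bob',)] -- every row leaks
print(bound)   # [] -- the literal name matches nothing
```

On Android the same rule applies to `rawQuery(sql, selectionArgs)`: pass user input only through `selectionArgs`, never into the SQL string.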
File storage#
| Location | Readability |
|---|---|
| `/data/data/<pkg>/` | Only app UID (unless `MODE_WORLD_READABLE` used — deprecated, still seen) |
| External storage (`/sdcard/`) | All apps with storage permission; treat as public |
| `getExternalFilesDir()` (scoped) | App-private on modern Android but cleared on uninstall |
| MediaStore | Mediated by system |
Writing sensitive data to external storage is a recurring finding. Even on Android 10+ scoped storage, logs and caches on external storage can leak tokens.
Backups#
`android:allowBackup="true"` (the default before Android 12) means `adb backup` can pull the app’s private data without root. Set `allowBackup="false"` or provide an `android:fullBackupContent` rule that excludes secrets.
Logcat#
android.util.Log writes to logcat, which pre-Android 4.1 was readable by any app with READ_LOGS. Modern Android restricts this but debuggable builds, crash reports, and OEM modifications still leak logged data. Never log tokens, PII, or full requests.
Clipboard#
ClipboardManager.getPrimaryClip() is observable by any foreground app (and historically by background apps). iOS 14 and Android 12 introduced clipboard access notifications to expose this.
4. iOS Platform Attack Surface#
iOS has a narrower IPC surface than Android, but the attack surface still includes URL schemes, universal links, app extensions, Keychain, Pasteboard, and the file system within the app container.
App sandbox#
Every iOS app runs in a container at /var/mobile/Containers/Data/Application/<UUID>/ with Documents/, Library/, and tmp/ subfolders. The sandbox prevents cross-app file access but does not protect against:
- On-device attacker with physical access + backup extraction
- Jailbroken device with root access
- The app itself leaking data to logs, pasteboard, or cloud sync
URL schemes#
Custom URL schemes (myapp://...) are the legacy iOS IPC mechanism. Any app can register a scheme; multiple apps can register the same scheme (last-installed wins, historically). Applications must validate UIApplicationDelegate application:openURL:options: callbacks — do not trust the source application, do not pass the URL directly to WKWebView or UIWebView, and do not treat URL parameters as authenticated.
Classic bugs:
- URL scheme hijacking — malicious app registers the same scheme and intercepts OAuth callbacks.
- Unauthenticated actions — `myapp://transfer?to=attacker&amount=1000` triggered by a web page without confirmation.
- Passing URL query parameters into SQL or a WebView.
Universal links#
Universal links (iOS 9+) solve the hijacking problem by binding HTTPS URLs to an app via the apple-app-site-association (AASA) file hosted on the developer’s domain. The AASA must be fetched over HTTPS from https://example.com/.well-known/apple-app-site-association. Misconfigurations:
- AASA served with the wrong `Content-Type` or behind a redirect — iOS silently falls back to opening Safari.
- AASA pattern too broad (`/*`) — hijacks unrelated paths.
- Shared domain with untrusted user content — attacker hosts a page under `example.com` that your app treats as trusted.
Keychain#
The Keychain is iOS’s secure credential store, backed by the Secure Enclave on modern devices. Items have protection classes that control when they’re readable:
| Protection class | When accessible |
|---|---|
| `kSecAttrAccessibleWhenUnlocked` | Device unlocked |
| `kSecAttrAccessibleAfterFirstUnlock` | After first unlock post-boot |
| `kSecAttrAccessibleWhenPasscodeSetThisDeviceOnly` | Only if passcode set; never synced |
| `kSecAttrAccessibleAlways` | Always (deprecated, insecure) |
Common bugs: using `...Always`, storing credentials in `NSUserDefaults` instead of the Keychain, forgetting `ThisDeviceOnly` on tokens (iCloud Keychain syncs them), and accepting the default accessibility class without considering when the item actually needs to be readable.
Pasteboard#
The general UIPasteboard is shared across apps on the same device. Copying tokens or PII to the pasteboard leaks them to any app that reads the pasteboard. Use UIPasteboard with a named instance and setItems:options: with UIPasteboardOptionLocalOnly and UIPasteboardOptionExpirationDate for sensitive data.
App Group IPC#
Apps from the same team ID can share data via App Groups (shared container, shared NSUserDefaults, shared Keychain access group). Security issues appear when one app in the group is compromised or when the shared container is used to pass untrusted input between processes (e.g., main app ↔ widget ↔ share extension).
App extensions#
Share extensions, keyboard extensions, today widgets, and intent extensions run in separate processes with reduced entitlements. Attack surface is the NSExtensionContext boundary — validate input items, don’t blindly load remote content, and don’t store secrets in places extensions can read.
Data protection#
The iOS Data Protection API encrypts files with keys derived from the user’s passcode. File protection classes:
| Class | Behavior |
|---|---|
| `NSFileProtectionComplete` | Decrypted only while unlocked; key discarded ~10 s after lock |
| `NSFileProtectionCompleteUnlessOpen` | Already-open files stay accessible after lock; new files can be created while locked |
| `NSFileProtectionCompleteUntilFirstUserAuthentication` | Default; decrypted after first unlock post-boot |
| `NSFileProtectionNone` | Encrypted at rest only with the device key |
Most apps end up on CompleteUntilFirstUserAuthentication because background tasks need file access. Sensitive files should use Complete where feasible.
5. Insecure Storage#
This is the single highest-yield category in mobile assessments. Developers underestimate the number of places data ends up on a device: keyboard caches, screenshot snapshots, web caches, analytics logs, crash reports, backups, clipboard.
Places to check on Android#
```
/data/data/<pkg>/shared_prefs/*.xml
/data/data/<pkg>/databases/*.db      # sqlite3 to inspect
/data/data/<pkg>/files/
/data/data/<pkg>/cache/
/data/data/<pkg>/code_cache/
/sdcard/Android/data/<pkg>/
/sdcard/                             # any app-written file
```
Tools: `adb shell run-as <pkg>` on debuggable builds; objection’s shell and `sqlite connect` commands.
Places to check on iOS#
```
<container>/Documents/
<container>/Library/Preferences/*.plist   # NSUserDefaults
<container>/Library/Caches/
<container>/Library/WebKit/               # WKWebView cache
<container>/tmp/
Keychain (dump with objection or keychain-dumper)
~/Library/Developer/CoreSimulator/Devices/<uuid>/...   (simulator)
```
Common leak patterns#
- Auth tokens in `SharedPreferences` / `NSUserDefaults` — these are plaintext XML/plists. Use EncryptedSharedPreferences (Jetpack Security) or the Keychain.
- Application screenshots — iOS snapshots the app screen on backgrounding for the task switcher; sensitive data visible at that moment is written to disk. Blur or overlay sensitive views in `applicationDidEnterBackground`.
- Keyboard cache — iOS caches words typed in non-secure `UITextField`s. Set `secureTextEntry = YES` for passwords and sensitive fields, and set `autocorrectionType = UITextAutocorrectionTypeNo`.
- Crash reports & analytics — third-party crash reporters (Crashlytics, Sentry) capture stack traces and sometimes memory state; review what you’re shipping.
- SQLite WAL/journal files — deleted rows persist in `*.db-wal` and `*.db-journal` until a VACUUM.
- WebView cookies and localStorage — `WKWebView` uses the app’s container, so session cookies persist unless explicitly cleared.
6. Network Communication & TLS#
Baseline: TLS 1.2+ on every connection, no plaintext HTTP, no mixed content, validate certificate chain against system trust store, use system APIs (URLSession, OkHttp) rather than hand-rolling TLS.
Android network security#
Android Network Security Config (res/xml/network_security_config.xml, referenced from the manifest) controls trust anchors, cleartext permission, and pinning:
```xml
<network-security-config>
    <base-config cleartextTrafficPermitted="false">
        <trust-anchors>
            <certificates src="system"/>
        </trust-anchors>
    </base-config>
    <domain-config>
        <domain includeSubdomains="true">api.example.com</domain>
        <pin-set>
            <pin digest="SHA-256">base64hash==</pin>
            <pin digest="SHA-256">backup==</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```
Key things to check:
- `cleartextTrafficPermitted="true"` anywhere — usually a debug leftover that ships to prod.
- `<debug-overrides>` including `user` trust anchors — means the Burp CA works in debug builds only, which is intended; verify it is not present in the release config.
- Missing or mis-targeted `<domain-config>` — pinning only on the marketing domain while API calls go elsewhere.
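For reference, the base64 `pin` values in a pin-set are SHA-256 digests of the certificate's DER-encoded SubjectPublicKeyInfo. The digest step is trivial to reproduce; a Python sketch (the SPKI bytes below are a stand-in, not a real key):

```python
import base64, hashlib

def spki_pin(spki_der: bytes) -> str:
    """SHA-256 over the DER SubjectPublicKeyInfo, base64-encoded -- the
    value format used by <pin digest="SHA-256"> and OkHttp's sha256/ pins."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Stand-in bytes for illustration; in practice extract the SPKI with e.g.
#   openssl x509 -in cert.pem -pubkey -noout | openssl pkey -pubin -outform der
fake_spki = b"\x30\x82\x01\x22" + b"\x00" * 290
print(spki_pin(fake_spki))
```

Pin the key, not the leaf certificate, so routine cert renewals that keep the key pair don't brick the app — and always ship a backup pin.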
iOS App Transport Security (ATS)#
ATS (iOS 9+) enforces TLS 1.2+ and valid certs by default. NSAllowsArbitraryLoads = YES in Info.plist disables ATS entirely — look for this in third-party SDK integrations. NSExceptionDomains allows per-domain exceptions; inspect these closely for relaxed cert validation or cleartext allowances.
Man-in-the-middle testing setup#
- Install a proxy CA cert on the test device (Burp, mitmproxy, Charles, Proxyman).
- Android: on API 24+ user-installed CAs are not trusted by default — you need a build that opts in via `<debug-overrides>`, or install the CA as a system cert on a rooted device (`/system/etc/security/cacerts/` with the correct subject-hash filename).
- iOS: install the proxy profile, then enable full trust for the CA under Settings → General → About → Certificate Trust Settings.
- Route device traffic through the proxy (Wi-Fi proxy settings or a system-wide VPN such as mitmproxy’s `wireguard` mode).
If traffic comes through cleanly, the app trusts arbitrary CAs and has no pinning. If connections fail, pinning is in effect — move to bypass.
7. SSL / Certificate Pinning Bypass#
Pinning binds the app to a specific certificate or public key. Implementations vary:
| Implementation | How it works | Bypass approach |
|---|---|---|
| OkHttp `CertificatePinner` | Compares leaf/chain SPKI hashes to a pin set | Frida hook `CertificatePinner.check$okhttp` |
| TrustManager override | Custom `X509TrustManager.checkServerTrusted` | Hook `checkServerTrusted` to return normally |
| `HostnameVerifier` allow-all | Not pinning — just accepting everything | Already bypassed |
| Network Security Config pin-set | System-level validation | Patch the XML, or disable via a Frida `NetworkSecurityConfig` hook |
| iOS `URLSession` delegate | `urlSession:didReceiveChallenge:` does pinning | Frida hook the delegate |
| TrustKit / AFNetworking pinning | Library-level SPKI pinning | Hook the library’s validation function |
| Native pinning (BoringSSL, libcurl) | Verification at a lower level, in native code | Frida Stalker or Interceptor on `SSL_CTX_set_verify` |
Frida universal bypass scripts#
The community maintains scripts that hook all well-known pinning APIs:
- `frida-multiple-unpinning` (akabe1) — Android; covers OkHttp, TrustManager, Conscrypt, WebViewClient, Appcelerator, Cronet.
- `fridantiroot` — Android root detection + pinning.
- `ios-ssl-pinning-bypass` / `objection --startup-command "ios sslpinning disable"`.
Invocation:
```
frida -U -f com.example.app -l frida-multiple-unpinning.js --no-pause
objection -g com.example.app explore
> ios sslpinning disable
> android sslpinning disable
```
Why bypass sometimes fails#
- App uses native (C/C++) pinning in a `.so` — Java/ObjC hooks miss it. Hook `SSL_CTX_set_verify`, `SSL_set_verify`, or BoringSSL symbols directly, or use `frida-trace` to find the verification function.
- App ships multiple HTTP stacks — pinning on one but not the other; to see all traffic, both must be bypassed.
- Certificate Transparency enforcement — some apps additionally require SCTs.
- The app kills itself on pinning failure (anti-debug path) — patch out the kill.
Pinning is a resilience control, not a vulnerability. Pinning bypass is a stepping stone to inspect traffic; the downstream finding is whatever the API protects.
8. Reverse Engineering Workflow#
Reverse engineering is a core mobile pentest skill. The three uses: (1) disabling controls that block dynamic analysis (pinning, root detection), (2) understanding app logic in black-box testing, (3) assessing MASVS-R resilience.
Android RE pipeline#
```
APK → apktool d → smali + resources
APK → jadx-gui → readable Java decompilation
APK → unzip → classes.dex, lib/<arch>/*.so, resources.arsc
DEX → d2j-dex2jar → JAR → JD-GUI / CFR / Procyon
.so → Ghidra / IDA / radare2 / Binary Ninja
```
- jadx — fastest path to readable Java, handles most obfuscation.
- apktool — disassembles to smali for patching and rebuilding.
- dex2jar + CFR — alternative decompilation when jadx chokes.
- Ghidra — free, handles ARM64/ARMv7/x86 native libs and DEX.
- simplify — smali deobfuscator for string/control-flow obfuscation.
- JEB — commercial, strong on obfuscated DEX.
Patching flow: `apktool d`, edit smali, `apktool b`, re-sign with `apksigner sign --ks ...`, install.
iOS RE pipeline#
iOS binaries (.ipa) are FairPlay-encrypted when downloaded from the App Store. To reverse engineer, you need a decrypted binary:
- frida-ios-dump — pull decrypted IPA from a jailbroken device.
- bagbak, flexdecrypt — alternative decryptors.
- App Store binaries are decrypted at runtime by the kernel; dumping memory after launch yields the plaintext mach-O.
Once decrypted:
- class-dump / class-dump-z — extract Objective-C class headers from the mach-O.
- Hopper — commercial disassembler with decompiler, strong ObjC support.
- Ghidra — free; handles ARM64 mach-O, Swift symbols are messy.
- IDA Pro — gold standard.
- radare2 / r2frida — open source, scriptable, integrates with Frida for dynamic work.
Swift adds friction: name mangling, generics, and extensive use of witness tables make static analysis harder than ObjC. Dynamic analysis via Frida is often more productive.
What to look for in static RE#
- Hardcoded secrets, API keys, JWT signing keys, encryption keys — grep strings and base64-decoded strings.
- Endpoint URLs including staging, debug, and admin panels.
- Feature flags and debug paths that should not exist in release.
- Crypto primitives — confirm algorithms and key handling.
- Root/jailbreak detection routines, debugger checks, pinning logic — targets for bypass.
- Exported components and entry points.
- Third-party SDKs and their versions — cross-reference with CVE databases.
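The "grep strings and base64-decoded strings" step can be scripted. A hedged Python sketch that flags 44-character base64 runs — the typical encoding of a 32-byte key — in a binary blob; the embedded "binary" here is fabricated for illustration:

```python
import base64, re

# Printable runs shaped like base64 of 32 bytes (43 chars plus '=' padding).
B64_KEY = re.compile(rb"[A-Za-z0-9+/]{43}=")

def find_candidate_keys(blob: bytes):
    """Return decodable 44-char base64 constants that yield 32 bytes --
    AES-256 / Ed25519-sized material worth a closer look."""
    out = []
    for m in B64_KEY.finditer(blob):
        try:
            decoded = base64.b64decode(m.group(0), validate=True)
        except Exception:
            continue
        if len(decoded) == 32:
            out.append(m.group(0).decode())
    return out

# Fabricated "binary": junk surrounding a hardcoded-looking key constant.
key = base64.b64encode(b"K" * 32)
blob = b"\x7fELF junk...." + key + b"....more junk"
print(find_candidate_keys(blob))
```

In practice you would run this over `classes.dex`, each `lib/*.so`, and the decrypted Mach-O, then triage hits by entropy and surrounding symbols.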
9. Runtime Instrumentation with Frida#
Frida is a dynamic instrumentation toolkit that injects a JavaScript runtime into a target process and lets you hook, trace, replace, and log function calls in real time. It supports Android (Java, Kotlin, native) and iOS (Objective-C, Swift, native) among many other platforms.
Architecture#
Frida works by writing code into process memory:
- `frida-server` (on a rooted/jailbroken device) or `frida-gadget` (embedded in the app) hosts the runtime.
- On attach, Frida hijacks a thread via `ptrace`, allocates memory, and loads `frida-agent.so` / `FridaGadget.dylib`.
- The agent opens a channel back to your host running the `frida` CLI or a custom Python script.
- JavaScript you write runs inside the target, with access to APIs for hooking and memory manipulation.
Modes of operation#
| Mode | Requires | Use case |
|---|---|---|
| Injected | Rooted/jailbroken device, frida-server daemon | Interactive testing, full device |
| Embedded (Gadget) | Unmodified device; gadget injected into APK/IPA | Testing on stock devices |
| Preloaded | Gadget loads from disk via LD_PRELOAD / DYLD_INSERT_LIBRARIES | Standalone automation |
Key APIs#
- `Interceptor` — inline hook at the function prologue. Highly flexible, but detectable by checksum-based anti-tamper because it overwrites prologue bytes.
- `Stalker` — JIT-based dynamic code tracer that leaves the original code untouched. Better for stealth and fine-grained tracing, at a performance cost.
- `Java` — enumerate and hook Java classes and methods on Android.
- `ObjC` — enumerate and hook Objective-C classes, selectors, and instances on iOS.
- `Module`, `Memory`, `NativePointer`, `NativeFunction` — native-level primitives for reading/writing memory and calling functions.
Frida 17 removed bundled runtime bridges — if you use custom scripts that import frida-java-bridge or frida-objc-bridge, install them via frida-pm and bundle with frida-compile. Interactive CLI tools still embed the bridges.
Hook examples#
Android — bypass a boolean root check:
```javascript
Java.perform(function () {
  var RootCheck = Java.use('com.example.app.security.RootChecker');
  RootCheck.isDeviceRooted.implementation = function () {
    console.log('[+] isDeviceRooted called, returning false');
    return false;
  };
});
```
iOS — hook a jailbreak detection instance method:
```javascript
var JBDetect = ObjC.classes.JailbreakDetector;
Interceptor.attach(JBDetect['- isJailbroken'].implementation, {
  onLeave: function (retval) {
    console.log('[+] isJailbroken returning NO');
    retval.replace(0x0);
  }
});
```
Native function trace with frida-trace:
```shell
frida-trace -U -i "open" -i "read" -i "stat" -n "Example"
```
List modules (Frida 17 API):
```javascript
for (const m of Process.enumerateModules()) {
  console.log(m.name, m.base, m.size);
}
```
Ecosystem tools built on Frida include objection (runtime mobile security assessment framework), Fridump (memory dumper), r2frida (radare2 + Frida bridge), Grapefruit (iOS RAI toolkit), and jnitrace (JNI method tracer).
10. Root & Jailbreak Detection Bypass#
Apps use root/jailbreak detection to reduce the attack surface on compromised devices. Detection is never perfect — MASVS treats it as a resilience control, not a security boundary.
Android root detection signals#
| Signal | What it checks |
|---|---|
| `su` binary | `/system/bin/su`, `/system/xbin/su`, `/sbin/su`, `which su` |
| Superuser APKs | `Superuser.apk`, `com.topjohnwu.magisk`, `eu.chainfire.supersu` |
| System properties | `ro.debuggable`, `ro.secure`, `service.adb.root` |
| `test-keys` | `ro.build.tags` contains `test-keys` |
| Busybox / root tools | `/system/xbin/busybox`, `/system/bin/busybox` |
| Mount state | `/system` mounted rw |
| Native `getuid() == 0` | Process unexpectedly running as root |
| SafetyNet / Play Integrity | Server-side attestation (harder to bypass) |
Common libraries: RootBeer, SafetyNet/Play Integrity API.
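The file-existence signals in the table reduce to a handful of path probes. A Python sketch of that RootBeer-style layer (paths taken from the table; on a non-Android machine none exist, so the list is empty) — which is exactly why a single Frida hook on `stat`/`access`, or on the method itself, defeats it:

```python
import os

SU_PATHS = [
    "/system/bin/su", "/system/xbin/su", "/sbin/su",
    "/system/xbin/busybox", "/system/bin/busybox",
]
ROOT_APKS = [
    "/system/app/Superuser.apk",
    "/data/app/com.topjohnwu.magisk",
    "/data/app/eu.chainfire.supersu",
]

def root_indicators() -> list:
    """Return every root artifact present on this device.
    Pure file probes -- the layer a stat/access hook trivially bypasses."""
    return [p for p in SU_PATHS + ROOT_APKS if os.path.exists(p)]

print(root_indicators())
```

Real libraries add property reads and native re-implementations of the same probes, which moves the bypass down a layer but does not change its nature.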
iOS jailbreak detection signals#
| Signal | What it checks |
|---|---|
| File existence | /Applications/Cydia.app, /Library/MobileSubstrate, /bin/bash, /etc/apt, /var/lib/apt |
| Suspicious URL schemes | cydia://, sileo://, zbra:// |
fork() succeeds | Sandboxed apps cannot fork; jailbroken ones often can |
| Write outside sandbox | /private/jailbreak.txt — write then check |
dyld inspection | /usr/lib/substrate, FridaGadget.dylib, cynject, libcycript in the image list |
ptrace self-attach (PT_DENY_ATTACH) | Prevents debugger attachment |
sysctl kinfo for debugger | P_TRACED flag |
| Private APIs | Calling ptrace, sysctl, task_for_pid |
Bypass approach#
- Hook the high-level `isRooted` / `isJailbroken` method to return `false`.
- If the check is inlined or in native code, hook the underlying primitives — `open`, `access`, `stat`, `fork`, `dlopen`, `NSFileManager fileExistsAtPath:`.
- For `ptrace(PT_DENY_ATTACH)`, hook `ptrace` to return 0 before it’s called.
- For integrity-based checks (the app computes a hash of itself and compares), either patch out the check or hook the comparison.
- Universal scripts: objection (`android root disable`, `ios jailbreak disable`), fridantiroot, and community unpinning/anti-root scripts.
SafetyNet/Play Integrity Attestation is much harder — it runs a signed check against Google servers and ties to hardware attestation. Bypasses require spoofing a known-good keybox or using a modified boot chain (Magisk + DenyList + attestation modules), and Google rotates signatures regularly. For high-assurance apps, treat server-verified attestation as the primary defense and client-side detection as defense-in-depth.
11. Deep Links & URL Schemes#
Deep links let external entities invoke specific app screens. Handled incorrectly, they become an unauthenticated remote trigger for sensitive actions.
Android deep links#
Three flavors:
- Custom scheme (`myapp://host/path`) — any app can register the scheme; hijackable.
- Implicit HTTP intent filter (`<data android:scheme="https" android:host="example.com"/>`) — any app can register; triggers a disambiguation dialog.
- App Link (verified, `android:autoVerify="true"`) — requires `/.well-known/assetlinks.json` on the domain; bound to the app by signing certificate.
Test:
```shell
adb shell am start -W -a android.intent.action.VIEW \
  -d "myapp://action/transfer?to=attacker&amount=1000" com.example.app
```
Issues to look for:
- Deep link triggers authenticated action without re-auth.
- Deep link path is used as input to `WebView.loadUrl()` → XSS / `file://` read / universal XSS.
- Deep link passes a `url` parameter into an Intent with `ACTION_VIEW` → open redirect / `file://` bypass.
- `assetlinks.json` is missing or mis-hosted → App Link silently falls back to the chooser.
- Deep link triggers export/save without a confirmation UI.
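Most deep-link bugs share one fix: parse the link and check scheme, host, and parameters against an allowlist before acting on anything. A platform-neutral Python sketch (hostnames and parameter names are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

ALLOWED_SCHEMES = {"https"}
ALLOWED_HOSTS = {"example.com", "www.example.com"}

def safe_target(link: str):
    """Return the validated in-app route path, or None to reject.
    Everything in the link is attacker-controlled input."""
    u = urlparse(link)
    if u.scheme not in ALLOWED_SCHEMES or u.hostname not in ALLOWED_HOSTS:
        return None
    params = parse_qs(u.query)
    # Reject attempts to smuggle a redirect target through the link.
    if "url" in params or "redirect" in params:
        return None
    return u.path or "/"

print(safe_target("https://example.com/account"))                      # /account
print(safe_target("myapp://transfer?to=attacker"))                     # None
print(safe_target("https://evil.com/account"))                         # None
print(safe_target("https://example.com/open?url=file:///etc/passwd"))  # None
```

The same shape applies in `onCreate()`/`onNewIntent()` on Android and `application:openURL:options:` on iOS: validate first, route second, and still re-authenticate before any sensitive action.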
iOS URL schemes & universal links#
URL schemes register via CFBundleURLTypes in Info.plist. Multiple apps can claim the same scheme; ordering is implementation-defined. Universal links are the preferred modern mechanism: HTTPS URLs that open the app when installed and Safari when not.
Test:
```shell
xcrun simctl openurl booted "myapp://path?x=1"
```
Bugs:
- URL handler passes query to `WKWebView` → XSS.
- URL triggers a privileged action without confirmation.
- AASA file mis-specified → universal link falls back to Safari.
- `application:continueUserActivity:restorationHandler:` trusts `webpageURL` without validation.
Deep link defensive rules#
- Every deep link that performs an action must re-prompt for authentication or confirmation.
- Parse and validate every parameter as untrusted.
- Never pass deep link parameters directly to a WebView.
- Use App Links / Universal Links with AASA/assetlinks.json verification, not custom schemes, for anything security-relevant.
- Log deep link invocations for anomaly detection.
12. WebView Security#
WebViews embed a browser engine in the app and are a frequent source of vulnerabilities because the web security model interacts awkwardly with the native side.
Android WebView#
- `setJavaScriptEnabled(true)` is necessary for most content but enables XSS impact.
- `addJavascriptInterface(obj, "name")` exposes a Java object to JS. Pre-API 17 this was pure RCE (JS could reach arbitrary methods via reflection); API 17+ requires the `@JavascriptInterface` annotation, but the exposed methods still form RCE surface if they do anything sensitive. Never expose `Runtime.exec`, file access, or authentication state.
- `setAllowFileAccess(true)`, `setAllowFileAccessFromFileURLs(true)`, `setAllowUniversalAccessFromFileURLs(true)` — dangerous combinations allow a `file://` page to read any file or make cross-origin requests.
- `setAllowContentAccess(true)` — the WebView can load `content://` URIs.
- `shouldOverrideUrlLoading` — must validate URLs before loading; failing to do so means any intent that lands in the WebView can load arbitrary origins.
- Mixed content — `setMixedContentMode(MIXED_CONTENT_NEVER_ALLOW)` is the safe default on API 21+.
Classic exploit: exported activity → deep link → WebView → JS bridge → RCE.
iOS WKWebView#
UIWebView is deprecated and insecure — flag any remaining use. WKWebView runs out-of-process and is much safer:
- `WKUserContentController.add(scriptMessageHandler:name:)` is the native bridge; any JS in the loaded page can post to it. Validate every message as untrusted.
- `WKWebView.configuration.preferences.javaScriptCanOpenWindowsAutomatically` — control per your needs.
- `loadFileURL:allowingReadAccessToURL:` — the second argument is the sandbox boundary; setting it to the container root gives JS access to everything.
- `navigationDelegate` — implement `decidePolicyForNavigationAction` to restrict origins.
JavaScript bridge threat model#
Treat the JS side as fully untrusted even if you load your own HTML. Reasons:
- XSS in loaded content (yours or third-party).
- MITM during load (if not TLS-pinned).
- A compromised CDN serving the HTML.
- A cross-origin iframe manipulating the main frame.
Never expose functions that (a) execute code or shell commands, (b) read or write arbitrary files, (c) read secrets from Keychain/Keystore, (d) return tokens, (e) perform privileged actions without re-auth.
13. Authentication, Biometrics & Session#
Authentication#
Authentication happens on the server — the app is just a client. The recurring mobile mistake is client-side authorization: the app hides UI based on a local role flag rather than server-enforced access control. A Frida hook flips the flag and exposes the hidden functionality. Never enforce privilege on the client.
Session tokens should live in Keychain / EncryptedSharedPreferences, scoped to the device (not synced), and rotate on sensitive events. Log out should invalidate on the server, not just clear local state.
Biometrics#
Biometric authentication on mobile is a UX gate on top of a cryptographic operation, not the cryptographic operation itself. Correct use:
- Android `BiometricPrompt` with a `CryptoObject` — a Keystore-backed key that requires biometric authentication to unlock. On successful biometric, the key becomes usable and the app signs / decrypts a server challenge. Without the `CryptoObject`, a Frida hook on the biometric callback bypasses the check trivially.
- iOS LocalAuthentication + Keychain access control — store the secret in Keychain with `SecAccessControl` flags `kSecAccessControlBiometryCurrentSet` and `kSecAttrAccessibleWhenUnlockedThisDeviceOnly`. Reading the item forces a biometric prompt the attacker cannot hook away because the prompt is enforced by `securityd`, not the app.
Both platforms: tie biometric to a specific crypto key so that bypassing the UI doesn’t bypass the operation.
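The difference between the two designs can be simulated in a few lines of plain Python (class names are illustrative; in a real app the key lives in the Keystore / Secure Enclave and the OS, not the app, decides when it unlocks):

```python
import hmac, hashlib

class BooleanGate:
    """Anti-pattern: biometric success just flips an app-side flag.
    A Frida hook that calls on_success() — or flips the field — is a full bypass."""
    def __init__(self):
        self.authenticated = False
    def on_success(self):
        self.authenticated = True
    def approve(self, challenge: bytes) -> bytes:
        if not self.authenticated:
            raise PermissionError("not authenticated")
        return b"APPROVED:" + challenge   # no secret material involved

class KeyGate:
    """Pattern: success unlocks a key, and the SERVER verifies a MAC over a
    fresh challenge. Hooking the UI callback alone yields nothing the server
    will accept, because the attacker still lacks the key."""
    def __init__(self, key: bytes):
        self._key = key          # stands in for a hardware-backed key
        self._unlocked = False
    def on_success(self):
        self._unlocked = True    # in reality enforced by the OS, not the app
    def sign_challenge(self, challenge: bytes) -> bytes:
        if not self._unlocked:
            raise PermissionError("key locked")
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

def server_verifies(key: bytes, challenge: bytes, mac: bytes) -> bool:
    """Server side: accept only a correct MAC over its own fresh challenge."""
    return hmac.compare_digest(mac, hmac.new(key, challenge, hashlib.sha256).digest())
```

`BooleanGate.approve` returns a value any hooked client can forge; `KeyGate.sign_challenge` returns a value only the key can produce.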
Session management#
- Short-lived access tokens with refresh tokens, rotated on use.
- Server-side session invalidation on password change, explicit logout, new device.
- Detect token theft via device binding (include a device key in token exchange).
- Don’t ship long-lived session tokens in Keychain with `kSecAttrAccessibleAlways`.
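The first three bullets combine into the token-family pattern used by many OAuth deployments: rotate the refresh token on every use, and if a previously-rotated token ever comes back, assume theft and revoke the whole family. A minimal in-memory sketch (all names hypothetical, server-side state only):

```python
import secrets

class TokenStore:
    """Refresh-token rotation with reuse detection via token families."""
    def __init__(self):
        self.family_of = {}          # every token ever issued -> family id
        self.current = set()         # tokens that are still valid
        self.revoked_families = set()

    def issue(self) -> str:
        """Start a new session: new family, first refresh token."""
        return self._mint(secrets.token_hex(8))

    def _mint(self, fam: str) -> str:
        tok = secrets.token_urlsafe(32)
        self.family_of[tok] = fam
        self.current.add(tok)
        return tok

    def refresh(self, tok: str) -> str:
        fam = self.family_of.get(tok)
        if fam is None or fam in self.revoked_families:
            raise PermissionError("unknown or revoked token")
        if tok not in self.current:
            # Replay of an already-rotated token: either the legitimate client
            # or a thief holds a stale copy — revoke the entire family.
            self.revoked_families.add(fam)
            self.current -= {t for t, f in self.family_of.items() if f == fam}
            raise PermissionError("token reuse — family revoked")
        self.current.discard(tok)    # rotate: old token dies on use
        return self._mint(fam)
```

Device binding (the third bullet) would add a per-device key check to `refresh`; it is omitted here to keep the rotation logic visible.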
14. Cryptography & Key Management#
The universal crypto finding in mobile is key management, not algorithm choice. An app using AES-256-GCM with a key embedded as a byte[] constant in the binary provides zero protection. The algorithm is fine; the key is the problem.
Use the platform keystore#
- Android Keystore — keys generated inside the Keystore never leave hardware (StrongBox on supporting devices). Use `KeyGenParameterSpec` with `setUserAuthenticationRequired`, `setUnlockedDeviceRequired`, and `setIsStrongBoxBacked`.
- iOS Secure Enclave — keys generated with `kSecAttrTokenID = kSecAttrTokenIDSecureEnclave` are non-extractable. Pair with `SecAccessControl` for biometric gating.
Algorithm do’s and don’ts#
| Do | Don’t |
|---|---|
| AES-256-GCM, ChaCha20-Poly1305 | ECB mode, CBC without MAC |
| SHA-256/384/512, BLAKE2/3 | MD5, SHA-1 for security |
| HKDF for key derivation | Password as raw key |
| Argon2id / PBKDF2 with high iterations for password KDF | Single SHA-256 of password |
| Ed25519 / ECDSA P-256 | DSA, RSA-1024 |
| SecureRandom / arc4random_buf / /dev/urandom | java.util.Random, rand() |
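The "HKDF for key derivation" row deserves a concrete shape, since the common mistake is hashing a shared secret once and calling it a key. Below is a minimal RFC 5869 HKDF using only the standard library (a sketch for illustration — in production use your platform's vetted implementation):

```python
import hmac, hashlib

def hkdf(ikm: bytes, length: int, salt: bytes = b"", info: bytes = b"",
         hashmod=hashlib.sha256) -> bytes:
    """RFC 5869 HKDF: extract a fixed-size PRK from input keying material,
    then expand it to `length` bytes bound to an application `info` label."""
    if not salt:
        salt = b"\x00" * hashmod().digest_size
    prk = hmac.new(salt, ikm, hashmod).digest()              # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                 # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashmod).digest()
        okm += block
        counter += 1
    return okm[:length]

# Derive independent keys from one master secret by varying `info` —
# never feed a password or raw shared secret directly into AES.
enc_key = hkdf(b"master-secret", 32, salt=b"app-salt", info=b"encryption")
mac_key = hkdf(b"master-secret", 32, salt=b"app-salt", info=b"mac")
```

Note HKDF is for high-entropy input material; passwords still need Argon2id or PBKDF2 first, as the table says.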
Hardcoded keys and string obfuscation#
A common finding: the app stores an AES key in resources, in a native string, or “obfuscated” via XOR or base64. These are all trivially recovered by jadx/Hopper + strings + Frida. If the key has to be in the binary, it’s effectively public. Either derive from server-provided material, gate behind user authentication, or accept the key as public.
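To make "trivially recovered" concrete, here is the entire attack against single-byte-XOR string obfuscation — the secret and XOR constant below are invented for illustration:

```python
# A hypothetical app "hides" its API key by XORing every byte with a constant.
SECRET = b"sk_live_0123456789abcdef"
OBFUSCATED = bytes(b ^ 0x5A for b in SECRET)   # this is what ships in the binary

def brute_force_single_byte_xor(blob):
    """Try all 256 keys and keep candidates that decode to printable ASCII.
    This loop — a fraction of a millisecond — is the whole 'attack'."""
    hits = []
    for k in range(256):
        cand = bytes(b ^ k for b in blob)
        if all(32 <= c < 127 for c in cand):
            hits.append((k, cand))
    return hits
```

Multi-byte XOR and base64 fall just as fast to `strings` plus a Frida hook on the decryption routine, which is why the only honest options are the three listed above.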
15. Resilience / Anti-Tamper / RASP#
Resilience controls (MASVS-R) raise attacker cost without fixing bugs. They belong in apps where client-side logic is a revenue target or where reverse engineering has direct financial impact (DRM, payment, games).
Typical controls#
| Control | What it does | Frida-bypassable? |
|---|---|---|
| Root/JB detection | Refuse to run on compromised devices | Yes |
| Debugger detection (ptrace, isDebuggerConnected) | Detect attached debuggers | Yes, but multiple checks slow bypass |
| Frida/tool detection | Look for frida-server, ports 27042/27043, gadget libs in image list, re.frida.server process | Yes — hook the detector |
| Integrity checks | Compute hash of own binary / classes.dex and compare | Hook the comparison |
| String & control flow obfuscation | Harder to read statically | Dynamic analysis still works |
| Packers / encrypted DEX | Binary is unpacked at runtime | Dump memory post-unpack |
| Native-code checks | Harder to find via Java hooks | Hook at syscall boundary |
| Emulator detection | Check for QEMU artifacts, sensor diversity, goldfish kernel | Hook the checks |
| Hook detection | Check prologue bytes for inline hook patterns | Use Stalker instead of Interceptor |
| Server-side attestation (Play Integrity, DeviceCheck) | Remote verification | Hardest — bypass requires hardware spoofing |
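The "Frida/tool detection" row is worth grounding: the weakest form is a probe of frida-server's default port. Real implementations do this in native code; the Python sketch below just illustrates the check and why the table marks it bypassable (run frida-server on another port with `-l`, embed frida-gadget, or hook `connect` itself):

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.2) -> bool:
    """True if a TCP listener accepts a connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def naive_frida_check() -> bool:
    """Probe frida-server's default ports — the weakest Frida detection."""
    return any(port_open("127.0.0.1", p) for p in (27042, 27043))
```

Stronger detectors combine this with scanning the loaded-module list for gadget libraries and checking for the `re.frida.server` process, but every client-side check shares the same ceiling: the attacker can hook the detector.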
Strong resilience characteristics#
- Multiple independent checks — bypassing one isn’t enough.
- Native-code checks — harder to hook from Java/ObjC.
- Checks triggered at random points during execution, not just at startup.
- Soft failure — don’t immediately crash; degrade the UI, delay, or corrupt data so the attacker can’t trivially identify the check location.
- Server-side component — the server refuses to serve valuable data unless attestation succeeds.
RASP vendors#
Commercial RASP SDKs (Guardsquare DexGuard/iXGuard, Talsec, Promon Shield, Zimperium zShield, Appdome) bundle obfuscation, anti-tamper, anti-hook, root/JB detection, and attestation. They’re not silver bullets — determined attackers bypass all of them given time — but they raise the bar from minutes to days, which is often sufficient for the threat model.
16. Tooling Reference#
Static analysis#
| Tool | Platform | Notes |
|---|---|---|
| MobSF | Android/iOS | Free SAST+DAST, REST API for CI, strongest for L1 baseline |
| jadx / jadx-gui | Android | DEX → readable Java |
| apktool | Android | Smali disassembly and rebuild |
| dex2jar / CFR / Procyon | Android | Alternative decompilation paths |
| Ghidra | Both | Free, SRE from NSA, ARM/x86/DEX |
| Hopper | iOS (mach-O) | Commercial, strong ObjC decompiler |
| IDA Pro | Both | Commercial gold standard |
| Binary Ninja | Both | Commercial, strong scripting |
| class-dump | iOS | ObjC header extraction |
| strings / rabin2 | Both | String extraction, symbols |
| Oversecured | Both | Commercial deep static analysis |
| AppKnox | Both | Commercial SAST/DAST/API testing |
| Semgrep (mobile rules) | Source | Pattern-based SAST on source |
Dynamic analysis & instrumentation#
| Tool | Platform | Use |
|---|---|---|
| Frida | Both | Universal DBI |
| objection | Both | Frida-based exploration (explore, ios keychain dump, android shell) |
| Drozer | Android | IPC / component fuzzing |
| Fridump | Both | Memory dumping |
| r2frida | Both | radare2 + Frida |
| Grapefruit | iOS | Web UI over Frida |
| jnitrace | Android | Trace JNI calls |
| frida-ios-dump | iOS | Decrypt App Store IPAs |
| Needle | iOS | (legacy) iOS testing framework |
| Corellium | Both | Virtualized iOS/Android for RE |
| Genymotion / Android Studio emulator | Android | Root-available test devices |
| Burp Suite / mitmproxy / Charles / Proxyman | Both | HTTP(S) MITM |
Jailbreak / root tooling#
- Android: Magisk (soft-root with DenyList), Genymotion (rooted by default), rooted Pixel + AOSP builds.
- iOS: checkra1n (semi-tethered, checkm8 devices), unc0ver, palera1n (iOS 15–16), Corellium virtual devices.
Test devices#
A mobile assessment lab typically has: one rooted Android phone, one non-rooted Android phone (for release build testing), one jailbroken iPhone at the minimum iOS version the app supports, one non-jailbroken iPhone for production-like testing, and Corellium or simulators for rapid iteration across OS versions.
17. Testing Methodology#
A mobile pentest maps cleanly to MASTG categories. A suggested week-long assessment:
Day 1 — setup & recon#
- Obtain APK/IPA (Play Store, TestFlight, MDM dump, or `frida-ios-dump` from a jailbroken device).
- Run MobSF for baseline static findings and manifest dump.
- `jadx-gui` or `Hopper` for manual code walk — focus on entry points, URLs, keys, crypto, auth.
- Identify: minSdk/target SDK, signing cert, permissions, exported components, URL schemes, deep links, third-party SDKs with versions.
Day 2 — storage & platform#
- Install on test device, exercise all flows, inventory files written.
- Inspect `shared_prefs`, `databases`, `Library`, `Documents`, Keychain, Keystore.
- Dump Keychain with `objection ios keychain dump`.
- Test `adb backup` / iTunes backup exposure.
- Clipboard and keyboard cache for sensitive fields.
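After pulling the app's data directory, a quick regex pass over the files speeds up the storage review. The helper below is a hypothetical triage script (patterns and the `scan_dump` name are illustrative); it flags JWT-shaped strings and key/password assignments in a pulled `shared_prefs`-style dump:

```python
import re
from pathlib import Path

# Illustrative patterns: JWTs and generic api_key/password/secret assignments.
PATTERNS = {
    "jwt": re.compile(
        r"eyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{5,}"),
    "assignment": re.compile(
        r"(?i)(api[_-]?key|password|secret|token)\s*[=:>\"']{1,3}\s*[^\s<\"']{8,}"),
}

def scan_dump(root: str):
    """Walk a pulled data directory and flag likely secrets for manual review."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for name, rx in PATTERNS.items():
            for m in rx.finditer(text):
                findings.append((str(path), name, m.group(0)[:60]))
    return findings
```

Every hit still needs manual confirmation — the point is to not eyeball hundreds of XML and SQLite files unaided.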
Day 3 — network#
- Proxy all traffic (Burp + system CA trust).
- If pinning blocks, bypass with Frida/objection and re-verify.
- Map every API endpoint, auth mechanism, session handling.
- Test for auth bypass, IDOR, server-side input validation (this is usually the bulk of impact).
- Check TLS version, cipher suites, certificate validation, cleartext traffic config.
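The client-side half of the last bullet — TLS 1.2+ with certificate and hostname validation — can be sketched with Python's `ssl` module; the same three properties are what you verify in the app's own network configuration:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """A client context matching the checklist: TLS 1.2 minimum,
    certificate validation required, hostname verification on."""
    ctx = ssl.create_default_context()   # CERT_REQUIRED + check_hostname=True
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

On Android the equivalent lives in `network_security_config.xml` and the TLS defaults of the HTTP client; on iOS, in ATS. Any code path that builds its own trust manager or sets `NSAllowsArbitraryLoads` is a finding.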
Day 4 — platform IPC & deep links#
- Enumerate exported Android components with Drozer; invoke each with crafted intents.
- Fuzz ContentProviders for SQLi and path traversal.
- Test every URL scheme and universal link for unauthenticated actions and WebView injection.
- Test WebView for JS bridge exposure, file access, universal XSS.
- iOS: enumerate app extensions and test their boundaries.
Day 5 — resilience & report#
- Test root/JB detection → bypass; test anti-debug → bypass; measure time to bypass.
- Test integrity checks, Frida detection.
- If server attestation is in use, test what the server does on failure.
- Write up findings mapped to MASVS requirements with reproduction steps, severity, and remediation.
Reporting#
Each finding should include: MASVS ID, severity (CVSS or platform scheme), affected component, reproduction steps, evidence (screenshots, logs, Frida scripts, packet captures), impact narrative, remediation. Track remediation against MASVS IDs so regressions in re-test are easy to spot.
18. Notable CVEs & Real-World Incidents#
Mobile-relevant issues from the sources and broader ecosystem:
| Year | Incident | Relevance |
|---|---|---|
| 2014 | Heartbleed (CVE-2014-0160) | OpenSSL in countless mobile apps and SDKs — years later, unpatched hosts still existed. Highlights supply chain risk. |
| 2015 | addJavascriptInterface pre-API 17 | Any JS in loaded page → arbitrary Java method via reflection → RCE. Hundreds of affected apps. |
| 2015 | XcodeGhost | Supply chain: attacker-modified Xcode injected malicious code into legitimate apps. Hundreds of App Store apps compromised. |
| 2017 | Janus (CVE-2017-13156) | Android APK signing scheme v1 allowed DEX injection into signed APKs without breaking the signature. Fixed by v2 signing. |
| 2017 | BlueBorne | Bluetooth stack RCE across Android, iOS, Linux. |
| 2018 | Stagefright evolution | Android media parsing RCE via MMS — led to sandbox hardening. |
| 2019 | iOS 12 FaceTime bug | Group FaceTime let caller hear callee before they answered. Platform IPC flaw. |
| 2019 | Kids’ smartwatches (Rapid7) | IoT mobile ecosystem: GPS watches accepted config via SMS, bypassing contact filters. |
| 2020 | StrandHogg / StrandHogg 2.0 | Android task hijacking via taskAffinity / activity reparenting — overlaid legitimate apps. |
| 2021 | iOS iMessage zero-click (NSO Pegasus) | Integer overflow in CoreGraphics PDF parsing, full device compromise with no user interaction. |
| 2022 | Play Store SharkBot / FluBot campaigns | Banker trojans abusing Accessibility Services to steal credentials. |
| 2023 | OWASP MASVS/MASTG refactor | Test IDs renumbered; MASVS-PRIVACY added. Compliance mappings must be updated. |
| 2024 | Kia web portal API | Car unlock/track via API flaw keyed off license plate — illustrates backend-dominance of mobile risk. |
| 2024 | SMS 2FA telecom breach | Unencrypted SMS exposed — reinforces moving off SMS for MFA. |
| 2024 | Location data broker breach (Candy Crush, Tinder) | Terabytes of location data in app SDKs exposed — third-party SDK risk. |
| 2024 | ASP.NET machine key abuse | Public machine key reused by apps → code injection. Not mobile-specific but hits mobile backends. |
Recurring lessons: the expensive bugs are in backends, SDKs, and WebView bridges; client-side controls fail against Frida; SMS is a broken auth factor; supply chain (Xcode, SDKs, analytics) dominates incident frequency.
19. Defensive Checklist#
A condensed secure-coding and deployment checklist mapped to MASVS categories.
MASVS-STORAGE#
- No auth tokens, keys, PII in `SharedPreferences` / `NSUserDefaults`.
- Use EncryptedSharedPreferences / Keychain with correct protection class.
- `android:allowBackup="false"` or explicit exclusion of sensitive files.
- No sensitive data on external storage.
- Logs contain no tokens, PII, or full request bodies.
- Crash reporter scrubs sensitive fields.
- Sensitive views hidden / blurred on backgrounding.
- Pasteboard usage minimized; expiration set on sensitive copies.
- Keyboard autocorrect/caching disabled on sensitive inputs.
MASVS-CRYPTO#
- Keys generated in and never leave Android Keystore / Secure Enclave.
- No hardcoded keys, no string-obfuscated keys used as real secrets.
- Approved algorithms: AES-GCM, SHA-256+, Ed25519 / ECDSA P-256, Argon2id/PBKDF2.
- `SecureRandom` / `arc4random_buf` for randomness.
- Keys bound to user authentication where appropriate.
MASVS-AUTH#
- No client-side authorization — every privileged action server-checked.
- Short-lived access tokens + refresh tokens; rotation on use.
- Server-side logout invalidation.
- Biometric auth tied to a Keystore/Enclave `CryptoObject`.
- Re-authentication for sensitive actions.
- No SMS-only 2FA for high-value accounts.
MASVS-NETWORK#
- TLS 1.2+ enforced, no cleartext traffic.
- `cleartextTrafficPermitted="false"` / ATS enforced; no `NSAllowsArbitraryLoads`.
- Certificate pinning for sensitive domains, with backup pins.
- Pin rotation plan documented.
- No pinning bypass possible via an unpinned `WebView` or third-party SDK.
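For the pinning bullets, the pin value itself is standardized: OkHttp's `CertificatePinner` and TrustKit both use `sha256/` plus the base64 SHA-256 of the DER-encoded SubjectPublicKeyInfo. Extracting the SPKI from a certificate needs an X.509 parser (e.g. `openssl x509 -pubkey` or the `cryptography` package), so this stdlib sketch assumes the SPKI bytes are already in hand:

```python
import base64, hashlib

def spki_pin(spki_der: bytes) -> str:
    """OkHttp/TrustKit-style pin: 'sha256/' + b64(SHA-256(SPKI DER)).
    `spki_der` is assumed already extracted from the certificate."""
    digest = hashlib.sha256(spki_der).digest()
    return "sha256/" + base64.b64encode(digest).decode()
```

Pinning the SPKI rather than the whole certificate is what makes backup pins and routine renewal workable: a reissued cert with the same key keeps the same pin.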
MASVS-PLATFORM#
- Every component has explicit `android:exported` on API 31+.
- Permission checks on every exported component.
- `PendingIntent` uses `FLAG_IMMUTABLE`.
- ContentProvider paths validated; parameterized SQL; no `openFile` traversal.
- Deep links require re-auth for actions; parameters validated.
- App Links / Universal Links configured with `autoVerify` / AASA.
- WebView: `javaScriptEnabled` only if needed; no dangerous file/universal access; JS bridge surface minimized.
- `UIWebView` removed; `WKWebView` with restrictive navigation delegate.
- iOS URL scheme handlers validate source and parameters.
MASVS-CODE#
- Debug flags off in release (`android:debuggable="false"`).
- Third-party SDKs inventoried and version-pinned; CVEs monitored.
- No leftover debug endpoints in release.
- Error messages don’t leak stack traces or internal state.
- Compiler hardening flags enabled (`-fstack-protector-strong`, PIE, `FORTIFY_SOURCE`).
MASVS-RESILIENCE (if in scope)#
- Root/JB detection with multiple independent checks.
- Debugger detection.
- Frida / tool detection.
- Integrity check of own binary.
- String and control-flow obfuscation.
- Server-side attestation (Play Integrity / DeviceCheck / App Attest).
- Soft failure mode so check locations are not trivially identifiable.
MASVS-PRIVACY#
- Data minimization — collect only what’s needed.
- Consent for non-essential telemetry.
- PII scrubbed from logs and crash reports.
- Third-party analytics SDKs reviewed for data exfiltration.
- User-facing data deletion functional and complete.
Appendix A: Quick command reference#
```shell
# Android
adb shell pm list packages -f | grep example
adb shell dumpsys package com.example.app
adb shell run-as com.example.app ls -la
adb pull /data/data/com.example.app/shared_prefs/
aapt dump badging base.apk
apktool d base.apk
jadx-gui base.apk
apksigner verify --print-certs base.apk

# iOS (jailbroken)
frida-ps -Uai
frida-ios-dump/dump.py "App Name"
class-dump -H App -o headers/
ldid -e App   # entitlements

# Frida / objection
frida -U -f com.example.app -l script.js --no-pause
frida-trace -U -i "recv*" -n "Example"
objection -g com.example.app explore
> android hooking list classes
> android hooking search methods isRoot
> ios keychain dump
> android sslpinning disable

# Proxy / traffic
mitmproxy --mode wireguard
mitmdump -s dump_to_file.py

# Drozer
drozer console connect
run app.package.attacksurface com.example.app
run scanner.provider.injection -a com.example.app
```
Appendix B: Suggested Frida unpinning libraries#
- `frida-multiple-unpinning` (akabe1) — Android universal.
- `fridantiroot` — Android root + pinning.
- `ios-ssl-pinning-bypass` scripts — iOS `URLSession`, TrustKit.
- `objection` built-in `android sslpinning disable` / `ios sslpinning disable`.
- Codeshare (https://codeshare.frida.re) for ad-hoc published scripts.
Always diff-check community scripts before running them against production apps — a “bypass” script that also exfiltrates data is a known supply chain risk in the Frida ecosystem.
End of guide.