
Alan West

Hardware attestation locks out legitimate users when treated as a binary check. Here's how to build a tiered trust model that actually works.
Last month I got a bug report that made me close my laptop and go for a walk. A paying user couldn't log in. Their device was rooted? Not according to them. Custom ROM? Yes. A modern, security-hardened Android build with verified boot and hardware-backed keys. The kind of setup that's arguably more secure than a stock device.
My app rejected them anyway. Why? Because somewhere along the way, I had wired up the strictest integrity verdict I could find and called it a day. Classic mistake.
If you've shipped any mobile app that talks to a backend, you've probably run into the same trap. Let's dig into why hardware attestation locks out legitimate users, and what to actually do about it.
You add an integrity check to gate sensitive operations — login, payments, key recovery, whatever. The API gives you a verdict. You check the strongest tier. Ship it.
Then the support tickets roll in: users on custom ROMs with verified boot, users whose hardware-backed keys are perfectly intact, users who did nothing wrong except not run the stock OS.
And here's the kicker: the people getting blocked are often the most security-conscious users you have. They're running verified boot. Their keys live in a real TEE. The cryptographic chain is solid. But your app treats them like an attacker because a single boolean came back false.
Hardware attestation was designed to answer one question: "is this key stored in hardware that I trust?" That's it. A clean, useful primitive.
The problem is that platform-level integrity APIs bolt a lot of extra opinions on top: is the bootloader locked, is the OS build one the vendor has certified, are the vendor's own services installed and up to date.
These are policy decisions dressed up as security guarantees. A device can have rock-solid hardware-backed keys and fail these checks — because the checks aren't really about hardware security, they're about ecosystem control.
When your code does this:
```kotlin
// DON'T DO THIS
if (verdict.deviceIntegrity != STRONG_INTEGRITY) {
    return AuthResult.Rejected
}
```
You're not asking "can I trust this device's cryptographic operations?" You're asking "is this device on the vendor's preferred list?" Those are different questions, and conflating them is how you end up rejecting legitimate users.
The fix is to build a tiered trust model. Treat attestation as one signal among many, and gate operations based on actual risk — not on a single boolean from a black box.
Instead of relying solely on the platform's verdict, validate the hardware-backed key attestation directly. On Android this means parsing the X.509 certificate chain from a hardware-backed Keystore key and checking the attestation extension.
```kotlin
fun verifyKeyAttestation(certChain: List<X509Certificate>): AttestationResult {
    // Walk the chain back to a known root. (In production, also verify each
    // signature in the chain and check for revocation — root identity alone
    // isn't enough.)
    val root = certChain.last()
    if (!isKnownAttestationRoot(root)) {
        return AttestationResult.UnknownRoot
    }

    // The leaf cert contains the attestation extension (OID 1.3.6.1.4.1.11129.2.1.17)
    val leaf = certChain.first()
    val extension = leaf.getExtensionValue("1.3.6.1.4.1.11129.2.1.17")
        ?: return AttestationResult.NoAttestation

    val parsed = parseAttestationExtension(extension)

    // securityLevel tells us where the key actually lives
    return when (parsed.keymasterSecurityLevel) {
        SECURITY_LEVEL_STRONGBOX -> AttestationResult.StrongBox
        SECURITY_LEVEL_TRUSTED_ENVIRONMENT -> AttestationResult.Tee
        SECURITY_LEVEL_SOFTWARE -> AttestationResult.SoftwareOnly
        else -> AttestationResult.Unknown
    }
}
```
This tells you what you actually need to know: where the private key lives. A TEE-backed key is a TEE-backed key, regardless of which OS is running on top.
Google publishes the Android Keystore attestation root certificates for verification. Use those.
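On the server side, the root check can be as simple as pinning fingerprints of the published roots. Here's a minimal sketch in Python; `PINNED_ROOT_FINGERPRINTS` and `is_known_attestation_root` are hypothetical names, and the fingerprint shown is a placeholder, not a real Google root hash.

```python
import hashlib

# Placeholder set: fill with SHA-256 hashes of the DER-encoded attestation
# root certificates that Google publishes. The value below is NOT a real
# fingerprint -- it only marks where the pinned hashes go.
PINNED_ROOT_FINGERPRINTS = {
    "<sha256-of-der-encoded-google-attestation-root>",
}

def is_known_attestation_root(root_der: bytes) -> bool:
    """Return True if the chain's root cert matches a pinned fingerprint."""
    fingerprint = hashlib.sha256(root_der).hexdigest()
    return fingerprint in PINNED_ROOT_FINGERPRINTS
```

Pinning by fingerprint keeps the trust decision in your hands: a root that isn't on your list fails closed, and rotating roots is an explicit config change rather than a silent policy shift.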
Not every action needs maximum assurance. Build a matrix:
```kotlin
enum class TrustTier { Strong, Standard, Minimal }

fun requiredTier(operation: Operation): TrustTier = when (operation) {
    Operation.Login -> TrustTier.Standard
    Operation.ViewBalance -> TrustTier.Standard
    Operation.TransferUnderLimit -> TrustTier.Standard
    Operation.TransferOverLimit -> TrustTier.Strong
    Operation.ChangeRecoveryEmail -> TrustTier.Strong
    Operation.ReadOnlyPublicData -> TrustTier.Minimal
}
```
A user who can't pass Strong-tier checks should still be able to log in and see their account. They just hit step-up authentication for high-risk operations.
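On the server, the matrix drives a simple comparison: the session's achieved tier either clears the bar for the operation or triggers a step-up challenge. A sketch in Python, with hypothetical operation names and an `authorize` helper that isn't from any particular framework:

```python
from enum import IntEnum

class TrustTier(IntEnum):
    MINIMAL = 0
    STANDARD = 1
    STRONG = 2

# Required tier per operation, mirroring the Kotlin matrix
REQUIRED_TIER = {
    "login": TrustTier.STANDARD,
    "view_balance": TrustTier.STANDARD,
    "transfer_over_limit": TrustTier.STRONG,
    "change_recovery_email": TrustTier.STRONG,
    "read_public_data": TrustTier.MINIMAL,
}

def authorize(operation: str, session_tier: TrustTier) -> str:
    """Allow if the session meets the bar; otherwise challenge, don't reject."""
    required = REQUIRED_TIER[operation]
    if session_tier >= required:
        return "allow"
    return "step_up"  # e.g. TOTP or WebAuthn before the operation proceeds
```

Note that there is no "deny" branch: falling short of a tier routes the user into a challenge, which is the whole point of the model.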
Device attestation is one input. On the server, combine it with everything else you know:
```python
def assess_risk(session):
    score = 0

    # Attestation signal — graded, not binary
    if session.attestation == 'strongbox':
        score += 40
    elif session.attestation == 'tee':
        score += 30
    elif session.attestation == 'software':
        score += 10

    # Behavioral signals carry real weight
    if session.device_known_for_account(days=30):
        score += 25
    if session.ip_in_user_history():
        score += 15
    if session.geo_consistent_with_recent():
        score += 10

    # Negative signals
    if session.velocity_anomaly():
        score -= 30
    if session.is_known_bad_asn():
        score -= 20

    return score
```
A score above your threshold gets through. Below it, you challenge — TOTP, WebAuthn, email confirmation. You almost never need to hard-reject.
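The last step is mapping the score to an action. A minimal sketch; the threshold values are illustrative and would be tuned per application, and `decide` is a hypothetical helper:

```python
def decide(score: int, allow_threshold: int = 60, challenge_threshold: int = 20) -> str:
    """Map a risk score to an action: allow, challenge, or (rarely) deny."""
    if score >= allow_threshold:
        return "allow"
    if score >= challenge_threshold:
        return "challenge"  # TOTP, WebAuthn, email confirmation
    return "deny"
```

Keeping the deny band narrow is deliberate: a challenge costs a legitimate user thirty seconds, while a hard reject costs you a support ticket and possibly the user.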
If you really care about phishing-resistant auth and device binding, the standardized answer is WebAuthn. It uses the same hardware-backed keys, gives you cryptographic proof of possession, and doesn't depend on a single vendor's integrity verdict.
```javascript
// Client-side registration — relies on the platform authenticator's hardware
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: serverChallenge,
    rp: { name: 'My App' },
    user: { id: userId, name: email, displayName: name },
    pubKeyCredParams: [{ type: 'public-key', alg: -7 }], // ES256
    authenticatorSelection: {
      authenticatorAttachment: 'platform',
      userVerification: 'required',
      residentKey: 'preferred',
    },
    // attestation: 'none' is fine for most apps — you get the hardware binding
    // without locking out users whose attestation cert isn't on an allow-list
    attestation: 'none',
  },
});
```
Using attestation: 'none' is the key detail. You still get hardware-backed key storage and the phishing-resistance benefits. You just don't gate on a specific vendor's signature being present.
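On the server, registration still requires verifying the `clientDataJSON` the authenticator returns: its type, challenge, and origin must all match what you issued. This Python sketch covers only those checks; a full WebAuthn registration flow also parses the attestation object, extracts the credential public key, and stores it, which a library normally handles. The helper names here are hypothetical.

```python
import base64
import json

def b64url_decode(data: str) -> bytes:
    """Decode base64url, tolerating missing padding (as WebAuthn emits it)."""
    padded = data + "=" * (-len(data) % 4)
    return base64.urlsafe_b64decode(padded)

def verify_client_data(client_data_b64: str, expected_challenge: str,
                       expected_origin: str) -> bool:
    """Check the type, challenge, and origin fields of clientDataJSON."""
    client_data = json.loads(b64url_decode(client_data_b64))
    return (
        client_data.get("type") == "webauthn.create"
        and client_data.get("challenge") == expected_challenge
        and client_data.get("origin") == expected_origin
    )
```

The origin check is what makes WebAuthn phishing-resistant: a credential created on a look-alike domain produces a `clientDataJSON` that fails this comparison, no attestation verdict required.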
A few habits that save you from this whole class of bug:

- Treat platform integrity verdicts as one graded signal, never as a hard gate.
- Verify key attestation chains yourself, so you know where keys actually live rather than which ecosystem the device belongs to.
- Map operations to trust tiers and prefer step-up challenges over hard rejects.
- Test against at least one non-stock device; a custom ROM with verified boot will surface these failures before your users do.
The deeper lesson here is that security and ecosystem control got entangled, and we shipped libraries that conflate them. As app developers we don't have to play along. The cryptographic primitives — hardware-backed keys, attestation chains, WebAuthn — work fine on their own. Use those directly, and you get real security without telling your most careful users to go away.