Phone and tablet security

The latest iPhone update, iOS 17.3, provides Stolen Device Protection and its Security Delay, both of which require Face ID or Touch ID. Apple is pushing it strongly, and all and sundry seem to be firmly in favour of it too. The aim of this part of the update seems very worthwhile, but do the cons still outweigh the pros?

1 Like

To answer that would require knowing what the benefits of those particular functions are (what problem are they solving) and what the security and privacy costs are (including knowing how they are implemented).

My phone isn't (yet) badgering me to do that update.

Putting aside those particular functions, it is not unheard of for Apple's approach to security to make it "impossible" for a user to get back into his or her own device and/or "impossible" for an executor to get into a device of the deceased.

1 Like

iOS facial recognition is entirely local - nothing is uploaded to Apple or anywhere else. It is stored in your device's Secure Enclave.

I do not know how other brands do this stuff.

1 Like

Note that biometric data never leaves the phone (or PC). It's also converted in a way that can't be reversed to the original fingerprint or face image, and stored encrypted on the phone. So even if someone hacked into your phone and got hold of that data, they would not be able to use it to identify you anywhere else.

The way biometrics are used for signing is that you present your fingerprint / face / PIN etc to the device to authorise it to access the stored passkey for a particular service. The passkey itself never leaves the phone: it is the 'private' part of a public-private key pair to which the service's server holds the 'public' key.
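To make that concrete, here is a minimal Swift sketch (assumptions: the LocalAuthentication framework on iOS, and a made-up prompt string) showing how an app asks the OS to gate access behind Face ID / Touch ID. It illustrates the idea only; it is not Apple's internal implementation.

```swift
import LocalAuthentication

// Ask the OS to verify the user with Face ID / Touch ID before the app
// touches any locally stored credential. The biometric check happens
// entirely on-device; the app only learns "success" or "failure".
let context = LAContext()
var error: NSError?

if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your saved passkey") { success, _ in
        if success {
            // Only now would the app use the locally stored private key.
        }
    }
}
```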

This is Apple's explanation of how passkeys work.

From that article (with my emphasis):

Passkeys are built on the WebAuthentication (or "WebAuthn") standard, which uses public key cryptography. During the account registration process, the operating system creates a unique cryptographic key pair to associate with an account for that app or website. These keys are generated by the device, securely and uniquely, for every account.

One of these keys is public and is stored on the server. This public key is not a secret. The other key is private and is what is needed to actually sign in. The server never learns what the private key is. On Apple devices that support Touch ID or Face ID, these authentication methods can be used to authorise use of the passkey, which then authenticates the user to the app or website. Shared secrets are not transmitted and the server does not need to protect the public key. This makes passkeys very strong, easy-to-use credentials that are highly phishing-resistant. And platform vendors have worked together within the FIDO Alliance to make sure passkey implementations are compatible cross-platform and can work on as many devices as possible.

This mechanism is far more secure than passwords.
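For anyone who wants to see the challenge-response idea in code, here is a hedged sketch using CryptoKit's P256 signing as a stand-in for the real WebAuthn machinery; the challenge value is a placeholder and nothing here is Apple's actual passkey code.

```swift
import CryptoKit
import Foundation

do {
    // The key pair is generated on the device; only the public key goes to the server.
    let privateKey = P256.Signing.PrivateKey()
    let publicKey = privateKey.publicKey

    // At sign-in time the server sends a fresh random challenge (placeholder value here).
    let challenge = Data("server-issued-random-challenge".utf8)

    // The device signs the challenge with the private key, which never leaves the device.
    let signature = try privateKey.signature(for: challenge)

    // The server checks the signature using its stored copy of the public key.
    print(publicKey.isValidSignature(signature, for: challenge))  // true
} catch {
    print("Signing failed: \(error)")
}
```

Because the server only ever holds the public key, a breach of the server gives an attacker nothing they can use to sign in as you.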

4 Likes

How does that prevent a stolen image of the owner being presented to the phone to unlock it? Can they reliably distinguish between that and a live human face? Similarly for fingerprints or retinal patterns, what stops recorded images being used?

Being able to fool face unlock with a photo, or fool a fingerprint reader with a 3D-printed fingerprint created from a scan, is another matter. I wouldn't say that phones are now immune from that type of attack, but it isn't as easy as it once was.

More importantly, though, bear in mind that even if a particular phone would be easily fooled by a photo or fingerprint image, the "malicious actor" must first get hold of that phone.

2 Likes

From the link I posted earlier:

Face ID matches against depth information, which isn't found in print or 2D digital photographs. It's designed to protect against spoofing by masks or other techniques through the use of sophisticated anti-spoofing neural networks. Face ID is even attention-aware, and Face ID with a mask will always confirm attention. Face ID recognises if your eyes are open and your attention is directed towards the device. This makes it more difficult for someone to unlock your device without your knowledge (such as when you are sleeping).

Photos do not show depth. They won't show that you blink. The system also uses an infrared light, which presumably detects heat (confirms you're not dead).

Again, this is only Apple - you will need to do your own research regarding other brands.

Apple has similarly taken care with its Touch ID.

That article does not tell the reader an awful lot about how it works, but I am pretty confident it has been improved greatly since it was first introduced, when it could be fooled by a model of a fingerprint. Even in 2013 the process to fool the sensor was rather complex.

https://www.ccc.de/en/updates/2013/ccc-breaks-apple-touchid

4 Likes

It uses the infrared light to capture an infrared-based image of your face. With both the visible light and infrared light images, it "transforms the depth map and infrared image into a mathematical representation and compares that representation to the enrolled facial data". That is why wearing a mask, makeup or sunglasses, or having grown some facial hair, can still allow Face ID to work.

"Each time you unlock your device, the TrueDepth camera recognises you by capturing accurate depth data and an infrared image. This information is matched against the stored mathematical representation to authenticate."

It has been known for a while that there are differences in the textures between the red and green colour channels of printed or scanned images of a person and a live person.

Whether that is used in the iPhone is not made clear, but it could well be part of the system.
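Purely as a hypothetical illustration of what "compares that representation to the enrolled facial data" could mean, the sketch below treats each representation as a vector of numbers and accepts a match when the distance between the vectors is under a threshold. The vector contents and the threshold are invented; Apple does not publish the real details.

```swift
// Hypothetical comparison of face "representations" (vectors of numbers).
// The real matching is done inside the Secure Enclave with an undisclosed model.
func isMatch(enrolled: [Float], captured: [Float], threshold: Float = 0.6) -> Bool {
    guard enrolled.count == captured.count else { return false }
    let distance = zip(enrolled, captured)
        .map { ($0 - $1) * ($0 - $1) }
        .reduce(0, +)
        .squareRoot()
    return distance < threshold   // small distance => same face
}

// e.g. isMatch(enrolled: [0.10, 0.90, 0.30], captured: [0.12, 0.88, 0.31])  // true, with made-up numbers
```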

2 Likes

These are commendable properties of an implementation … but none of those statements is verifiable.

And what would happen if Apple is served with a superinjunction by a FISA court? (Or the equivalent under Australian law.)

So from the sounds of it, and with reference to the original question, this is just Apple being Apple. You may not be able to choose to use another authentication method, even if you understand the pros and cons.

What need is there of theft? In this day and age of ubiquitous surveillance, your image is everywhere.

As a fun option, it is well known that criminals install a surveillance camera on an ATM in order to capture your PIN, while the bank installs a surveillance camera on an ATM in order to capture your face. Criminals could easily change their approach to grab your face. So that's two copies straight off the bat - live, close-up facial images.

Supermarkets install a surveillance camera on the self-checkout - again live and fairly close-up.

If you want to go full paranoid, you need to build your own computing device, including designing and fabricating the chips, developing all the firmware and software, and anything that connects to it. Good luck.

For the rest, there's Mastercard. Oops - I mean, there's a degree of trust that we have to assume in order to function in the modern world.

5 Likes

Thank you one and all for the above information.

The 'Learn more…' link that can be accessed on the phone prior to installing the update has only 8 lines of text and provides very little information about the update. In my ignorance, I didn't even know whether, once installed, Face or Touch ID can be turned off. It can be, according to my wife's updated phone; there are 8 screens of information available once the iPhone has been updated.

It seems that the use of Stolen Device Protection and Security Delay is a reasonable option, or am I mistaken?

Yes, it is worth enabling that option, and the Find My function. If someone steals your phone, it will be harder for them to break into it, and you have more time to notice the loss and use Find My to lock the phone remotely.

3 Likes

If it is enrolled in the Find My app, an iPhone can be erased remotely if need be. iPads can also benefit from this service. I can't speak to Macs, but I assume it is the same.

While this may not result in return of the device, it helps to secure the user's information. Another benefit of the app is that a lost device can be marked as lost, and if an Apple device passes nearby (and the lost device has power), the location is recorded and sent as a notification to Apple and to the user, if they have access to the app or iCloud on another device. I have used this lost function, and my device was located and sent back to me; I locked it and used the custom message function to provide some contact details.

If the app isn't enabled, it is far more difficult to recover a lost or misplaced device. I highly recommend enabling it.

4 Likes

It has been known for a company (or a politician, for that matter) to make a statement where:

  • the statement is the exact opposite of the truth (say one thing, do the opposite)
  • the statement is an exaggeration or distortion of the truth
  • the statement omits important details from the truth (possibly in the interests of getting a 10 second sound bite)
  • the statement furthers their interests, not yours

and that's putting aside the unusual possibility (limited under Australian law to technology companies) that the statement that a company makes is knowingly false but they are being forced to make it by the government.

Any product claim by the manufacturer should be independently verifiable. Otherwise you are saying ā€œtrust meā€ about the product claim.

This is doubly true of biometrics because, by being "forced" to use biometrics, you are being forced to use the same "password" everywhere. If one falls, they all fall. Maybe Apple does have the resources and the expertise to do this properly but Apple will be just as broken if some other biometrics implementation proves not to be as robust (and, in any case, see later).

If you ask me, before you are allowed to handle biometric information, you should have to be independently verified / audited for how you handle that information, right down to the minutest detail. I don't realistically expect the government to follow up on that.

So to come right back to the original question:

Apple is imposing an artificial restriction on you. There is no inherent reason why you can't use the new functionality without using biometrics.

I would push back. Don't let Apple dictate terms to you. The market has to be sent a message by customers. If you cave in to their artificial restriction then you are missing an opportunity to send a message to Apple. However this is typical of Apple's "walled prison" / "we know best" approach.

Relying solely on biometrics, where that is the case, also gets away from the idea of multi-factor authentication, i.e. the compromise of one type of credential does not lead to compromise of the device. So even if I ever enabled "something that you are" on a device, I would only do so in conjunction with "something that you know" or "something that you have".

It is also important to balance security measures against the threat faced. Pretty much everyone faces the threat of their device being stolen by a vanilla criminal, or of losing the device through misadventure. For that threat, you only need to protect the information on the device (for which disk encryption is adequate) and ideally protect yourself against the cost of the loss of the physical device (partially mitigated by keeping a separate record of the IMEI).

Easier said than done but parts of the problem have been solved.

ā€œsecure enclaveā€ is a two-edged sword.

On the one hand, it makes it more difficult for a compromise of one part of the system to spread to others and e.g. steal sensitive material such as biometrics, keys or passwords.

On the other hand, it makes it more difficult / impossible for anyone to audit anything about how it works - so bugs are less likely to be discovered by security researchers. But this is really "security through obscurity". This is also anathema to open source, since even the machine code (binary) may be obscured (encrypted).
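To illustrate the "one hand" benefit from an app developer's point of view, here is a minimal sketch using CryptoKit's SecureEnclave API (assuming a device that actually has a Secure Enclave): the private key is created inside the enclave and the app only ever sees an opaque handle, so even a compromised app cannot read the raw key.

```swift
import CryptoKit
import Foundation

do {
    // The private key is generated inside the Secure Enclave and cannot be exported.
    let enclaveKey = try SecureEnclave.P256.Signing.PrivateKey()
    let handle = enclaveKey.dataRepresentation   // opaque handle, safe to persist outside the enclave
    let publicKey = enclaveKey.publicKey         // shareable with a server

    // Signing happens inside the enclave; the raw key never appears in app memory.
    let message = Data("example message".utf8)
    let signature = try enclaveKey.signature(for: message)
    print(publicKey.isValidSignature(signature, for: message))  // true
    print("Key handle size: \(handle.count) bytes")
} catch {
    print("No Secure Enclave available on this device: \(error)")
}
```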

I note also from iOS - Wikipedia

In 2020, security flaws in the SEP were discovered, causing concerns about Apple devices such as iPhones.

As an aside, a secure enclave can be implemented in one of two ways:

  • a separate, dedicated, cryptographic processor (and that in turn can either be integrated, on-chip or physically isolated in a separate chip) - it looks as if Apple is using the "integrated, on-chip" approach for the separate processor - which makes it even harder to verify that it is secure
  • execute on (one of) the main CPUs - I believe this is Intel's approach and several bugs have been found

Curiously, Intel has deprecated its secure enclave implementation (known as SGX, Software Guard Extensions), at least for consumer-level Intel CPUs. (This doesn't affect Apple of course, because they engineer their own ARM architecture chips - having started off not using Intel CPUs, then using Intel CPUs for a few years, and now having moved away from Intel again.)

As noted elsewhere, the more you lock stuff up in silicon, the harder it is to fix when the inevitable security bug is discovered.

1 Like

Any claim about system-level security is intrinsically unverifiable when looking at an individual manufactured device or version of software. No independent audit will be able to look at absolutely every possible security risk; it can only focus on whether security principles have been adequately applied.

Yes, Apple is relying to some extent upon security through obscurity, which tends to be A Bad Thing - as Kaspersky has demonstrated (with some difficulty).

On the bright side, Apple did introduce devices for security researchers a few years ago. These have reduced restrictions, and allow the researcher (who must already be vetted by Apple and report their findings to Apple before anyone else) to dig a little deeper than previously possible.

I must have missed something. It is possible to design and create your own silicon?

We definitely have not solved the problem of complex code without unintended 'features' or 'bugs'. 'Provable' code to date has extremely limited application.

1 Like

You're going a bit further than I was. I just wanted product claims to be independently verifiable. (So even undetected bugs are OK provided that the bug is not picked up in the analysis by the independent verifier. The point is that such independent verification of the product claim becomes possible.)

However an assertion by Apple that "they proved mathematically that their enclave is secure" would just be more "trust me". They would have to release both the proof and the code, otherwise it's just another unverifiable product claim.

Ironically, because a separate dedicated secure-enclave processor has such limited functionality for the purposes we are discussing here (storing sensitive data securely), it might actually be possible to provide such a mathematical proof.

All that said, provably secure code can just move the bug somewhere else, i.e. the proof itself can have errors in it (which doesn't necessarily mean that the code has errors in it).

To add further irony, plenty of software and hardware makers have implemented 'security' and 'provably secure encryption algorithms' insecurely. It is very easy to use the mathematically secure tool and mess up the implementation.

Never trust anyone who says "military-grade encryption", as this is meaningless in itself.

This has been shown recently: one of the approved 'quantum-secure' algorithms had an implementation error that permitted timing attacks. Fortunately it was an implementation error rather than a fundamental one.
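As a small illustration of how a timing side channel sneaks into otherwise sound code (a generic example, not the specific flaw referred to above), compare a naive equality check, which exits at the first mismatching byte, with a constant-time one that always does the same amount of work:

```swift
// Naive comparison: returns as soon as a mismatch is found, so the running
// time leaks how many leading bytes of the secret matched.
func naiveEqual(_ a: [UInt8], _ b: [UInt8]) -> Bool {
    guard a.count == b.count else { return false }
    for i in 0..<a.count where a[i] != b[i] { return false }   // early exit leaks timing
    return true
}

// Constant-time comparison: inspects every byte regardless of where (or
// whether) a mismatch occurs, so timing reveals nothing about the secret.
func constantTimeEqual(_ a: [UInt8], _ b: [UInt8]) -> Bool {
    guard a.count == b.count else { return false }
    var difference: UInt8 = 0
    for i in 0..<a.count { difference |= a[i] ^ b[i] }          // same work for every input
    return difference == 0
}
```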

(For any nerds in the community, two of the quantum-safe algorithms are named CRYSTALS-Kyber and CRYSTALS-Dilithium.)

2 Likes