The latest iPhone update, iOS 17.3, provides Stolen Device Protection and Security Delay, both of which require Face ID or Touch ID. Apple is pushing it strongly, and all and sundry seem to be in favour of it too. The aim of this part of the update seems to be very worthwhile, but do the cons still outweigh the pros?
To answer that would require knowing what the benefits of those particular functions are (what problem are they solving) and what the security and privacy costs are (including knowing how they are implemented).
My phone isn't (yet) badgering me to do that update.
Putting aside those particular functions, it is not unheard of for Apple's approach to security to make it "impossible" for a user to get back into his or her own device and/or "impossible" for an executor to get into a device of the deceased.
Note that biometric data never leaves the phone (or PC). It's also converted in a way that can't be reversed to the original fingerprint or face image, and stored encrypted on the phone. So even if someone hacked into your phone and got hold of that data, they would not be able to use it to identify you anywhere else.
The way biometrics are used for signing is that you present your fingerprint / face / PIN etc. to the device to authorise it to access the stored passkey for a particular service. The passkey itself never leaves the phone: it is the "private" part of a public-private key pair to which the service's server holds the "public" key.
This is Apple's explanation of how passkeys work.
From that article (with my emphasis):
Passkeys are built on the WebAuthentication (or "WebAuthn") standard, which uses public key cryptography. During the account registration process, the operating system creates a unique cryptographic key pair to associate with an account for that app or website. These keys are generated by the device, securely and uniquely, for every account.
One of these keys is public and is stored on the server. This public key is not a secret. The other key is private and is what is needed to actually sign in. The server never learns what the private key is. On Apple devices that support Touch ID or Face ID, these authentication methods can be used to authorise use of the passkey, which then authenticates the user to the app or website. Shared secrets are not transmitted and the server does not need to protect the public key. This makes passkeys very strong, easy-to-use credentials that are highly phishing-resistant. And platform vendors have worked together within the FIDO Alliance to make sure passkey implementations are compatible cross-platform and can work on as many devices as possible.
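The challenge/response flow that the article describes can be sketched in a few lines. This is only a toy illustration - real passkeys use elliptic-curve signatures generated inside the Secure Enclave, not the tiny textbook RSA key pair invented here - but it shows why the server never needs to hold a secret:

```python
# Toy sketch of the challenge/response flow behind passkeys (WebAuthn).
# Real passkeys use elliptic-curve signatures inside the Secure Enclave;
# textbook RSA with tiny primes is used here purely so the example is
# self-contained. Never use keys this small for anything real.
import hashlib

# "Registration": the device generates a key pair and sends only the
# public part (n, e) to the server. The private exponent d never leaves.
p, q = 1000003, 1000033                 # toy primes
n = p * q
e = 65537                               # public key, stored on the server
d = pow(e, -1, (p - 1) * (q - 1))       # private key, kept on the device

def sign(challenge: bytes) -> int:
    """Device side: Face ID / Touch ID authorises use of the private key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: checks the signature using only the public key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

# "Sign-in": the server sends a fresh random challenge, the device signs
# it, and no shared secret ever crosses the wire.
challenge = b"server-nonce-1234"
assert verify(challenge, sign(challenge))
assert not verify(b"different-nonce", sign(challenge))
```

Because the server stores only the public key, a breach of the server leaks nothing that lets an attacker sign in - which is the phishing resistance the article is describing.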
How does that prevent a stolen image of the owner being presented to the phone to unlock it? Can they reliably distinguish between that and a live human face? Similarly for fingerprints or retinal patterns, what stops recorded images being used?
Being able to fool face unlock with a photo, or fool a fingerprint reader with a 3D-printed fingerprint created from a scan, is another matter. I wouldn't say that phones are now immune from that type of attack, but it isn't as easy as it once was.
More importantly, though, bear in mind that even if a particular phone would be easily fooled by a photo or fingerprint image, the "malicious actor" must first get hold of that phone.
Face ID matches against depth information, which isn't found in print or 2D digital photographs. It's designed to protect against spoofing by masks or other techniques through the use of sophisticated anti-spoofing neural networks. Face ID is even attention-aware, and Face ID with a mask will always confirm attention. Face ID recognises if your eyes are open and your attention is directed towards the device. This makes it more difficult for someone to unlock your device without your knowledge (such as when you are sleeping).
Photos do not show depth. They won't show that you blink. The system also uses an infrared light, which presumably detects heat (confirms you're not dead).
Again, this is only Apple - you will need to do your own research regarding other brands.
Apple has similarly taken care with its Touch ID.
That article does not tell the reader an awful lot about how it works, but I am pretty confident it has been improved greatly from when it was first introduced and could be fooled by a model of a fingerprint. Even in 2013 the process to fool the sensor was rather complex.
It uses the infrared light to capture an infrared-based image of your face. With both the visible light and infrared light images it "transforms the depth map and infrared image into a mathematical representation and compares that representation to the enrolled facial data". That is why wearing a mask or makeup, having grown some facial hair, or even wearing sunglasses can still allow Face ID to work.
"Each time you unlock your device, the TrueDepth camera recognises you by capturing accurate depth data and an infrared image. This information is matched against the stored mathematical representation to authenticate."
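Apple does not publish the matching maths, but a common pattern in biometric systems is to reduce each capture to a fixed-length embedding and accept it when it is close enough to the enrolled template. A hypothetical sketch (every vector and the threshold are made up for illustration):

```python
# Illustrative only: Apple does not publish Face ID's matching maths.
# A common pattern in biometric matchers is to reduce each capture to a
# fixed-length embedding and accept it when it is close enough to the
# enrolled template. Every number below is made up.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.95  # hypothetical acceptance threshold

def matches(enrolled, capture):
    """Accept the capture if its embedding is close to the template."""
    return cosine_similarity(enrolled, capture) >= THRESHOLD

enrolled = [0.9, 0.1, 0.4, 0.3]         # hypothetical enrolled template
same_person = [0.88, 0.12, 0.41, 0.28]  # a fresh capture of the owner
someone_else = [0.1, 0.9, 0.2, 0.7]     # a different face

assert matches(enrolled, same_person)
assert not matches(enrolled, someone_else)
```

Fuzzy matching like this is why a beard or sunglasses can still pass: the new capture does not have to be identical to the template, only close to it.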
It has been known for a while that there are differences in texture between the red and green colour channels of a printed or scanned image of a person and those of a live person.
Whether that is used in the iPhone is not made clear but could well be part of the system.
These are commendable properties of an implementation ... but none of those statements is verifiable.
And what would happen if Apple is served with a superinjunction by a FISA court? (Or the equivalent under Australian law.)
So from the sounds of it, and with reference to the original question, this is just Apple being Apple. You may not be able to choose to use another authentication method, even if you understand the pros and cons.
What need is there of theft? In this day and age of ubiquitous surveillance, your image is everywhere.
As a fun example, it is well known that criminals install a surveillance camera on an ATM in order to capture your PIN, while the bank installs a surveillance camera on the ATM in order to capture your face. Criminals could easily change their approach to grab your face instead. So that's two copies straight off the bat - live, close-up facial images.
Supermarkets install a surveillance camera on the self-checkout - again live and fairly close-up.
If you want to go full paranoid, you need to build your own computing device, including designing and fabricating the chips, developing all the firmware and software, and anything that connects to it. Good luck.
For the rest, there's Mastercard. Oops - I mean, there's a degree of trust that we have to assume in order to function in the modern world.
The "Learn more..." link that can be accessed on the phone prior to installing the update has only 8 lines of text and provides very little information about the update. In my ignorance, I didn't even know whether, once installed, Face ID or Touch ID can be turned off. It can be, according to my wife's updated phone; there are 8 screens of information available once the iPhone has been updated.
It seems that the use of Stolen Device Protection and Security Delay is a reasonable option, or am I mistaken?
Yes, it is worth enabling that option, and the Find My function. If someone steals your phone, it will be harder for them to break into it, and you have more time to notice the loss and use Find My to lock the phone remotely.
If it is enrolled in the Find My app, an iPhone can be erased remotely if need be. iPads can also benefit from this service; I can't speak to Macs but assume it is the same.
While this may not result in the return of the device, it helps to secure the user's information. Another benefit of the app is that a lost device can be marked as lost, and if an Apple device passes nearby (and the lost device has power), the location is recorded and sent as a notification to Apple and to the user, provided they have access to the app or iCloud on another device. I have used this lost function: my device was located and returned to me. I locked it and used the custom message function to provide some contact details.
If the app isn't enabled it is far more difficult to recover a lost or misplaced device. I highly recommend enabling it.
It has been known that a company (or a politician for that matter) will make a statement where:
the statement is the exact opposite of the truth (say one thing, do the opposite)
the statement is an exaggeration or distortion of the truth
the statement omits important details from the truth (possibly in the interests of getting a 10 second sound bite)
the statement furthers their interests, not yours
and that's putting aside the unusual possibility (limited under Australian law to technology companies) that the statement that a company makes is knowingly false but they are being forced to make it by the government.
Any product claim by the manufacturer should be independently verifiable. Otherwise you are saying "trust me" about the product claim.
This is doubly true of biometrics because, by being "forced" to use biometrics, you are being forced to use the same "password" everywhere. If one falls, they all fall. Maybe Apple does have the resources and the expertise to do this properly but Apple will be just as broken if some other biometrics implementation proves not to be as robust (and, in any case, see later).
If you ask me, before you are allowed to handle biometric information, you should have to be independently verified / audited for how you handle that information, right down to the minutest detail. I don't realistically expect the government to follow up on that.
So to come right back to the original question:
Apple is imposing an artificial restriction on you. There is no inherent reason why you couldn't use the new functionality without using biometrics.
I would push back. Don't let Apple dictate terms to you. The market has to be sent a message by customers. If you cave in to their artificial restriction then you are missing an opportunity to send a message to Apple. However this is typical of Apple's "walled prison" / "we know best" approach.
Relying solely on biometrics, where that is the case, also gets away from the idea of multi-factor authentication, i.e. that the compromise of one type of credential does not lead to compromise of the device. So even if I ever enabled "something that you are" on a device, I would only do so in conjunction with "something that you know" or "something that you have".
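The multi-factor idea can be sketched as a simple rule: the presented credentials must span at least two distinct categories. The category table below is purely illustrative, not any particular vendor's scheme:

```python
# Hypothetical sketch of the multi-factor idea described above: require
# credentials from at least two *different* categories, so compromising
# one category is never enough on its own. The table is illustrative,
# not any particular vendor's scheme.
FACTOR_CATEGORY = {
    "password": "something_you_know",
    "pin": "something_you_know",
    "hardware_token": "something_you_have",
    "phone": "something_you_have",
    "fingerprint": "something_you_are",
    "face": "something_you_are",
}

def is_multi_factor(presented):
    """True if the presented credentials span two or more categories."""
    categories = {FACTOR_CATEGORY[c] for c in presented}
    return len(categories) >= 2

assert is_multi_factor(["fingerprint", "pin"])    # two distinct categories
assert not is_multi_factor(["password", "pin"])   # both "something you know"
```

Note that two credentials from the same category (a password plus a PIN, or a fingerprint plus a face) do not count as multi-factor, which is exactly the objection to biometrics-only unlock.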
It is also important to balance security measures against the threat faced. Pretty much everyone faces the threat of their device being stolen by a vanilla criminal, or of losing the device through misadventure. For that threat, you only need to protect the information on the device (for which disk encryption is adequate) and ideally protect yourself against the cost of losing the physical device (partially mitigated by keeping a separate record of the IMEI).
Easier said than done but parts of the problem have been solved.
A "secure enclave" is a double-edged sword.
On the one hand, it makes it more difficult for a compromise of one part of the system to spread to others and e.g. steal sensitive material such as biometrics, keys or passwords.
On the other hand, it makes it more difficult / impossible for anyone to audit anything about how it works - so bugs are less likely to be discovered by security researchers. But this is really "security through obscurity". This is also anathema to open source, since even the machine code (binary) may be obscured (encrypted).
In 2020, security flaws were discovered in the SEP (Secure Enclave Processor), causing concern about Apple devices such as iPhones.
As an aside, a secure enclave can be implemented in one of two ways:
a separate, dedicated, cryptographic processor (and that in turn can either be integrated, on-chip or physically isolated in a separate chip) - it looks as if Apple is using the "integrated, on-chip" approach for the separate processor - which makes it even harder to verify that it is secure
execute on (one of) the main CPUs - I believe this is Intelās approach and several bugs have been found
Curiously, Intel has deprecated its secure enclave implementation (known as SGX, Software Guard Extensions), at least for consumer-level Intel CPUs. (This doesn't affect Apple of course, because they engineer their own ARM architecture chips - having started off not using Intel CPUs, then using Intel CPUs for a few years, and now having moved away from Intel again.)
As noted elsewhere, the more you lock stuff up in silicon, the harder it is to fix when the inevitable security bug is discovered.
Any claim about system-level security is intrinsically unverifiable when looking at an individual manufactured device or version of software. No independent audit will be able to look at absolutely every possible security risk; it can only focus on whether security principles have been adequately applied.
Yes, Apple is relying to some extent upon security through obscurity, which tends to be A Bad Thing - as Kaspersky has demonstrated (with some difficulty).
On the bright side, Apple did introduce devices for security researchers a few years ago. These have reduced restrictions, and allow the researcher (who must already be vetted by Apple and report their findings to Apple before anyone else) to dig a little deeper than previously possible.
I must have missed something. Is it possible to design and create your own silicon?
We definitely have not solved the problem of complex code without unintended "features" or "bugs". "Provable" code to date has extremely limited application.
You're going a bit further than I was. I just wanted product claims to be independently verifiable. (So even undetected bugs are OK, provided that the bug is not picked up in the analysis by the independent verifier. The point is that such independent verification of the product claim becomes possible.)
However an assertion by Apple that "they proved mathematically that their enclave is secure" would just be more "trust me". They would have to release both the proof and the code, otherwise it's just another unverifiable product claim.
Ironically, because of the severely limited functionality of a separate dedicated processor for a secure enclave for the purposes that we are discussing here (storing sensitive data securely), it might actually be possible to provide such a mathematical proof.
All that said, provably secure code can just move the bug somewhere else, i.e. the proof itself can have errors in it (which doesn't necessarily mean that the code has errors in it).
To add further irony, plenty of software and hardware makers have implemented āsecurityā and āprovably secure encryption algorithmsā insecurely. It is very easy to use the mathematically secure tool and mess up the implementation.
Never trust anyone who says "military-grade encryption", as this is meaningless in itself.
This has been shown recently, as one of the approved "quantum-secure" algorithms has an error in its implementation that permits timing attacks. Fortunately it is an implementation error rather than a fundamental one.
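The kind of timing leak at issue can be seen in something as simple as comparing a secret tag with ==, which exits at the first mismatched byte. A sketch of the naive versus constant-time approach:

```python
# A classic example of "secure algorithm, insecure implementation":
# comparing a secret tag with ==, which returns as soon as the first
# byte differs and therefore leaks information through timing.
import hmac

def naive_check(expected: bytes, provided: bytes) -> bool:
    # Early-exit comparison: the running time depends on how many
    # leading bytes match, which an attacker can measure.
    return expected == provided

def constant_time_check(expected: bytes, provided: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so timing does not reveal how close a guess is.
    return hmac.compare_digest(expected, provided)

tag = b"\x12\x34\x56\x78"
assert constant_time_check(tag, b"\x12\x34\x56\x78")
assert not constant_time_check(tag, b"\x12\x34\x56\x00")
```

Both functions return the same answers; the difference is only in how long they take to return them, which is exactly the side channel a timing attack exploits.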
(For any nerds in the community, two of the quantum-safe algorithms are named, respectively, CRYSTALS-Kyber and CRYSTALS-Dilithium.)