What's the point of the two-factor verification code?

The original purpose of the password was to prove to a system that you were the owner of a user ID. It was a secret that only you should know, and when asked to provide that secret, you could provide it.

The model was: something that identifies you uniquely, plus a secret that only you should know. 2FA.

Passwords are just not fit for purpose anymore. In many cases they are easily guessable, and often they are no longer really secret at all: they are stored in browsers or password managers, so that the user no longer has to know the secret.

So we move to a different authentication model. Add in something you physically possess, and that nobody else can possess as long as you have it: a code-generating token for one-time passwords, a fingerprint, or a face scan. Keep the old-style password as well and you have MFA. Ditch the password and you have 2FA of a different type.
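The two factors can be sketched in code. This is an illustrative sketch only (the function names and the 100,000-iteration PBKDF2 setting are my own choices, not any particular system's): the "something you know" is checked against a salted, deliberately slow hash, and the "something you have" is the one-time code a token generated, with constant-time comparisons in both cases.

```python
import hashlib
import hmac
import os

def hash_password(password, salt):
    # "Something you know": store only a salted, deliberately slow hash,
    # never the password itself
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def verify_login(password, submitted_code, stored_salt, stored_hash, expected_code):
    knows = hmac.compare_digest(hash_password(password, stored_salt), stored_hash)
    # "Something you have": the code the token (or phone app) generated this period
    has = hmac.compare_digest(submitted_code, expected_code)
    return knows and has  # both factors must pass

salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)
print(verify_login("correct horse battery staple", "492039", salt, stored, "492039"))  # True
print(verify_login("wrong password", "492039", salt, stored, "492039"))                # False
```

Note the use of `hmac.compare_digest` rather than `==`: a plain comparison can leak timing information about how much of a guess was correct.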

2 Likes

I absolutely agree that badly chosen passwords are easily guessable, just as RSA doodads are quite insecure if left unattended on a coffee-shop table. Bad practice defeats even the best of systems. That being the case, if we have to allow bad practice to be the norm, are there any system options that defeat bad practice? Are there any system features that cannot be screwed up, no matter the ill will, ignorance, or downright stupidity of the user?

Yes, I agree - who with knowledge of ICT history could do otherwise? - that some people choose and manage their passwords badly. But that is not so for everyone. So far as I personally am concerned, only one of my very many passwords has been compromised (that I know of), and that was through the hacking of an online database. In the meantime, Google keeps trying to send me 2FA PINs to whatever phone I am using to log into my account. With friends like that, why have enemies?

So: do you have quality evidence that the use of passwords by itself causes poor security behaviour? That is to say, if a careful, clued-up user starts using passwords, do they suddenly and instantly turn into a security disaster? More particularly, do you have quality evidence that passwords invite or provoke worse behaviour than the alternative security options?

1 Like

I am talking about authentication, that is verifying who you are. Not about security.

A key can open the door to your house. It says nothing about whether you are the rightful owner of the house being entered. Maybe there is an assumption that there is only one key and only the owner has that one key. Really?

Same with passwords stored on a browser. Are you the only user of that device?

Right. Authentication, not security. Hmmm. (Mind wanders amongst birth certificates, drivers licences, and other forms of evidence that I am me - and the possibility that both of us might be the product of identity theft. And if we are both the result of identity theft, and hence fake, who cares whether either of us can prove that our computers or phones are real?)

I think it might be helpful if you would be so kind as to explain exactly how you can prove which user amongst many is using a given computer. In the days of morse code, expert users could recognise each other by their ‘fist’ - signalling style - but typing on a computer? And anyway, why is this important, if it has nothing to do with security?

A story: over the years, I have had dealings with many people with connections to ASIO and other places with strange sets of initials. One had a laptop - apparently of a normal, everyday sort - that, he said, was set up to phone straight home to Spook HQ, automatically sending a secret signal to tell the people at the other end that it was who it said it was. To have that signal sent, all he had to do was turn on the power and open the cover. Indeed, no matter who opened the cover, Spook HQ would know that the laptop had come to life. So how did he tell Spook HQ who exactly had opened the cover? The answer included entering a password. And why did they care? Call it authentication, call it security, or call it Shirley; it all came down to the same thing: avoiding trouble.

Already explained in my previous post. By using something that only you possess.

Biometrics: fingerprint or face scan. Or an individual secret password that is stored in your head and nowhere else.

1 Like

OK, thank you; problem solved. With a sophisticated enough biometrics system, such as might guard Fort Knox, I reckon that this will work a treat. If a password is involved, don’t forget to make it at least 20 - preferably 30 - characters, and change it every year. On the other hand, if you include DNA and iris scans in your biometrics, you probably won’t need a password at all. Just try not to drop dead before you close the account, because if you are the only person with access to it, it will remain locked forever.

And so, in answer to the original question of this thread, “What’s the point of the two-factor verification code”, we have our answer: none! With strong biometrics, or a great, ever-changing individual password in your head, nothing else is needed.

Case closed.

Not quite yet case closed.

Now this is just talk at this stage, not reality.

Assume it’s keen on transitioning to using passkeys instead of ……?
Not new technology. But are we (consumers) ready for it?
Passkeys lock your ID to a device and the device locks itself to your ID?

Lots of unanswered questions about the practicalities for 100% of Australians. A further question: how will biometrics such as Face ID be used?

Noted there is an expert panel of esteemed professionals advising on the way forward. Perhaps a Vinnies CEO Sleepout should be a mandatory part of their brief?

1 Like

Well certainly I am not. My mobile phone doesn’t have a fingerprint scanner. And no way am I going to use facial recognition as a login.
I prefer my access to online systems completely free of such stuff.

However, I am a big fan of system-generated, time-limited, one-time passwords rather than the venerable old user-selected and user-maintained passwords.
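A minimal sketch of how such a system-generated one-time password works - this is the standard RFC 6238 TOTP construction used by common authenticator apps; the 30-second period and six-digit code are the usual defaults:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6, now=None):
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of whole periods since the Unix epoch,
    # so the code changes every `period` seconds
    counter = int((time.time() if now is None else now) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): take 4 bytes at an offset given by the last nibble
    offset = digest[-1] & 0x0F
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59 s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, now=59))  # "94287082"
```

Both sides hold the same secret key; the server simply computes the same code for the current time window and compares. Nothing the user has to remember or choose, and a stolen code is worthless half a minute later.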

Also a big fan of public key authentication. Passkeys are an implementation of that idea, but not quite there yet despite a lot of work and push from many big players in the online world.
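The idea behind public key authentication can be sketched with toy textbook RSA - tiny primes, for illustration only; real passkeys use proper cryptographic libraries, padding, and much larger keys. The server sends a fresh random challenge, and the device proves possession of the private key by signing it; no shared secret ever crosses the wire.

```python
import secrets

# Toy textbook RSA with tiny primes - illustration only, nowhere near secure
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (stays on the device)

def sign(message, priv, modulus):
    # The device "signs" the challenge with its private key
    return pow(message, priv, modulus)

def verify(message, signature, pub, modulus):
    # Anyone holding the public key can check the signature
    return pow(signature, pub, modulus) == message

challenge = secrets.randbelow(n)           # server picks a fresh random challenge
signature = sign(challenge, d, n)          # device answers with a signature
print(verify(challenge, signature, e, n))  # True: possession proven, no secret sent
```

Because each challenge is random and used once, a recorded exchange cannot be replayed - which is the property passwords, even one-time ones sent over the wire, struggle to match.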

2 Likes

No doubt being talked about by the same mob as dreamed up Robodebt. My confidence in Government ICT systems designed for public use is about zero - or, at least, in any system aimed at the public groups that deal with Centrelink. A mere glance at the list of panel members chosen to guide this project fills me with dismay.

An excellent idea :bulb:. Also, a sleep out at some computer classes for seniors and people with disabilities; to discover just how things work outside the Silicon Elite.

2 Likes

I use a couple of different forms of 2FA. Generally the first factor is something I know (a password) and the second is something I have (an authentication program on my phone, or my face proving that I am who I claim to be).

While I could keep all my passwords on my phone, I am paranoid so don’t. Regardless, this would fulfil the two factors as the phone would be something I have and the password for the password manager would be something I know.

FIDO2, coming soon to an Internet near you, merges these two factors using some nifty cryptographic gubbins. All the big tech companies are buying in bigly, but at the moment there is no cross-compatibility, so, for instance, you can’t use your Android vault on an Apple phone.

I am a fan of 2FA for the moment, and (in the future) of dropping passwords in favour of proving who I am/what I have. Regardless of how you manage your passwords, the weakest link is the user.

1 Like

True story!

The job of creating a secure e-system is done by an engineer. There are two types of engineers: software engineers, who handle the code, and social engineers, who handle the users. Mostly the media talks about the black-hat social engineers, who attempt to trick users into making a security mistake, such as divulging their passwords. There are, however, white-hat social engineers who do the opposite: they figure out how to make it hard for users to do the wrong thing. A hard job, being this kind of social engineer: it involves designing a system that even people with faulty memories, short attention spans, and a disinclination to follow rules can’t muck up.

1 Like

The scammers are becoming more adept at impersonation, as well as at breaking into systems. They may in many instances be aided by human failings. Isn’t there also a technical component (of variable rigour and quality) in the attempts at impersonation?

I continue to receive messages (SMS) and emails from those businesses and services I regularly use containing embedded links. In some instances the links are necessary to complete an action. Are the white-hatted social engineers out to a long lunch at the Pig & Whistle?

4 Likes

As you said earlier, the weakest link is the user.

A user who doesn’t understand the reason for a procedure, and/or finds it difficult, fiddly, time-consuming, etc., is likely to try to cut corners. That’s yet another challenge for the social engineer: try, as far as possible, to make secure procedures simple and easy for an average end user to understand and comply with.

1 Like

It appears the scammers are quite adept at those techniques!

1 Like

Yes - the ‘white hat’ social engineers need to study and emulate scammer techniques … :wink:

Many of the worst breaches come about because someone senior ‘didn’t want the hassle’ of all that security and ignored the rules - presumably because they only apply to the peons.

1 Like

I’m sorry; I wouldn’t know. The Pig & Whistle has been booked out solid for years now. Possibly by white-hat social engineers, now that you come to mention it. :grinning:

There are, of course, a few of them sculling around, if one looks hard enough. Here’s how to recognise a good white-hat social engineer (aka WHSE), gleaned from some 20 years of studying their writings:

  • Obviously, being white-hat, they are out to help, not hinder, other people; and, being engineers, they are into the design of systems;
  • Systems have to be used by other people, not just themselves, so they rarely if ever talk about their own personal preferences. Rather, they focus on the needs of other people. In effect, their aim is to create systems that other people will find easy to use and, if at all possible, enjoyable to use;
  • Every man and his dog has an opinion about other people, what they are capable of, and what they enjoy doing, but good WHSEs aren’t interested in opinions; they like evidence. So they read relevant research papers, and conduct tests, where possible using best scientific procedures such as sample groups and double-blind administration. So they end up recommending system features not because they are fashionable, or marvellously gimmicky, or personally popular, but because they are proven to work (meaning: not just for themselves, but for other people).

So why do I think good WHSEs are a rare breed? If one looks around the Internet, one can find lots of people offering advice about safe surfing and good Internet practice. Actually, lots and lots of it. Websites that offer login facilities generally have rules - passwords with certain kinds and numbers of characters. Forums and blogs where people pontificate on the best ways of doing things. But rarely is there any consistency, and even more rarely is there any reference to research or evidence. By and large, it is a potpourri of random opinion, mostly ill-informed and half-baked.

Within this potpourri, however, can be found a small group of people who focus on systems, not methods or gadgets; on other people’s preferences and abilities, not their own; and on research and evidence, not just guesswork or personal preference. These are the good WHSEs. And, because they base their work on research, what they say is remarkably similar, and remarkably consistent, and has been for the past 20 years.

So what have they been saying for the past 20 years?

Their first message is that an absolutely safe system is an impossibility. There is, and always will be, a vulnerability hard-baked into every system that connects to the Internet. And doubly so if people interact with it. If humans have access to the system, then sooner or later it will become compromised. This is not a design fault, but rather a design feature. Live with it.

The second message is that safety is best planned for by making human access methods part of the system - not, that is, some tacked-on procedure. That means providing suitable instructions, training, supervision, and all the other bits and pieces that go into a human system. It means not doing what Government Departments like to do: treating human users as hostile, alien elements that must be protected against.

The third message begins with the idea behind two-factor authentication (2FA): something I have, and something I know. The ‘something I have’ needs to be baked into the system in some way that is essentially untouchable by the user (e.g. a certain phone number); the ‘something I know’ needs to be something unguessable, or at the very least hard to guess, by other people, and also unique to a given system; and also ever-changing, to prevent brute-force attacks. Oh, and all of that must happen within human brains, which evolution did not design to be able to do it. So the rule is: the greater the required level of security, the more complex the login and stay-logged-in systems must be, and hence the greater the level of training needed.

Any Internet-connected system designed without reference to individual human abilities and preferences is wide-open vulnerable; likewise any system that expects humans to behave in certain ways without comprehensive and effective training.

3 Likes

The good ones do, and always have. The ineffective ones get promoted to manager?

1 Like

Your incisive list of WHSE qualities makes it immediately obvious why they’re rare. Real saints are. :wink:

Yes.