Meltdown and Spectre: ‘worst ever’ CPU bugs affect virtually all computers

I have said it before and will say it again - the threat here is to servers rather than to end users. If you have malware on your machine you have already lost the battle. It should not matter whether you use virtualisation, unless you rent out bits of your computer to other users who may be malicious.

The other thing to note is that disabling the functions affected by these issues greatly reduces your computer’s performance - the figures I have been hearing are 20% or more.

While in general I am extremely keen to ensure systems are secure, there are occasions when the benefits are not worth the pain. This is, I suggest, one of those unless you are responsible for maintaining web-facing servers.

2 Likes

TSX support can simply be disabled, with little or no performance hit. This is probably more practical for a home user, as most don’t use Hyper-V anyway.
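If you want to check whether TSX is actually present (or has been switched off by a microcode/registry change), the CPU reports it through CPUID. Here is a minimal sketch in C, assuming a GCC or Clang toolchain on x86 - once TSX is disabled in microcode, the HLE/RTM bits typically read as clear:

```c
/* Sketch: query CPUID leaf 7 for the TSX feature bits.
 * Build: gcc tsx_check.c -o tsx_check   (GCC/Clang on x86 assumed) */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID.(EAX=7, ECX=0): EBX bit 4 = HLE, EBX bit 11 = RTM. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 7 not supported");
        return 1;
    }
    printf("HLE (hardware lock elision): %s\n",
           (ebx & (1u << 4)) ? "present" : "absent or disabled");
    printf("RTM (restricted transactional memory): %s\n",
           (ebx & (1u << 11)) ? "present" : "absent or disabled");
    return 0;
}
```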

2 Likes

Except where an end user is using some kind of sandboxing.

A special case of that - applicable to a home user - would be running Microsoft Windows in a VM for one legacy application that you can’t get rid of, while running everything else in Linux in a separate VM. Some of these vulnerabilities have the potential to break that separation. However, there are now so many speculative execution vulnerabilities that I wouldn’t like to say which ones can break out of a VM and which ones can’t.

Needless to say, in this day and age a threat to a server is a threat to an end user. A very large amount of our private data is stored on servers, and our employment may depend on their correct operation. So while it may not be our problem to fix, it is still our problem.

I don’t think any users really have the option of not installing the fix. If you insist on staying on the current Intel microcode then sooner or later you will be forced to upgrade by some other vulnerability or bug that really does directly impact your home computer, even in simple usage scenarios. It is not as if you can fork or branch Intel’s microcode so that you can pick and choose which fixes you take.

2 Likes

It looks like TSX (Transactional Synchronization Extensions) is used for coordinating work across multiple cores. If you have a highly parallelised algorithm, designed to use lots of cores simultaneously, and correct operation of the algorithm requires frequent synchronisation between cores, then you might notice the hit. For general tasks, probably not.

This kind of algorithm is often suggested as being the way of the future, as progress on the speed of an individual core flatlines.
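For a feel of what that synchronisation looks like in code, here is an illustrative sketch of TSX’s RTM path: attempt a hardware transaction first, and fall back to a conventional lock if it aborts. The names (shared_counter, fallback_lock) are hypothetical, it assumes a CPU with TSX still enabled (it will fault on one without), and a production implementation would also read the fallback lock inside the transaction:

```c
/* Sketch of TSX lock elision.  Build: gcc -mrtm -pthread elide.c */
#include <immintrin.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

static void increment(void)
{
    unsigned int status = _xbegin();     /* start a hardware transaction */
    if (status == _XBEGIN_STARTED) {
        shared_counter++;                /* runs speculatively, lock-free */
        _xend();                         /* commit */
    } else {
        /* Transaction aborted (conflict, capacity, ...): take the lock. */
        pthread_mutex_lock(&fallback_lock);
        shared_counter++;
        pthread_mutex_unlock(&fallback_lock);
    }
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        increment();
    printf("counter = %ld\n", shared_counter);
    return 0;
}
```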

2 Likes

Yes, but in the case of a server you are referring to one of those multitudinous threats that the average person can do nothing to diminish. Patching this fault on your home computer will not reduce the threat to you.

Again, if you have bad software on your machine you are already in major trouble. The speculative execution set of bugs - I mean intentional design decisions that turned out to have major flaws - does nothing to change that, except that exploiting them is harder work for a black hat than using the other tools already installed and at their disposal on your machine.

2 Likes

Almost everyone runs a small server: the router is one, and it may also be a Home Theatre (HT) system. We are not immune, just not always the juiciest target - unless the hackers are building a botnet, in which case any machine that can access the net is a target. I use both Hyper-V and VirtualBox for VMs (I have several). I have a server in the house for HT, which is also an Exchange server; it runs Windows Server 2016 LTSB at the moment, upgrading to Server 2019 in January. Others use Linux, which isn’t as affected unless they virtualise Win 10, Win 7 or similar. These bugs affect me and a number of others I know.

3 Likes

It is not so much whether it is a server as whether there are multiple users of the computer who are not all mutually trusting.

This reaches its worst when you have a cloud service handing out VMs to all comers - your (virtual) server is running on the same physical hardware as those of other people and companies about whom you know absolutely nothing.

A home server is not so bad. Usually all the users know each other and trust each other.

A work server is not so good, because usually not all the users trust each other - i.e. some content is supposed to be kept secret from some users, but speculative execution flaws may allow a ‘low’-level user to access ‘high’-level content.

2 Likes

There is some discussion, though no PoC yet, that RDP or telnet access could be enough to load the ZombieLoad (ZL) and ZombieLoad 2 MDS attacks. What we currently see is that it is very difficult, but if you use a cloud server service you are likely open to attack. Besides, just because there is no publicly disclosed PoC doesn’t mean nobody has used the technique in malware… the NSA isn’t always keen to advise of workarounds/holes it has found, and I’m sure the same could be said of a number of state actors.

Of course, with the microcode updates ZombieLoad has largely been put to bed, but ZombieLoad 2 still works even with the microcode. AMD is not subject to these vulnerabilities, and Intel really needs to fix these holes in hardware, as the current software fixes mostly destroy performance. The 10-series CPUs were thought to be impervious to ZombieLoad 2, but they aren’t - and they were only recently released, which makes it all the more worrying that no architectural changes were made to mitigate these attacks. Microsoft evidently sees even running a VM on a local PC as a risk, which is why it has suggested disabling the TSX extensions for Hyper-V VMs.
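On Linux at least, you can see the kernel’s verdict on exactly these two attacks without guesswork - it publishes the mitigation status in sysfs. A minimal sketch (the sysfs paths are the real kernel entries; everything else is illustrative):

```c
/* Sketch: report the kernel's view of MDS (ZombieLoad) and
 * TAA (ZombieLoad 2) mitigation status on Linux. */
#include <stdio.h>

static void show(const char *path)
{
    char line[256];
    FILE *f = fopen(path, "r");
    if (!f) {
        printf("%s: not exposed by this kernel\n", path);
        return;
    }
    if (fgets(line, sizeof line, f))
        printf("%s: %s", path, line);
    fclose(f);
}

int main(void)
{
    show("/sys/devices/system/cpu/vulnerabilities/mds");             /* ZombieLoad   */
    show("/sys/devices/system/cpu/vulnerabilities/tsx_async_abort"); /* ZombieLoad 2 */
    return 0;
}
```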

2 Likes

It takes several years for a CPU to move from design lab to production. I would be surprised if Intel was able to make anything more than the most minimal of changes before shipping the 10 series. (Just look at the trouble the company is having moving past 14nm apart from a couple of mobile 10nm chips.)

1 Like

I would have liked Intel to add, in some generation, the ability for the user to disable speculative execution. We are almost two years down the track and are still being exposed to new exploits. I understand that the performance loss could be substantial, but for some people (particularly in the business / cloud space) the security and integrity of data should be more important than performance - and the performance comparison is largely against an illusion, achieved by compromising security. In any case, it would only be an option to disable it: users could still choose to trade integrity / security for performance (and home users might legitimately make such a choice). Having the option is insurance against next month’s speculative execution flaw, and the one after, …

(Disabling TSX is not addressing the underlying problem. It is playing whack-a-mole.)
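There is no blanket ‘speculation off’ switch, which rather underlines the whack-a-mole point: what Intel retrofitted instead is a set of narrow, per-technique controls (IBRS/IBPB, STIBP, SSBD) that the OS toggles via MSRs. A minimal sketch, assuming GCC/Clang on x86, showing how a CPU advertises them:

```c
/* Sketch: enumerate the speculation-control features from
 * CPUID.(EAX=7, ECX=0):EDX - controls the OS uses, not a
 * user-facing "disable speculative execution" switch. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 1;
    printf("IBRS/IBPB (indirect branch controls): %s\n",
           (edx & (1u << 26)) ? "yes" : "no");
    printf("STIBP (cross-thread branch predictor control): %s\n",
           (edx & (1u << 27)) ? "yes" : "no");
    printf("SSBD (speculative store bypass disable): %s\n",
           (edx & (1u << 31)) ? "yes" : "no");
    return 0;
}
```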

However as @postulative says, even 2 years might not be enough for significant design changes.

More and more businesses are doing this, even smaller ones.

True

2 Likes

I was once privy to a processor whose prototype had safe and dangerous modes; the differences were in speculative execution. Dangerous mode was found to be dangerous indeed, not because of security, but because it was so aggressive in chasing the last clocks of performance that it made execution errors, yielding incorrect answers.

Fixing dangerous mode took about 3 months from discovery to a fixed prototype. Incorrect answers are one thing - not to be done; obscure security holes are another, where companies get into risk-management territory. Performance suffered a few percent when dangerous mode was fixed. The production processors did not have the 2 speculative options and were rock solid as far as any testing or customer reports could document. Testing is what it is - usually statistical, not absolute, especially in very highly complex circuitry.

Intel, having a much larger volume than the processor I reference (which I will not identify), is clearly in profit-and-risk-management territory, having already fielded a dodgy divide and myriad security holes - so what is another? On the other hand, they are pressured each generation to deliver more of everything than the previous one, and that unfortunately seems to include ‘holes’.

3 Likes

Which of course reminds me of the Ford Pinto’s colourful history - and Ford’s failure to understand the risks associated with a cost-benefit analysis.

1 Like

Well, nearly two years on, what is the verdict? Were these bugs the worst ever? Did they affect virtually all computers? Was it all a storm in a teacup? If so, why?

1 Like

They affected, and still affect, nearly every PC out there - and as Apple uses Intel CPUs, even those. The fixes to date are mostly software, and so they impact the “speed” of processing. AMD is immune to ZombieLoad attacks because of the way it handles speculative execution at the hardware level. This is not an OS problem; it occurs well before that, in the CPU. Altering the OS and microcode tries to stop the issue early enough that the OS isn’t compromised, and so your data isn’t - but the fixes decrease the efficiency of the CPU, thus slowing down processing.
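One way to see that this sits below the OS: on Linux, the kernel prints, per CPU, the list of speculative-execution flaws it has detected in the silicon itself. A minimal sketch that pulls out that line:

```c
/* Sketch: print the "bugs" line from /proc/cpuinfo, which lists
 * the CPU's known hardware flaws (e.g. cpu_meltdown, spectre_v1,
 * spectre_v2, mds).  Linux on x86 assumed. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[1024];
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) {
        perror("/proc/cpuinfo");
        return 1;
    }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "bugs", 4) == 0) {
            fputs(line, stdout);
            break;              /* the line repeats for every core */
        }
    }
    fclose(f);
    return 0;
}
```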

Is it the worst? I guess we can say it hasn’t been fixed yet, so no real answer can be given. Added to this, new attacks are being generated around these faults even as some “old” holes are patched. It hasn’t ended yet, as no hardware fix has been implemented. We are busy plugging holes in a wicker dam with our fingers at the moment.

2 Likes

I wonder in some ways whether some of the more advanced exploits are out of the league of ‘bad people’ anyway - researchers may have overtaken computer criminals - who knows.

Either way, the thing people fear the most has probably already sold them out without their knowing … the known exploits may be just a convenient if unexpected diversion, causing people to debate only that which they are aware of …

4 Likes

Could be that the holes are sponsored by the NSA, who have the master keys :wink:

I don’t understand that. Is there no body of evidence showing how much harm has taken place over the two years? Why would fixing it now answer the question if it hasn’t been answered yet?

If it is such a problem, why hasn’t it been fixed? I can see Intel ignoring you and me, but if there were a substantial cost, surely the big buyers would hammer them until it was cleaned up.

1 Like

It has had an effect, but not everyone discloses the impacts. Is it the worst? That’s the hard part, but I would say it was one of the worst, if not the worst: it had gone on for years before the last two, when it became public knowledge. It is a known problem now, with some responses, but not everything has been fixed yet. New holes are being found publicly, and I am sure others are found but not publicly disclosed, because it suits certain actors to have their back doors - so no hammering is done. It isn’t just the vulnerabilities that are the problem, but the impact the fixes have on “productivity”. Will something worse come along? Possibly or probably, but as yet it has not been found - or perhaps not disclosed.

Why hasn’t it been fixed? My guess is money: the drive to sell at the least cost to the producer for the greatest profit. Intel and AMD want to make money, and fixes can be expensive to find and fund. If a business is happy to buy cheap, it will often accept some impact in exchange for the saving. Most of those who make the purchase decisions are not the ones ultimately concerned with security - which is often why we see poor security implementation, and thus breaches: cost versus risk. It is easier to apologise than to pay what is needed to protect. And nothing is perfect, so there is always some risk; most times they decide on cost.

1 Like

For those who might be a bit smug about finding hacks and the abilities of ‘NSA types’ in collusion with chip manufacturers, consider this:

A certain processor ‘responded favourably’ when:

1. Value [X] was loaded in register [X1]
2. Value [Y] was loaded in register [Y1]
3. Memory instruction [INSTRUCT1] was issued and went into speculative execution
4. Logical instruction [INSTRUCT2, for X and Y] was issued, causing a memory fault and a bank busy

The processor responded predictably, although not as a programmer might expect.

Not microcode, and not so easy to stumble upon - nor even to discover unless caught in progress or just lucky. An ultimate back door, if one can get compliant code running on one of ‘those processors’. Historic in this case, but is it?

1 Like

Just how far back do we need to go to get to a processor which did not expose the user?

Even so, exposure still came through the hand of the software writers?

And before that? If you had the key to the computer room, and you had the key to the disc safe, and you knew where the big black three-phase power switch was, and you knew the sequence required to toggle in the address of the boot loader, and …?

You probably had a head full of very long hair too, although that was mostly an optional requirement.

Spectre and Meltdown might be memorable. The worst of the worst bugs is probably the one that allows users to share, download, copy, click on or install anything on their home computing or smart device.

1 Like