The researchers are much cleverer than the computer criminals but, once the exploit is public, the computer criminals suddenly don’t need to be as clever.
There are two aspects to that.
- Outright release of proof-of-concept code by the researchers.
- Reverse engineering the exploit from the fix once a patch is released. (This requires greater cleverness, but still nowhere near as much as finding the vulnerability in the first place.)
Don’t forget that the computer criminals include a lot of governments, who are well-resourced and who employ lots of clever people.
That depends on what your metric for “worst” is.
If your metric is “largest number of computers on the planet that are vulnerable” then for some Speculative Execution defects, I think the answer is yes, for two reasons …
a) the vulnerability cuts across CPU implementations, e.g. Intel and AMD and ARM (is that 99% of all general-purpose computers on the planet?)
b) a vulnerability at the CPU level means that all operating systems are vulnerable, e.g. Windows and Linux and macOS.
In the early days of this vulnerability, even home computers that weren’t doing anything silly were still vulnerable.
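For concreteness, the reason these bugs cut across operating systems is that the flaw lives in the hardware’s speculative execution, not in any OS code. A minimal sketch of the classic Spectre variant-1 (bounds-check bypass) gadget pattern looks roughly like this in C; the names and sizes below are illustrative, loosely following the original Spectre paper’s example, not any particular product’s code:

```c
#include <stddef.h>
#include <stdint.h>

#define ARRAY1_SIZE 16

uint8_t array1[ARRAY1_SIZE];
uint8_t array2[256 * 512]; /* probe array: one cache line per possible byte value */

uint8_t victim_function(size_t x) {
    /* Architecturally this bounds check is perfectly safe. But a
     * mistrained branch predictor can speculatively execute the body
     * with an out-of-bounds x, and the dependent load into array2
     * leaves a cache footprint indexed by the secret byte array1[x].
     * An attacker later recovers that byte by timing accesses to
     * array2 — a side channel invisible to the OS, which is why every
     * operating system on an affected CPU was exposed. */
    if (x < ARRAY1_SIZE) {
        return array2[array1[x] * 512];
    }
    return 0;
}
```

Note that the code itself is bug-free by ordinary standards; the leak happens entirely in the microarchitecture, which is also why the fixes had to come as compiler and kernel mitigations (fences, index masking) rather than a simple source-level patch.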
If your metric is “cost” then maybe not, but
a) it is very hard to ascertain the full cost of real-world exploits, because hackers don’t tend to register the event with the police or place ads in the newspaper, and
b) much of the “cost” was prevented by focused and determined generation and application of patches. (Generating patches is itself a cost, and also an opportunity cost: while e.g. Linux kernel and compiler developers were tied up generating band-aids by the thousand, they weren’t doing something positive.)
Yes, new exploit variants are still being found, so we can’t even call time on this one yet.
In the end, “worst ever” bug will be seen to be “puffery”, a claim never intended to have an objective, agreed metric.