Software Is Wasting Your Cores

Back in February, Steve Hogan made the case for getting a multi-core system. Rob Cheng’s experience, though, shows that dual-core systems aren’t always faster. A multi-core system certainly can outperform a single-core one, but you’re not likely to find a desktop operating system, or many desktop applications, that can take advantage of the extra cores. There are good reasons for that, and they aren’t going away any time soon.

Users like to measure performance by response time, the amount of time that it takes to get a particular thing done. Multi-core systems, on the other hand, are good at throughput; give them a big pile of unrelated things to do and they can take work off the pile in parallel, getting it all done much faster than a single processor possibly could. If a particular piece of work uses just one core, a multi-core system won’t improve response time.
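To make the distinction concrete, here’s a minimal sketch of that pile-of-work model in C with POSIX threads. The job count, the worker count, and the one-second do_job stand-in are all invented for illustration:

```c
/* Throughput vs. response time: four workers draining a pile of
 * independent jobs. The whole pile finishes much sooner, but any
 * single job still takes its full single-core time. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NUM_JOBS    8
#define NUM_WORKERS 4

static pthread_mutex_t pile_lock = PTHREAD_MUTEX_INITIALIZER;
static int next_job = 0;

static void do_job(int id)
{
    sleep(1);  /* stand-in for one second of real work on one core */
    printf("job %d done\n", id);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&pile_lock);
        int job = (next_job < NUM_JOBS) ? next_job++ : -1;
        pthread_mutex_unlock(&pile_lock);
        if (job < 0)
            break;          /* the pile is empty */
        do_job(job);
    }
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}
```

Eight one-second jobs finish in about two seconds of wall-clock time, which is the throughput win. But ask how long any single job took and the answer is still about one second; nothing got faster, there were just more hands on the pile.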

“Wait,” you might say, “why can’t the computer take the one thing I want done and split it into pieces so that each core can do part of the work?” Why indeed. Writing an application that way is probably an order of magnitude harder than writing software the way it’s done today, and it’s made harder still because programming languages and operating systems give developers few tools for splitting work into parts that can be distributed to multiple cores.
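Even an embarrassingly simple case, summing one big array, shows how much scaffolding the split takes when you do it by hand. This is only a sketch, again assuming POSIX threads, with an arbitrary array size and thread count:

```c
/* Splitting ONE piece of work -- summing an array -- across cores by
 * hand: carve out ranges, launch threads, collect partial results,
 * recombine. All of it is the programmer's problem. */
#include <pthread.h>
#include <stdio.h>

#define N        1000000
#define NTHREADS 4

static long data[N];

struct chunk { int lo, hi; long sum; };

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    long s = 0;
    for (int i = c->lo; i < c->hi; i++)
        s += data[i];
    c->sum = s;             /* each thread writes only its own slot */
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = i;

    pthread_t tid[NTHREADS];
    struct chunk chunks[NTHREADS];
    int per = N / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        chunks[t].lo = t * per;
        chunks[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * per;
        pthread_create(&tid[t], NULL, sum_chunk, &chunks[t]);
    }

    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += chunks[t].sum;   /* the recombining step */
    }
    printf("total = %ld\n", total);
    return 0;
}
```

And that’s the easy case: the chunks share nothing, so the threads never need to lock anything until the final recombining step in main.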

For example, notice that above I said “a pile of unrelated things.” If the pieces of work to be done share data, distributing the work to multiple cores is a pain. It’s not just a question of CPUs, either; memory can be a point of contention. Imagine several cores all wanting to write to the same area of memory. They may clobber each other’s data if they don’t coordinate their efforts, and even when they do cooperate, each core’s local cache keeps being invalidated because the other cores are changing the same data, so the hardware spends its time shuttling cache lines back and forth instead of computing. Then, once work is complete on all cores, the program has to recombine those pieces into the single thing you wanted done in the first place. That takes a lot of synchronization and coordination.
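Here’s roughly what that contention looks like, as another hedged C sketch (thread and iteration counts invented for illustration). The first version funnels every increment through one lock and one shared cache line; the second keeps each thread’s tally private and coordinates only once, at the end:

```c
/* Two ways for four threads to bump a counter a million times each.
 * The "shared" version serializes on one lock and one cache line that
 * ping-pongs between cores; the "private" version lets each core run
 * free and pays for coordination only once, at the end. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    1000000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_count = 0;

static void *bump_shared(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);   /* every increment contends here */
        shared_count++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *bump_private(void *arg)
{
    long local = 0;                  /* lives in this core's cache */
    for (int i = 0; i < ITERS; i++)
        local++;
    *(long *)arg = local;            /* publish the result just once */
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    long partial[NTHREADS];

    for (int t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, bump_shared, NULL);
    for (int t = 0; t < NTHREADS; t++)
        pthread_join(tid[t], NULL);

    for (int t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, bump_private, &partial[t]);
    long total = 0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];         /* recombine once at the end */
    }
    printf("shared=%ld private=%ld\n", shared_count, total);
    return 0;
}
```

On a real multi-core machine, the shared version is frequently slower than doing all the increments on a single core, because the lock and the counter’s cache line bounce from core to core on every single pass.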

There are, of course, applications that just love multi-core systems, but they tend to be ones that run on servers and not desktops. Databases, file servers, and web servers are good examples. In those cases the requests really do tend to be independent; once a request is finished, the core can send off the response and grab another request from the pile. For those situations you often can get close to N times the performance out of an N-core system, but only if you’re measuring the throughput of the overall system.
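To put illustrative numbers on it: if each request takes 10 ms of processing, one core tops out near 100 requests per second, and eight cores chewing on independent requests approach 800 requests per second. Yet every individual request still takes its 10 ms. Throughput scales with the cores; response time doesn’t budge.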

There are rumors that Microsoft’s future ground-up rewrite of Windows, code-named Midori, might make it easier for desktop applications to take advantage of multi-core systems. I am not so sure it will be faster, though, because Midori is supposed to be based on the .NET Framework and will still need to run existing Windows applications in some sort of emulation mode. If that thing ends up being fast, it may only be because a 16-core system succeeds in hiding Midori’s software bloat.
