Snow Leopard, OpenCL, the kidney, and historical contingency
The next release of Apple’s OS is code-named Snow Leopard. It will include OpenCL:
OpenCL (Open Computing Language) makes it possible for developers to efficiently tap the vast gigaflops of computing power currently locked up in the graphics processing unit (GPU). With GPUs approaching processing speeds of a trillion operations per second, they’re capable of considerably more than just drawing pictures. OpenCL takes that power and redirects it for general-purpose computing.
That’s interesting: in the history of computing, people first added GPUs to offload a certain kind of processing work from the CPU onto specialized hardware. Now we’re taking some of that processing power back.
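For the curious, here is roughly what that “general-purpose” GPU code looks like in OpenCL’s C-based kernel language. This is a minimal illustrative sketch (the kernel name and the scaling operation are made up, not from Apple’s description): every work-item runs the same function on a different element of a buffer, which is exactly the data-parallel style the GPU’s hardware is built for.

```c
/* Minimal OpenCL C kernel sketch (illustrative only). Each work-item
 * scales one element of the buffer; thousands of them run in parallel. */
__kernel void scale(__global float *data, const float factor)
{
    size_t i = get_global_id(0);   /* this work-item's index in the global range */
    data[i] = data[i] * factor;
}
```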
The kidney, if I remember Dawn’s description of it right, works in stages. One stage adds too much water into the urine, but a later stage takes some back out. Why the kludge? Probably for historical reasons: if you’re a fish, being wasteful with water is not a problem. Crawl up onto land, and suddenly it is a problem. How to solve it? Add a patch to undo what an earlier stage does.
OK, so they’re not exact parallels. But stories like that are a way to remind myself of that useful quote, originally (I believe) from Kenneth Boulding, made a meme by Gerald Weinberg: “Things are the way they are because they got that way.”
Forgetting that is a problem, because we all tend to be Panglossians, thinking the thing we’re looking at is not only tuned to a problem but also optimally tuned. It may be bad but, well, I guess that’s just because bad is the best it can be.
If we realize it got to be what it is by a historical path, influenced by chance, we can be more ready to make a change, and more ready for two reasons: a historical accident is unlikely to be truly optimal, and our change will be only one in a long history of changes, so it’s not that important that it be right. We can also realize that we have at least two options: to add something on, or to take something away. The latter approach is more likely to be under-explored and so more fruitful.
June 16th, 2008 at 10:52 am
Actually, we’re not so much “taking it back” as finding that the massively parallel architecture is well suited to tasks other than graphics, such as physics or audio; with GPGPU, we can do things that are basically impossible on the CPU, whether that’s a 10-year-old CPU or tomorrow’s monster.
June 18th, 2008 at 2:30 am
The kidney analogy works - OCL is about taking advantage of the GPU’s streaming-machine architecture: a fast local memory hierarchy, and high compute density without the hardware complexity needed for random memory access or operation reordering. Not all algorithms will do well on GPUs. You have to create a language to deal with the requirements of multi-core synchronization. There’s lots to worry about as well - divergent warp formation, bugs, warp scheduling, load balancing, bugs, divergent hardware implementations, bugs, and other bugs.
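To make the synchronization point concrete, here is a rough sketch of the kind of construct OpenCL exposes for it (the kernel and tile size are hypothetical, not from the comment above): work-items in one work-group stage data through fast __local memory and must hit a barrier before reading what their neighbours wrote.

```c
/* Sketch of work-group synchronization in OpenCL C (hypothetical example).
 * Assumes the kernel is enqueued with a local work size equal to TILE. */
#define TILE 64

__kernel void reverse_tiles(__global const float *in, __global float *out)
{
    __local float tile[TILE];          /* fast on-chip memory shared by the work-group */
    size_t gid = get_global_id(0);
    size_t lid = get_local_id(0);

    tile[lid] = in[gid];               /* each work-item loads one element */
    barrier(CLK_LOCAL_MEM_FENCE);      /* wait until the whole tile is loaded */
    out[gid] = tile[TILE - 1 - lid];   /* read an element another work-item wrote */
}
```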
September 24th, 2008 at 4:51 am
The GPU wouldn’t be able to function as a CPU in its own right, so it is more like a specialised set of circuitry that performs the same operation on large banks of data in parallel. The programmable aspect is being able to specify what those operations will be, and operating on lots of data has become more normal with the large amounts of memory and disk these days.
However, when it comes to decisions, loops, and the majority of current tasks that depend on human input, the CPU is still much faster than the GPU. Look at an algorithm like SIFT on the GPU, and you’ll see that the pattern matching was moved from the GPU back to the CPU for a 15x-20x speed increase, because the CPU can cope with branches and cache flushes.
So we have the CPU for fast general-purpose operations, acting as a front-end controller for more specialised hardware like the GPU. In effect we have multiple architectures, so problems can be fitted to the type of processing that suits them best. Processing large amounts of data in parallel is almost the opposite of normal CPU operation, so the CPU and GPU together cover a good proportion of the computing landscape.
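For what it’s worth, here is a rough host-side sketch of that controller role, using the OpenCL 1.x C API (error checking and resource releases omitted; the kernel and names are illustrative, matching the earlier scale example): the CPU picks a device, ships the data and a kernel across, kicks off the parallel work, and reads the result back.

```c
/* Hedged sketch of the CPU as front-end controller (OpenCL 1.x, no error checks). */
#include <stdio.h>
#include <CL/cl.h>          /* on Mac OS X: #include <OpenCL/opencl.h> */

static const char *src =
    "__kernel void scale(__global float *d, const float f) {"
    "    size_t i = get_global_id(0); d[i] *= f; }";

int main(void)
{
    enum { N = 1024 };
    float data[N];
    for (int i = 0; i < N; i++) data[i] = (float)i;

    /* CPU side: find a GPU and set up a context and command queue for it. */
    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Ship the data and the kernel source to the device. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(factor), &factor);

    /* Launch N work-items in parallel, then copy the result back to the host. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[2] = %f\n", data[2]);   /* expect 4.0 */
    return 0;
}
```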