OpenCL was announced last year, but there is still no implementation available. I’d like to know why, because I still expect it to start a little revolution once it is supported by all the major chip manufacturers. It should be huge, but all the Internet seems to say about it (according to my Google Alerts subscription) is “Apple introduces some weird/interesting API with the next OS X, blah blah“. Am I really misjudging the importance of this?

Right now, there are millions of PCs in people’s homes, being used for multimedia and office applications but built from hardware that would have been considered high-end only a few years ago: multi-core CPUs and massively parallel GPUs, never used to their full capacity. On the other side are gaming platforms, idle most of the time, but pushed to their limit when running modern PC games. In between there are a few computing-intensive applications like Folding@Home, video encoding/decoding and modern GUIs with lots of eye candy.
But there could be more.
The problem with CPUs and GPUs (and other types of processors, like the Cell for example) is that there are huge differences in programming them. All the major programming languages are compiled to run on a CPU. GPUs were developed as hardware implementations of specific graphics APIs like DirectX and OpenGL. For many years they didn’t do much more than
- transform geometry
- interpolate vertices and textures
- do some basic lighting calculations (Gouraud shading).
Then programmable shaders came along, and some people (mostly researchers at universities who wanted to build low-cost supercomputers) started reducing their computing-intensive problems to graphics-specific ones. Writing general-purpose GPU programs this way is pretty cumbersome though. GPU manufacturers noticed that customers were using their parallel processors for things other than computer graphics, so they released SDKs which made development a little easier. Right now, you can trade “cramming everything into textures and shaders” for “being bound to one specific manufacturer“. Not bad for researchers who need some cheap computing power for a project that will only ever run on specific hardware, but pretty much useless for developers who want their software to run on as many platforms as possible. That’s why PhysX never really took off, even though it now runs on nVidia GPUs. But GPGPU computing is on the rise, nevertheless.
On the other side, we have CPUs with an increasing number of cores. Per-core speed seems to be hitting some kind of wall, so we get more cores instead. Unfortunately, many developers find it hard to use parallel execution – unsurprisingly, since programming is mostly a left-brain exercise.
Trying to develop applications that make the most of multi-core CPUs and GPUs with almost a hundred shader units (older ones even having separate units for vertex and pixel/fragment shading) is not only hard, it also makes you jump through all kinds of hoops while looking for an appropriate abstraction layer over all the different APIs.
Now imagine that abstraction layer already being there, supported by the manufacturers and giving you a unified interface to all kinds of processing units. Seen through that interface, they differ only in things like memory access and task/data parallelism. You write the control logic in the programming language of your choosing and the parallel parts in a special language that can be compiled for every processor the interface supports. That would be OpenCL.
There are two obstacles standing in its way to success:
- Hardware vendors wanting to bind your application to their hardware
- People who don’t know what they are missing
As far as I can tell, Intel is the main obstacle here. They are trying to push their Larrabee platform and use it to drive nVidia and AMD out of business by combining CPU and GPU on a single chip and giving developers a special API to access it. Hey, and maybe that will work. Game (and other) developers will be happy to leave the existing hardware base behind and develop exclusively for one – initially inferior – platform, preferably using Intel’s special C++ compiler. nVidia will wither and die since nobody will use GPUs anymore, and AMD will cease to matter as well because even though they have everything they need to build something similar, each of their engineers will be hit by an asteroid or something.
Or maybe support for OpenCL will grow, along with applications that scale incredibly well on cheap hardware. Game engines will provide developers with amazing graphics, AI and physics capabilities, maybe even let the player choose between ray tracing and rasterizing – actually paving the way for powerful multi-core CPUs. New applications will emerge, with uses nobody has thought of before. 50 years ago, would you have foreseen all the possible uses for something that is little more than an incredibly powerful calculator?
Unfortunately, just supporting OpenCL isn’t enough to start this kind of revolution. The API is very low-level: the programmer has to manage device memory explicitly, allocating buffers and copying data between host and device by hand. That’s probably not something somebody who is used to a garbage collector is eager to do. But I’m sure there will be higher-level APIs, hopefully open-source and cross-platform, that will make all of this a bit easier and more manageable. And then I think we will see a lot of new ideas, richer games and probably even more demand for hardware.
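For flavour, this is roughly the host-side bookkeeping OpenCL asks of you for a single kernel launch. Treat it as a sketch of the call sequence, not a complete program: error handling is omitted, and names like `ctx`, `queue`, `kernel`, `host_a` and `host_out` are assumed to have been set up earlier.

```c
/* Sketch: manual device-memory management around one kernel launch. */
cl_mem a_buf   = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n * sizeof(float), NULL, NULL);
cl_mem out_buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), NULL, NULL);

/* Copy the input to the device by hand... */
clEnqueueWriteBuffer(queue, a_buf, CL_TRUE, 0, n * sizeof(float), host_a, 0, NULL, NULL);

clSetKernelArg(kernel, 0, sizeof(cl_mem), &a_buf);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &out_buf);

/* ...launch one work-item per element... */
size_t global = n;
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);

/* ...copy the result back by hand... */
clEnqueueReadBuffer(queue, out_buf, CL_TRUE, 0, n * sizeof(float), host_out, 0, NULL, NULL);

/* ...and release everything yourself. No garbage collector here. */
clReleaseMemObject(a_buf);
clReleaseMemObject(out_buf);
```

Every allocation, copy and release is the programmer’s job – exactly the kind of busywork a higher-level wrapper could hide.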
edit: So nVidia uses CUDA as the primary interface to their GPUs while exposing as much as they can through OpenCL (with CUDA behind it). Works for me!