They used to call them 3D accelerators, and eventually “graphics processing units,” or GPUs, but these days graphics chips are so programmable that we might as well start calling them “general processing units.” (DVDs played the same game when “digital video disc” took on the wider “digital versatile disc” moniker.) Graphics chips are becoming more powerful and more programmable; they’re becoming a lot like an extra general-purpose processor in your system.
This has two main consequences:
First, graphics programmers will start to explore alternate rendering paradigms like ray tracing or voxel rendering. Ars Technica has a great interview with Tim Sweeney of Epic Games on this subject. Sweeney’s point is that once GPUs are truly programmable and truly powerful, rendering APIs like Direct3D and OpenGL become a crutch, limiting innovation and creativity in graphics. Developers will simply write their own renderers using established or hybrid paradigms, and GPUs become little more than extra processing power.
Second, GPU hardware is getting general enough to be repurposed for other compute-intensive tasks. I remember my computer architecture teacher telling me in 2005 about biology researchers using GPUs to accelerate floating-point-intensive simulations. Now I’ve just read, in another Ars Technica piece, about a commercial software package that uses GPUs to accelerate password cracking.
We’ve known since Alan Turing’s time that all information-processing hardware is equivalent up to some small overhead. Building specialized hardware for specialized tasks can yield performance gains, but only for large tasks where that overhead is non-trivial. The so-called “physics accelerators” starting to appear in high-end gaming machines seem doomed to failure for this reason: the workload isn’t demanding enough to justify dedicated silicon. Hardware-accelerated audio was just a blip on most gamers’ radar eight years ago. Why create a dedicated processor for physics, audio, or graphics when you could instead just upgrade to a CPU with more cores?