Last Thursday, August 12, 2010, NVIDIA (NVDA) reported its results for the quarter ending August 1, 2010, its second fiscal quarter of 2011. The results can be read as a glimpse into July for the semiconductor industry, since most other companies in the space reported second quarters ending June 30th. There is concern that the recent boom in chip sales could slow appreciably because end demand from businesses and consumers is growing more slowly (though not actually shrinking).
Here I want to discuss the big trends engulfing Nvidia and the future of computer processing, including graphics processing. If you are interested in details of Nvidia's 2nd quarter, see my NVIDIA Q2 Analyst Conference Call summary.
Since graphics on personal computers began to be important in the late 1980s, there has been a divergence between the central processing unit (CPU) and the graphics processing unit (GPU). Fans of history may recall an earlier, similar divergence between the CPU and the floating point unit (FPU): you could pay extra for a computer with an Intel 8087 coprocessor if you wanted to do serious math, or to use that math capability to accelerate graphics calculations.
This divergence between general computing needs and floating-point math and graphics needs is fundamental. In an era that may be coming to a close, Intel and AMD (and once upon a time, Motorola) produced most of the CPUs for the industry, while ATI (now part of AMD) and Nvidia made the graphics processing units. Many computers had no GPU at all, leaving the graphics work to the CPU, which is fine for slow work like word processing. Intel, Nvidia and AMD also all made motherboard chipsets that included some GPU capabilities, and as usual those became more capable with time. Your $1000 GPU card of 2000 is not as powerful as the lowest-end Intel graphics of today. On the other hand, graphics demand has gone up with the introduction of high-definition video, the fast-rendering needs of games, and graphics content production systems. You don't want Intel graphics for any of those.
Conversely, the powerful GPUs from Nvidia and AMD can now be used to accelerate many tasks that CPUs do more slowly. For now it is mostly scientists, engineers, and graphics designers who take advantage of these capabilities, but they will trickle down toward the mainstream. Moms may use them to convert an HD video to lower definition, or vice versa. A simple sketch of what this GPU offloading looks like follows below.
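To make the idea concrete, here is a minimal CUDA sketch. This is my own illustration, not anything from Nvidia's report, and the numbers and names are arbitrary; it simply shows the kind of uniform, data-parallel arithmetic a GPU spreads across thousands of threads while a CPU would grind through it in a loop.

    // saxpy.cu -- illustrative only: scale-and-add over a million elements.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each GPU thread computes one element: y[i] = a * x[i] + y[i].
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;                  // one million elements
        const size_t bytes = n * sizeof(float);

        // Host-side data.
        float *hx = (float *)malloc(bytes);
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

        // Copy to GPU memory, run the kernel, copy the result back.
        float *dx, *dy;
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);   // 256 threads per block

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", hy[0]);           // expect 4.0

        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }

The point is not the arithmetic but the structure: a million independent little calculations with no dependencies between them, which is exactly the workload where a GPU leaves a CPU behind.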
For the most part, FPUs were killed off when floating point processing was integrated into CPUs, starting with Intel's 486DX and continuing with the Pentium. As transistor sizes shrank and Windows became the common operating system, that became a natural way to give users more for their dollar.
2010 is the year that GPUs and CPUs are being merged onto a single chip. Unless you are an industry insider, you won't see the products until 2011.
Because high-end GPUs actually use more silicon space than current CPUs, there will continue to be life for independent GPUs, typically on video cards, for at least a few years. Video and graphics content makers, scientists and engineers will want workstations that include both a high-end CPU and a high-end GPU card. But the rest of us will probably migrate to the new combined chips (AMD calls them APUs, accelerated processing units) between 2011 and 2015.
While Intel might be behind AMD in graphics, I would not want to bet on its losing much market share in the new combined CPU/GPU arena. Intel's profits dwarf AMD's revenues, and its R&D budget is correspondingly larger. Intel will most likely find ways of keeping its market share.
For Nvidia there is a bigger problem: it does not make CPUs at all. But before addressing that, let's look at the new tide coming in: mobile processing, mostly built on ARM-based processors.
Mobile devices are rapidly becoming more powerful, as you can see from the iPad, iPhone, Droid, etc. The key is low voltages with correspondingly low power consumption, plus the usual shrinking of transistor sizes, which makes it possible to integrate more processing capability into each chip.
It is conceivable that, just as PCs wiped out minicomputers and even, for the most part, mainframes, the new mobile CPUs could start invading the notebook, desktop, and even server space. It is already being tried; the reasoning is that a swarm of small, low-energy chips can make ideal virtual machines for serving web pages.
Nvidia has the Tegra platform, which you'll be seeing a lot of later this year. It integrates an ARM CPU, an Nvidia GPU, and motherboard chipset functions on a single chip. It is a beautiful piece of work.
Nvidia also appears to be winning out over AMD in the market for high-end GPUs used for non-graphics computation.
I am not ready to declare the long line of evolution that began with the 8086 to be over. Those chips may simply absorb what is best from the newcomers and morph to meet changing user needs. But the competitive dynamics are certainly going to change over the next few years. If Nvidia's Tegra is a competitor, so are chips from Apple, Marvell, Broadcom, Qualcomm, TI, and many others. In fact, too many companies have pinned their hopes on ARM processors. Even if ARM becomes the architecture of the future, a couple of these companies will likely gain a marketing or manufacturing advantage, and the field will narrow over time.
Nvidia's best hopes for future profits probably lie in specialized graphics processors for computation-intensive work. Such chips may no longer be necessary for the average person's desktop or notebook, but high-end GPUs face much less competition and therefore carry higher profit margins on each unit sold.