The K1 has four Kepler GPUs, each supporting up to eight users, for a total of up to 32 users. The K2 has two GPUs and supports up to 16 users, but its Keplers are the high-end variety, meant for high-end graphics.
With this hardware from NVIDIA, a company can house shared workstations in racks at a central location, each loaded with NVIDIA's GRID graphics boards, and then run a fleet of cheap terminals. Users may still need a good display or two, but companies are saved from buying expensive graphics workstations for everyone. This arrangement is known as "virtualized workstations."
If you prefer a more monogamous one-user, one-workstation relationship, you may wonder why on Earth any user would want to share a workstation. The benefit of sharing lies not with you, dear user, but on the shared-server side. Think big installations. A single server can be cared for in one place by a dedicated, trained staff, easing maintenance. So for beehives of graphical activity, such as animation and rendering shops, it makes sense for the IT staff to keep the servers under one roof, so to speak, for easy access, rather than running to each little cubby where a wild-eyed, ponytailed special-effects wizard has caused mayhem by loading an unauthorized program and bringing down the network, ignoring every one of the for-your-own-good rules the IT staff issues from time to time.
Milan Diebel of NVIDIA was kind enough to show me this technology at the Siemens PLM event. I asked Milan if the reason for sharing was cost, but he indicated that was not so. "At $3750 for a K2, it's not so much a pricing issue," he says. "Data centralization adds tremendous value in terms of improved collaboration, productivity and IP protection, not to mention the benefits for IT of centralized care, management and maintenance of hardware...that is what makes sense for the bigger companies."
Playing user advocate, I looked closely at a model on a virtualized workstation, compared to one on a regular workstation, to see whether rotating a CAD assembly lags, but I could not really detect any. That problem, known as latency, has long plagued remote computing systems. NVIDIA claims to have reduced latency with patented technology.
It looked good to me, but I can be easily impressed. The next step is to get this in the hands of a discerning CAD user for real testing.
For more information from NVIDIA, see http://www.nvidia.com/object/grid-boards.html