Wikipedia:Reference desk/Archives/Computing/2019 October 26

Source: Wikipedia, the free encyclopedia.
Computing
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is a transcluded archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 26

Dual GPU and Dual CPU: Should I put the two GPUs on the same CPU or on different CPUs?

I am planning a PC for use as a CAD workstation.
The first priority is CAD, but it wouldn't break my heart if it played games well.
My question: Should I put the two GPUs on the same CPU or on different CPUs?
I have searched, but could not find anyone who has tried it both ways.

Specs:
Gigabyte C621-WD12 motherboard.
(Half the PCIe slots are on CPU1, the other half are on CPU2)
Two Xeon Gold 6244 processors.
Two Nvidia Quadro RTX 8000 GPUs.
192GB RAM.
--Guy Macon (talk) 01:34, 26 October 2019 (UTC)[reply]

I would guess that 1 GPU per CPU would work out better. But get those stoats out of there before they chew through any wires ! :-) SinisterLefty (talk) 02:02, 26 October 2019 (UTC)[reply]
It's hard to make an educated guess. If the part of the program that issues video commands runs on one CPU, I would guess that the same CPU would win. If multiple threads are issuing video commands, I would guess that separate CPUs would win. Then there is the aspect of some programs offloading computation work to the GPUs, which might act differently from using the GPUs to generate video. --Guy Macon (talk) 04:03, 26 October 2019 (UTC)[reply]
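(For anyone trying to answer the "which CPU owns which GPU" question on an existing Linux box, here is a minimal sketch of how to look it up from sysfs. Nothing in it is specific to any CAD package, and with the NVIDIA drivers installed, `nvidia-smi topo -m` reports similar information.)

```python
# Minimal sketch (Linux only): report which NUMA node -- and therefore which
# CPU socket -- each NVIDIA GPU hangs off, straight from sysfs.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "vendor")) as f:
            vendor = f.read().strip()
        with open(os.path.join(dev, "class")) as f:
            pci_class = f.read().strip()
        with open(os.path.join(dev, "numa_node")) as f:
            numa_node = f.read().strip()
    except OSError:
        continue
    # 0x10de is NVIDIA's PCI vendor ID; class 0x03xxxx is a display controller.
    if vendor == "0x10de" and pci_class.startswith("0x03"):
        print(f"{os.path.basename(dev)}: NUMA node {numa_node}")
```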
I have experience with dual CPUs but not dual graphics cards. You might read this. It wouldn't be hard to try both setups and benchmark. Bubba73 You talkin' to me? 02:17, 26 October 2019 (UTC)[reply]
I had just read that before posting here. Unlike the case with gaming, we know that multiple Nvidia Quadro cards (the Quadro is optimized for CAD) are always worth having when you are doing high-end CAD work. I was surprised that nobody seems to have ever compared the dual-GPU-same-CPU and dual-GPU-different-CPU configurations for gaming or CAD. --Guy Macon (talk) 04:03, 26 October 2019 (UTC)[reply]
They mention 3D gaming, but not CAD, so I didn't know if the program could benefit from dual GPUs. Bubba73 You talkin' to me? 04:30, 26 October 2019 (UTC)[reply]
Presumably the same caveats apply as to games. That is, rotating a 3D shaded model could benefit from 2 GPUs if the CAD system is smart enough to have each GPU render every other frame, but if poorly coordinated, it could end up worse than one GPU. Thus, you would need to look at reviews for that particular CAD system to see how well it handles dual GPUs. Surprisingly, they didn't mention the problem of heat. At the 500 watts mentioned, that's a lot of heat to dissipate. The onboard fans also look like they would be less effective if the GPUs were in adjacent slots. SinisterLefty (talk) 13:14, 26 October 2019 (UTC)[reply]
Heat and CAD is an interesting trade-off. A lot of times you will look at a CAD card like the Quadro and see that there is a gaming card with similar specs for a third of the price. But the first time you tell your PC to spend all night rendering or autorouting you see the difference; the gaming GPU throttles as it overheats while the CAD GPU stays at full speed. Drivers are also an interesting aspect. There are drivers that are optimized for the major CAD GPUs included with most high-end mechanical CAD systems, and they invariably work well when you put in multiple cards. Alas, I work with electronic CAD, and driver support for autorouting PC Boards is often a bit spotty compared to driver software for rendering mechanical designs.
Once all the parts arrive and I put together the new system, I will try various configurations and see what performs best. Other trade-offs are crazy expensive to investigate. For example, consider these two CPU choices:
  • Xeon Gold 6244: $2925.00 ea., 8 Cores 16 Threads 3.60 GHz all cores/4.40 GHz single core
  • Xeon Gold 6240Y: $2726.00 ea., 18 Cores 36 Threads 2.60 GHz all cores/3.90 GHz single core
It would cost me an extra $6000 to try both in a dual CPU system. On my current system what I do usually ends up running on 10-20 cores, so 16 cores at 3.60 GHz seems like a better choice than 36 cores at 2.60 GHz. But that is just a guess. Maybe the CAD will use a lot more cores if they are available.
Granted, the money people are willing to pay for a good PC board design makes these kinds of high-end processors pay for themselves, but is whatever benefit I might get by knowing which CPU is best instead of guessing worth $6K? No. --Guy Macon (talk) 17:51, 26 October 2019 (UTC)[reply]
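(A crude way to put numbers on that guess: treat throughput as cores times all-core clock, which ignores IPC, turbo, memory bandwidth, and how many cores the autorouter can actually keep busy. The 16-core cap below is just the "10-20 cores" observation above, not a measurement.)

```python
# Back-of-envelope comparison of the two dual-CPU options above.
options = {
    "2x Xeon Gold 6244 ": {"cores": 16, "all_core_ghz": 3.60, "price": 2 * 2925.00},
    "2x Xeon Gold 6240Y": {"cores": 36, "all_core_ghz": 2.60, "price": 2 * 2726.00},
}

for name, o in options.items():
    ghz_cores = o["cores"] * o["all_core_ghz"]
    print(f"{name}: {ghz_cores:5.1f} GHz-cores for ${o['price']:.2f}")

# If the job really only scales to ~16 threads, cap the usable cores:
USABLE = 16
for name, o in options.items():
    effective = min(o["cores"], USABLE) * o["all_core_ghz"]
    print(f"{name}, capped at {USABLE} cores: {effective:5.1f} GHz-cores")
```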
OK, so you don't already have the hardware. And the best number of cores to run on can be tricky. If it is CPU bound, my guess is that the higher number of cores at a slower speed would be best, if it can use them all. If memory bandwidth is a bottleneck, then likely the reverse. I was testing programs on systems with two 8-core Xeons (hyperthreaded), running different numbers of threads. In CPU-intensive stuff, 32 threads was the best. But on memory-intensive stuff, the performance went down for more than 16 threads because of non-uniform memory access, from one Xeon to the memory of the other. Bubba73 You talkin' to me? 21:46, 26 October 2019 (UTC)[reply]
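(A generic skeleton of that kind of thread-scaling sweep, with a dummy compute kernel standing in for the real workload; on a dual-socket box the interesting part is where the timing stops improving, or gets worse, as the worker count crosses the per-socket core count.)

```python
# Sweep several worker counts over the same CPU-bound job and time each run.
import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_at(workers: int, jobs: int = 64, size: int = 2_000_000) -> float:
    start = time.perf_counter()
    # Processes rather than threads so Python's GIL doesn't mask the scaling.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy_work, [size] * jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (1, 2, 4, 8, 16, 32):
        print(f"{workers:2d} workers: {run_at(workers):6.2f} s")
```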
At those prices, it's easy to see why testers don't try out every possible configuration and report the results.
Are you able to put the GPUs as far apart on the same CPU as on different ones ? If not, then the heat issue may well make the difference.
Also, if you want to do rendering while away from the PC, versus real-time, then you might do better to go with different PCs entirely (2 or more, including your current system as one). You could go with lower-priced PCs, and run multiple renderings at once (one per PC). We really don't know how well your CAD system handles multiple GPUs, whether on the same CPU or not, but we can guarantee that you can use two GPUs on two different PCs without any type of "collision". Plus you wouldn't be completely dead in the water if a PC dies on you, and the heat from a pair of GPUs will be less of an issue in two different boxes. However, you would need to consider if you would be required to buy an extra license(s) for the CAD system or if it allows a few copies at the same site for this purpose. You would also need a network that would allow you to transfer the completed renderings quickly, but I am guessing you already have that (presumably storing backups on a standalone hard drive). A KVM switch would eliminate the need to buy duplicate monitors, keyboards, and mice. SinisterLefty (talk) 19:52, 26 October 2019 (UTC)[reply]
I will look into that. I have been using "rendering" as a verbal shorthand for "stuff the GPU does at night while I sleep", but in reality what it is doing is constantly ripping up and retrying different designs for printed circuit boards. I often end up autorouting a four-layer version and a six-layer version and then running electrical simulation software on each to see what my noise margins are. Two PCs would be great for doing that. Good suggestion. --Guy Macon (talk) 20:58, 26 October 2019 (UTC)[reply]
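(A hedged sketch of farming those overnight runs out to two machines over ssh. "route_batch" is a hypothetical command-line autorouter, and the host and file names are placeholders; substitute whatever your CAD package's real batch interface looks like.)

```python
# Launch one autoroute job per machine, wait, then pull the results back.
import subprocess

jobs = [
    # (host, remote command, remote result file) -- all placeholders
    ("cad-box-1", "route_batch --layers 4 board.dsn board_4layer.ses", "board_4layer.ses"),
    ("cad-box-2", "route_batch --layers 6 board.dsn board_6layer.ses", "board_6layer.ses"),
]

procs = [(host, out, subprocess.Popen(["ssh", host, cmd])) for host, cmd, out in jobs]
for host, out, proc in procs:
    proc.wait()                                                # wait for that box to finish
    subprocess.run(["scp", f"{host}:{out}", "."], check=True)  # copy the result back
```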
Guy Macon, I strongly recommend each GPU be driven by its own CPU. This allows both GPUs to be serviced at the same time, with each CPU handling its GPU independently.
Note: I am not aware if Windows can actually take advantage of this. I do know that Linux can (And does so quite well with NVIDIA's proprietary drivers.. when they work, anyways.)
(talk) 18:12, 26 October 2019 (UTC)[reply]
Further note: This does depend on what software you're using. The software itself has to be smart enough to drive the GPUs independently. If it is not, then it will make minimal difference.
(talk) 18:13, 26 October 2019 (UTC)[reply]
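(A rough sketch of what "driving the GPUs independently" can look like at the process level: one worker process per GPU, each restricted to a single card via the CUDA_VISIBLE_DEVICES environment variable and pinned near its own socket with taskset on Linux. The worker script and core ranges are placeholders; check lscpu or the board manual for which cores and PCIe slots belong to which CPU.)

```python
# One worker process per GPU, each bound to one card and one socket's cores.
import os
import subprocess

WORKER_CMD = ["python3", "render_worker.py"]   # placeholder for the real job

gpu_to_cores = {
    "0": "0-7",    # assumption: GPU 0 is on CPU 0's PCIe lanes
    "1": "8-15",   # assumption: GPU 1 is on CPU 1's PCIe lanes
}

procs = []
for gpu, cores in gpu_to_cores.items():
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=gpu)
    procs.append(subprocess.Popen(["taskset", "-c", cores] + WORKER_CMD, env=env))

for p in procs:
    p.wait()
```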
Thanks! I was thinking the same thing. A lot of the time it turns out that the GPU ends up loafing because the CPU can't feed it work fast enough. I have used a lot of multicore PCs, but I haven't used multiple processors since the Pentium Pro days, and I have never used multiple GPUs on a system I own (they are common on engineering workstations in places I have worked). Hey, nothing like relearning everything all over again! If only I could erase what I know about 6502 assembly language on a Commodore 64 to make room for the new stuff... :)   BTW, I spend around 25% of my time in Windows 10 and 75% in Slackware Linux, except when I am on a job site in China, where I do everything using Tails Linux on a locally-purchased PC. --Guy Macon (talk) 20:46, 26 October 2019 (UTC)[reply]

The Quadro isn't that much different from the GTX/RTX aside from being a lot more expensive and omitting the bogus software license restrictions that supposedly don't allow you to use the gaming versions in a data center. If you're building this workstation for home, that won't affect you. And with that much cash going to the CAD system, you can afford a separate box for gaming, which will make life simpler in terms of software hygiene etc. Finally, unless you're dead set on that specific hardware, check the new AMD stuff, including the forthcoming Threadrippers that should be announced in the next few weeks. The GPUs are another matter: the AMD hardware is getting competitive with Nvidia again, but the software isn't really there yet other than for gaming. 173.228.123.207 (talk) 01:35, 27 October 2019 (UTC)[reply]

A lot of people report a different experience. See https://forums.tomshardware.com/threads/solidworks-gaming-pc.789000/#post-6443286 as one example. I personally have experienced the "call the CAD vendor tech support, get told 'call us back when you are running on approved hardware and drivers'" effect. I am not a big gamer. A bit of Minecraft or Stockfish, maybe, when I am dead in the water waiting for something to happen on the job. But when I do electronic design, it pays well enough to make it worth my while to not only have the best CAD workstation I can get, but to have an identical spare system and good backups so I can switch over in less than half an hour. No, it is Nvidia Quadro, Intel Xeon, and a Gigabyte motherboard optimized for CAD instead of gaming for me. --Guy Macon (talk) 06:15, 27 October 2019 (UTC)[reply]
(EC, written before Guy Macon's latest reply) While it's true that most of the differences between the Quadro and GeForce cards are market segmentation, ECC memory is a genuine one: no GeForce card has it, not even the Titan RTX, which is almost the same thing as the RTX 6000 except for ECC and other market segmentation differences. Certain Quadros, although not the ones the OP is looking at, also have far better double-precision floating point performance [2]. Finally, AFAIK it's very difficult, if possible at all, to use the Quadro drivers with GeForce cards, even the Titan RTX. You used to be able to either hack the drivers or flash a Quadro BIOS, but I think this doesn't generally work nowadays, probably because there are enough differences, unlike in the past where often the cards were really the same thing. A fair amount of workstation software, including CAD software, is built around the professional cards and drivers, with minimal testing and support and definitely no certification for the GeForce drivers and cards. Nvidia may also artificially limit features on GeForce drivers (see [3] for one no-longer-correct example irrelevant to the OP). I'm not sure if this applies to the OP's current plans, but it's something they've brought up before, so I wouldn't be surprised if it does. Actually, there's a reasonable chance some AMD professional cards will be a better bet than GeForce ones for certain use cases (where CUDA support doesn't matter, obviously). Don't get me wrong, I'm not saying you should never use GeForce cards in a non-gaming setting. There are plenty of cases where it makes sense, even more so if you're just doing something on the side. Also, if interacting directly with the card, the drivers probably aren't quite as important. But the OP's comments suggest this doesn't apply to them, and the savings from spending half as much (or whatever) on the GPUs probably aren't worth the risks and potential pitfalls. (I'm confident the OP already knows this, but I felt it may be helpful to explain why it probably doesn't make sense for them.) Nil Einne (talk) 06:51, 27 October 2019 (UTC)[reply]
The above describes my thinking pretty much exactly. I never really looked at the specs on the gaming cards -- I knew that the driver support isn't there -- and thus didn't notice the lack of ECC, which alone would be a deal breaker for me. --Guy Macon (talk) 07:14, 27 October 2019 (UTC)[reply]