This week NVIDIA introduced the $149 USD Turing-powered GTX 1650 graphics card. On launch day I picked up the ASUS GeForce GTX 1650 4GB dual-fan edition (Dual-GTX1650-O4G) graphics card for Linux testing, and here are the initial GTX 1650 Linux performance benchmarks under Ubuntu compared against an assortment of lower-end and older AMD Radeon and NVIDIA GeForce graphics cards.
For $149+ USD, the GeForce GTX 1650 features 896 CUDA cores, a 1485MHz base clock with a 1665MHz boost clock, 4GB of GDDR5 video memory, Volta-based NVENC video capabilities (not the newer Turing NVENC, but still good compared to older generations of NVIDIA GPUs), and just a 75 Watt TDP, meaning no external PCI Express power connector is required.
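That 75 Watt figure matters because a PCI Express x16 slot can deliver up to 75 W on its own. As a minimal sketch (the function name and the simple slot-budget comparison are illustrative assumptions, not anyone's official tool), the "no connector needed" reasoning works out like this:

```python
# The PCIe x16 slot supplies up to 75 W by itself, so a card whose TDP
# stays at or below that figure needs no auxiliary power connector.
PCIE_SLOT_BUDGET_W = 75


def needs_aux_power(tdp_watts: float) -> bool:
    """Return True if the card's TDP exceeds what the slot alone supplies."""
    return tdp_watts > PCIE_SLOT_BUDGET_W


print(needs_aux_power(75))   # GTX 1650 at 75 W TDP -> False, slot power suffices
print(needs_aux_power(120))  # a higher-TDP card -> True, needs a 6-pin connector
```

This is why the GTX 1650 can slot into small-form-factor and OEM systems whose power supplies lack PCIe power cables.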
In the case of the ASUS Dual-GTX1650-O4G, I paid $160 USD on launch day, though other models were indeed hitting the $149 price point. This particular ASUS SKU uses the same 1485MHz base clock, but its GPU boost clock can reach 1725MHz compared to the 1665MHz reference clock. There is also an ASUS GPU Boost Clock mode under Windows to reach 1755MHz. No manual overclocking was attempted with this graphics card, since GPU overclocking is covered on plenty of other websites while the focus here is on the Linux support and performance aspects.
The ASUS GeForce GTX 1650 Dual-Fan Edition features outputs for DVI-D, HDMI 2.0b, and DisplayPort 1.4. The GTX 1650 does support driving three displays simultaneously. This ASUS graphics card with its two fans is a standard dual-slot form factor, and the card measures 20.4 x 11.5 x 3.7 cm.
This GTX 1650 graphics card was working fine under Linux in conjunction with the new NVIDIA 430.09 beta Linux driver. The initial round of tests was from Ubuntu 19.04 x86_64 with the Linux 5.0 kernel. No problems were encountered in the time spent thus far benchmarking a variety of OpenGL and Vulkan Linux games (including Steam Play / DXVK titles) and some OpenCL / CUDA compute workloads.
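For readers wanting to confirm their own card and driver are detected before benchmarking, `nvidia-smi` can report the GPU name, driver version, and maximum graphics clock as CSV via `--query-gpu=name,driver_version,clocks.max.gr --format=csv,noheader`. The sketch below parses a hard-coded stand-in line rather than invoking the tool (an assumption so the example is self-contained; live output follows the same comma-separated shape):

```python
# Stand-in for the output of:
#   nvidia-smi --query-gpu=name,driver_version,clocks.max.gr --format=csv,noheader
# (hard-coded sample so this snippet runs without a GPU present)
sample = "GeForce GTX 1650, 430.09, 1725 MHz"

# Split the CSV fields and trim surrounding whitespace.
name, driver, max_clock = [field.strip() for field in sample.split(",")]
print(f"{name}: driver {driver}, boost up to {max_clock}")
```

Substituting the real `nvidia-smi` invocation (e.g. via `subprocess.run`) for the sample string gives a quick sanity check that the 430.09 driver is active and the advertised boost clock is exposed.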