Affects Version/s: 19w42a, 19w45b, 19w46b, 1.15 Pre-release 1, 1.15 Pre-Release 2, 1.15.1, 1.15.2 Pre-release 2, 1.15.2, 20w07a, 20w08a, 20w09a, 20w10a, 20w11a, 20w21a, 1.16 Pre-release 5, 1.16 Release Candidate 1, 1.16, 1.16.1, 20w29a, 1.16.2 Pre-release 1, 1.16.2 Release Candidate 2, 1.16.2, 1.16.3, 1.16.4 Pre-release 1, 1.16.4 Pre-release 2, 1.16.4, 20w45a, 20w49a
Fix Version/s: None
Environment: up-to-date Windows 10 Home version 1909, NVIDIA drivers 441.20 and Java 8 Update 231 (64-bit)
Mojang Priority: Very Important
The GPUs are not boosting to their 3D clock speeds. Someone on the rendering team needs to set a flag or hint somewhere in the code so the driver gets the memo that this particular OpenGL workload is 3D rendering rather than general compute or crypto mining. Only then can we start talking about optimization, which is far worse even with the GPU boosting correctly. Spoiler alert: a drop of more than 50%.
- First, a baseline test of 1.14.4 with no tweaks or shenanigans. The Minecraft seed for the latest round of tests is "test2", a freshly generated world with standard settings. About 200 FPS after a minute of waiting for the GPU clocks to stabilize. 44% GPU utilization at 2025 MHz.
- Then 1.15-pre1 with no tweaks or shenanigans, reusing the world from 1.14.4 because the world-generation bugfixes produce different terrain in some places, and I want a 100% correlated test within reason (allowing for villagers or mobs wandering into view). About 110 FPS, 38% GPU utilization at 1395 MHz (which is the 2D-mode clock).
- Then you can force the driver to boost the GPU to Maximum Performance via the NVIDIA Control Panel (right-click on the Windows desktop). You have to manually add javaw.exe in the Manage 3D Settings tab, AND manually select the GPU (a 1660 Ti in this example) as the preferred OpenGL renderer. The workaround does not work with only one of the two changes applied. Unfortunately I can't help Radeon users because I don't have one of their GPUs. I'm attaching a screenshot of my own settings for visual reference; sorry it's in Spanish, but I think I'd have to change my OS language and reinstall the drivers to show it in English.
This workaround results in peaks of about 155-160 FPS with 39% GPU utilization at 2025 MHz. The GPU frequency usually dips below that, and there seems to be a "ghost" framerate limiter, so the game hovers around 150±5 FPS; perhaps it's a frequency-related bottleneck in the new rendering engine. The framerate is set to unlimited, as you can see in the F3 overlay at the top left. The illusion breaks when you look around: the GPU goes haywire and downclocks for no apparent reason other than refreshing the frame buffer (and probably the texture buffer too), and the framerate dips well below the 60 FPS mark, which is quite a painful gap between the top and bottom of the range during actual play. In 1.14.4 it dips from 200 FPS to around 120 during normal gameplay, or 100 FPS if I really try hard to spin around with the mouse.
I wanted to try a heavier graphics load, so I loaded my multiplayer server on snapshot 19w34a (the last one before the new rendering engine), stood still for a minute until everything loaded in, then took a screenshot: 164 FPS, GPU at 1815 MHz, 40% utilization.
After that, I updated the same server to 1.15-pre1 and tried to apply my NVIDIA Control Panel workaround to get the GPU on the client PC to boost. Unfortunately, for whatever reason, it won't work even after restarting the PC and reapplying the settings multiple times. I guess the game is happy to provide over 30 FPS and doesn't bother asking the GPU for more resources. I waited 5 minutes to see if anything changed, but no: 76 FPS, GPU at 1095 MHz and idling at 23% usage.
So, when you're actually playing the game on a multiplayer server, you can expect the FPS to at least halve going from the old rendering engine to the new one. In this particular instance, dropping from 164 to 76 FPS means a [1 - (76/164)] x 100 ≈ 53.7% performance drop in static conditions. I wanted to test dynamic conditions by logging in and recording a minecart ride facing the same spot, but unfortunately the game crashed on me several times while moving around the world, so I eventually gave up.
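The drop percentages quoted in this report all follow from the same simple formula; here is a minimal sketch that reproduces them (the FPS figures are the ones measured above):

```python
def perf_drop(before_fps: float, after_fps: float) -> float:
    """Percentage performance drop going from before_fps to after_fps."""
    return (1 - after_fps / before_fps) * 100

# Single-player baseline: 1.14.4 (200 FPS) vs 1.15-pre1 (110 FPS), no tweaks.
print(round(perf_drop(200, 110), 1))  # 45.0

# Multiplayer server: 19w34a (164 FPS) vs 1.15-pre1 (76 FPS), workaround failing.
print(round(perf_drop(164, 76), 1))   # 53.7

# Best case with the driver workaround applied: 200 -> 150 FPS.
print(round(perf_drop(200, 150), 1))  # 25.0
```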
TL;DR: under the same benchmarking conditions, 1.14.4 gives 200 FPS; 1.15-pre1 gives 110 FPS if you're an average user, or 150 FPS if you know how to force your GPU as the OpenGL renderer and brute-force Maximum Performance (3D-clock mode) onto it, but then the game stutters badly the moment you actually look around and play instead of benchmarking. That's a 25% performance loss in the absolute best-case scenario. The workaround may or may not hold; in graphics-intensive scenarios it seems not to, and performance can tank from the 160s to the 70s FPS, a drop of over 50%.
GPU frequency can be observed with GPU-Z, HWiNFO64, MSI Afterburner, or RivaTuner Statistics Server (RTSS); there's plenty of material online if you search for any of those names to learn how your GPU is behaving and/or to toggle a real-time reporting overlay. It would be nice if Minecraft could retrieve this info itself and display it in the F3 menu; it would make issues like this more obvious in the future.
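For NVIDIA users who prefer the command line over the overlay tools above, the graphics clock and utilization can also be polled through `nvidia-smi`, which ships with the driver. The query fields below (`clocks.gr`, `utilization.gpu`) are real `nvidia-smi` options; the script itself is only an illustrative sketch, not anything the game provides:

```python
import shutil
import subprocess
from typing import Optional


def parse_gpu_csv(line: str) -> dict:
    """Parse one 'clocks.gr, utilization.gpu' CSV line produced by
    nvidia-smi with --format=csv,noheader,nounits into a small dict."""
    clock_mhz, util_pct = (field.strip() for field in line.split(","))
    return {"clock_mhz": int(clock_mhz), "util_pct": int(util_pct)}


def query_gpu() -> Optional[dict]:
    """Return the current graphics clock (MHz) and GPU utilization (%),
    or None when nvidia-smi is not installed (non-NVIDIA systems)."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=clocks.gr,utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()[0]
    return parse_gpu_csv(out)


if __name__ == "__main__":
    # Example line, matching the 2D-mode readings seen in 1.15-pre1.
    print(parse_gpu_csv("1395, 38"))  # {'clock_mhz': 1395, 'util_pct': 38}
    print(query_gpu())
```

Polling this in a loop while looking around in-game would make the downclocking described above directly visible in the log.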