As a result of the fix for MC-201769, the depth and size of the resulting NBT are now estimated before each NBT modification by calculating the depths and sizes of the target and source NBTs. If the estimated depth is too deep (> 512) or the estimated size is too large (> 2097152; whether this counts bits or bytes is unclear), the exception ERROR_DATA_TOO_DEEP or ERROR_DATA_TOO_LARGE is thrown, respectively.
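To make the check concrete, here is a minimal sketch of how such a pre-modification estimate could work. The class and method names are illustrative, not Mojang's, and the depth estimate shown (summing target and source depths) is a rough over-approximation of the resulting tag's depth:

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the limit check introduced by the MC-201769 fix:
// walk the tag tree and reject modifications whose result would exceed the
// vanilla limits. Tags are modeled loosely: Map = compound, List = list,
// anything else = leaf.
public class NbtLimits {
    static final int MAX_DEPTH = 512;
    static final long MAX_SIZE = 2_097_152; // limit value; unit (bits vs bytes) unclear

    static int depth(Object tag) {
        int max = 0;
        if (tag instanceof Map<?, ?> compound) {
            for (Object child : compound.values()) max = Math.max(max, depth(child));
        } else if (tag instanceof List<?> list) {
            for (Object child : list) max = Math.max(max, depth(child));
        }
        return max + 1; // a leaf contributes depth 1
    }

    // Deep size: every node is visited exactly once, so this walk is
    // proportional to the total number of tags in the tree.
    static long size(Object tag) {
        long total = 1;
        if (tag instanceof Map<?, ?> compound) {
            for (Object child : compound.values()) total += size(child);
        } else if (tag instanceof List<?> list) {
            for (Object child : list) total += size(child);
        }
        return total;
    }

    static void checkModification(Object target, Object source) {
        if (depth(target) + depth(source) > MAX_DEPTH)
            throw new IllegalStateException("ERROR_DATA_TOO_DEEP");
        if (size(target) + size(source) > MAX_SIZE)
            throw new IllegalStateException("ERROR_DATA_TOO_LARGE");
    }
}
```

The key point is that both `depth` and `size` recurse over the entire tree on every call, which is exactly what makes the check expensive for large tags.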
However, this approach comes at a cost. Since the depth/size calculation recurses all the way down to the leaf NBTs, the time complexity of an NBT modification is now proportional to the sum of the deep sizes of the target and source NBTs. This makes NBT modification prohibitively slow for large NBTs.
Using my benchmark harness, I benchmarked the following functions in 1.19.3 Pre-release 2 and 1.19.3 Pre-release 3. Each function repeatedly appends the value 0 to, and then removes it from, a list tag, with the list's element count varied across runs. Note that resizing of the internal ArrayList does not affect the results: if resizing happens at all, it happens only once, during the warm-up phase, and never during the measurement phase.
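The benchmarked functions are of roughly the following shape (a hedged reconstruction; the storage namespace and list name are illustrative):

```mcfunction
# Append the value 0 to the list, then remove it again,
# so the list returns to its original size on every iteration.
data modify storage bench:main list append value 0
data remove storage bench:main list[-1]
```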
As the results suggest, in 1.19.3 Pre-release 2 we could append an element to a list tag in constant time, no matter how many elements it already held. In 1.19.3 Pre-release 3, by contrast, the cost of an NBT modification grows linearly with the size of the target NBT: appending an element to a list tag in a storage is no longer a constant-time operation.
In general, the larger the total size of the storage being modified, the higher the cost of modifying any NBT within it.
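The consequence can be illustrated with a small cost model. The sketch below (my own simplified model, not Mojang's code) charges each append the cost of a full size scan of the target, mirroring the pre-modification check; the per-append cost then grows linearly with the current list length, so a sequence of N appends does quadratic total work:

```java
import java.util.ArrayList;
import java.util.List;

public class AppendCost {
    // Model: each append first scans the whole target to estimate its size
    // (as the MC-201769 fix does), then performs the actual append.
    // Returns the number of elements visited by the pre-check.
    static long appendWithSizeCheck(List<Integer> list, int value) {
        long visited = 0;
        for (int ignored : list) visited++; // stands in for the recursive size walk
        list.add(value);
        return visited;
    }

    public static void main(String[] args) {
        List<Integer> tag = new ArrayList<>();
        long totalWork = 0;
        for (int i = 0; i < 10_000; i++) {
            totalWork += appendWithSizeCheck(tag, 0);
        }
        // Total pre-check work for N appends is 0 + 1 + ... + (N-1) = N(N-1)/2,
        // i.e. quadratic in N, whereas the appends alone would be linear.
        System.out.println(totalWork);
    }
}
```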