Minecraft: Java Edition
MC-242385

Inconsistency in the buffer size calculation in the chunk packet data


    • Type: Bug
    • Resolution: Fixed
    • Fix Version: 23w35a
    • Affects Versions: 1.18 Pre-release 5, 1.18 Pre-release 6, 1.18 Pre-release 7, 1.18 Pre-release 8, 1.18 Release Candidate 1, 1.18 Release Candidate 2, 1.18 Release Candidate 3, 1.18 Release Candidate 4, 1.18, 1.18.1 Pre-release 1, 1.18.1 Release Candidate 1, 1.18.1 Release Candidate 2, 1.18.1 Release Candidate 3, 1.18.1, 1.19.4, 23w12a, 1.20.1
    • Confirmation Status: Community Consensus
    • Category: Networking
    • Mojang Priority: Low
    • Area: Platform

      There seems to be an inconsistency in the calculation of the buffer size for chunk packets. More specifically, this relates to the following two lines of ClientboundLevelChunkPacketData:

      public ClientboundLevelChunkPacketData(LevelChunk chunk) {
      
          /* snip */
      
          this.buffer = new byte[calculateChunkSize(chunk)]; // buffer sized up front from the reported serialized sizes
          extractChunkData(new FriendlyByteBuf(this.getWriteBuffer()), chunk); // actual data written into that buffer
      
          /* snip */
      
      }
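
      For context, calculateChunkSize sums the serialized sizes reported by the chunk's sections, which ultimately come from the getSerializedSize implementations discussed below. Roughly (a simplified sketch of the decompiled method, so treat the exact shape as an approximation):

      private static int calculateChunkSize(LevelChunk chunk) {
          int size = 0;
          for (LevelChunkSection section : chunk.getSections()) {
              // Each section's size is built from its palettes' getSerializedSize().
              size += section.getSerializedSize();
          }
          return size;
      }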
      

      At the time of writing, the calculation sometimes results in a buffer that is larger than the data written to it, generally by a margin of 12–16 bytes on a default vanilla map (see attached image).
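
      The mismatch can be observed by comparing the allocated buffer against the writer index once the data has been extracted. The diagnostic below is a purely illustrative variant of the constructor body shown above, not part of vanilla; it assumes getWriteBuffer() wraps this.buffer with the writer index at zero, as in the decompiled code:

      this.buffer = new byte[calculateChunkSize(chunk)];
      FriendlyByteBuf buf = new FriendlyByteBuf(this.getWriteBuffer());
      extractChunkData(buf, chunk);
      // On a default vanilla map the difference is usually 12-16 bytes.
      System.out.println("allocated=" + this.buffer.length
              + ", written=" + buf.writerIndex()
              + ", wasted=" + (this.buffer.length - buf.writerIndex()));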

      After some digging, I narrowed the problem down to two specific locations. The first one is in GlobalPalette.

      public void read(FriendlyByteBuf buf) {
          // nothing is read for the global palette
      }

      public void write(FriendlyByteBuf buf) {
          // nothing is written for the global palette
      }

      public int getSerializedSize() {
          // but the reported size is still 1 byte (a VarInt-encoded 0)
          return FriendlyByteBuf.getVarIntSize(0);
      }
      

      Although nothing is read or written, getSerializedSize still returns the size of a VarInt-encoded zero, which is 1 byte.
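
      For context, the protocol's VarInt encoding stores 7 bits of payload per byte, so even a value of 0 takes one byte. A rough re-implementation of the size helper, for illustration only (the real method lives in FriendlyByteBuf and its exact body here is an assumption):

      public static int getVarIntSize(int value) {
          // Count how many 7-bit groups are needed to hold the value.
          for (int bytes = 1; bytes < 5; ++bytes) {
              if ((value & (-1 << (bytes * 7))) == 0) {
                  return bytes;
              }
          }
          return 5; // an int never needs more than 5 VarInt bytes
      }

      Hence getVarIntSize(0) == 1, getVarIntSize(64) == 1 and getVarIntSize(4096) == 2, which is where the allocation differences described below come from.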

      The second one is in PalettedContainer.Data.

      public int getSerializedSize() {
          return 1 + this.palette.getSerializedSize()
                  + FriendlyByteBuf.getVarIntSize(this.storage.getSize()) // length prefix sized from getSize()...
                  + this.storage.getRaw().length * 8;
      }

      public void write(FriendlyByteBuf buf) {
          buf.writeByte(this.storage.getBits());
          this.palette.write(buf);
          buf.writeLongArray(this.storage.getRaw()); // ...but written with a prefix of getRaw().length
      }
      

      The write method encodes the bit count, the palette, and the raw data array prefixed with the array's length. However, getSerializedSize calculates that length prefix from storage.getSize() instead of from the raw data array's length. getSize() doesn't reflect the actual length of the raw array; it is always either 4096 or 64, the number of entries in a section's block state or biome container respectively.
      Since 4096 always encodes as a 2-byte VarInt and 64 as a single byte, this always allocates 2 bytes for the length prefix of the block states' raw data array, even when its actual length encodes to a single byte. Analogously, it always allocates only 1 byte for the biomes' raw data array, even when its actual length encodes to more.
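
      A consistent calculation would size the length prefix from the same value that write actually encodes. A minimal sketch of what that could look like (illustrative only, and not necessarily the change that eventually shipped):

      public int getSerializedSize() {
          long[] raw = this.storage.getRaw();
          // Size the VarInt prefix from the array length that writeLongArray will encode.
          return 1 + this.palette.getSerializedSize()
                  + FriendlyByteBuf.getVarIntSize(raw.length)
                  + raw.length * 8;
      }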

      Because the client makes no attempt to validate that all the bytes from the buffer were read after receiving the chunk packet, this goes unnoticed: the buffer always seems to be larger than the data actually read. Without such validation I also can't tell whether this is intentional, but I thought I should report it anyway.
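
      For illustration only, a check like the following on the reading side would surface the mismatch immediately; nothing like it exists in vanilla, and the helper name is made up:

      // Hypothetical helper: fail loudly if the chunk data buffer still has
      // unread bytes once all sections and block entities have been read.
      private static void checkFullyRead(FriendlyByteBuf chunkData) {
          if (chunkData.readableBytes() > 0) {
              throw new IllegalStateException(
                      chunkData.readableBytes() + " unread bytes left in chunk data buffer");
          }
      }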

            Assignee: panda4994 ([Mojang] Panda)
            Reporter: WinX64
            Votes: 10
            Watchers: 6
