Minecraft: Java Edition / MC-178208

Nether chunks resetting in 1.15.2


    Details

    • Type: Bug
    • Status: Resolved
    • Resolution: Duplicate
    • Affects Version/s: 1.15.2
    • Fix Version/s: None
    • Labels:
      None
    • Environment:
      OS: Linux (Debian 9 Stretch)
      Java: oracle-java8 (1.8.0_181 64-bit)
      System Memory: 16GB
      Minecraft server: Vanilla 1.15.2
      Minecraft server allocated memory: 4GB
    • Confirmation Status:
      Unconfirmed
    • Category:
      (Unassigned)

      Description

      The Issue

      About five days ago I started seeing the following error pop up in my multiplayer server's logs:

      [14:48:17] [chunk IO worker/WARN]: Failed to read chunk [-17, -5] java.util.zip.ZipException: invalid literal/lengths set at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
      ...

      (I've attached several days of our server log files for reference.)
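For anyone seeing the same ZipException: region (.mca) files store each chunk as a zlib-compressed NBT blob behind an offset table, so a corrupt chunk can be detected offline without starting the server. Below is a minimal sketch of that check (Python; `scan_region` is my own helper name, and this only approximates what the server's chunk reader does):

```python
import struct
import zlib

SECTOR = 4096

def scan_region(path):
    """Try to inflate every chunk record in an Anvil region (.mca) file.
    Returns the local (x, z) coords of chunks whose compressed payload
    fails to decompress -- the same failure the server logs as
    'Failed to read chunk'."""
    bad = []
    with open(path, "rb") as f:
        data = f.read()
    for i in range(1024):  # 32x32 chunk slots per region file
        entry = struct.unpack(">I", data[i * 4:i * 4 + 4])[0]
        offset, sectors = entry >> 8, entry & 0xFF
        if offset == 0 and sectors == 0:
            continue  # empty slot: chunk never generated
        start = offset * SECTOR
        (length,) = struct.unpack(">I", data[start:start + 4])
        compression = data[start + 4]
        payload = data[start + 5:start + 4 + length]
        try:
            if compression == 2:       # zlib, the on-disk default
                zlib.decompress(payload)
            elif compression == 1:     # gzip (rare)
                zlib.decompress(payload, zlib.MAX_WBITS | 16)
        except zlib.error:
            bad.append((i % 32, i // 32))
    return bad
```

Running this over a copy of the nether region files (not the live ones) should flag the same chunks the log complains about.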

      Whenever this happens, the chunk in question appears to be reset to default / regenerated in the nether. These are frequently-traveled chunks too, not remote chunks that may not have been loaded for a long time. For example, we have a network of tunnels in the nether ceiling that we noticed are randomly getting filled in again. This only occurs in the nether, not the Overworld or End as far as we've seen.

      Notably, many (but not all) of the errors occur around large chunk-loading events, such as respawning after death or using /tp. Several chunks have also been reset multiple times in this short period, which suggests some sort of pattern, though I don't know what it is.

      Server Background

      For some background, this server was started on version 1.12.2, and has since been updated to 1.13.2, 1.14.4, and now 1.15.2. Because nether terrain generation has changed since 1.12 (adding nether ravines), a few of the "regenerated" chunks have very obvious edges at the chunk borders, or exposed lava pockets that haven't received a block update and aren't flowing.

      The server runs from an SSD, not a ramdisk, and I'm using the vanilla server.

      What I've Tried So Far

      I checked my SSD's health to rule it out as the cause; it has 99% of its lifetime remaining. I haven't noticed any corrupt files elsewhere on the system, or in any regions besides the nether.

      The nether region files are all read/write accessible to the server user.

      This morning I restored our nether from a backup taken before the errors began, and haven't seen any chunk load errors since, but I'll keep an eye on it.
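Since I'm now watching for a recurrence, one way to pin any future corruption to a specific file and time is to periodically checksum the region files (the nether's live in world/DIM-1/region in a vanilla world). A sketch, with `snapshot_regions` being my own helper name; best run between autosaves so a file isn't read mid-write:

```python
import hashlib
import json
import pathlib

def snapshot_regions(region_dir, out_file):
    """Write a SHA-256 digest per .mca file in region_dir, so a later
    diff of the JSON snapshots shows exactly which region file changed
    and roughly when."""
    digests = {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(pathlib.Path(region_dir).glob("*.mca"))
    }
    pathlib.Path(out_file).write_text(json.dumps(digests, indent=2))
    return digests
```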

      Sorta-Related Issues

      This issue seems similar to MC-90109, but since this only occurs in the nether, I don't think it's a duplicate of MC-2548 (which was reported for the overworld).

      MC-152398 and MC-49008 are also very similar, but only occurred on a server crash or restart.

      MC-169027 had similar symptoms, but was caused by filesystem permission issues.  My nether region files are all user-writable and owned by the user running the server.

      List of Chunks that have Died So Far

      2020-04-06-1.log.gz:[20:50:19] [chunk IO worker/WARN]: Failed to read chunk [-6, -27]
      2020-04-06-1.log.gz:[20:59:03] [chunk IO worker/WARN]: Failed to read chunk [-6, -27]
      2020-04-06-1.log.gz:[21:55:25] [chunk IO worker/WARN]: Failed to read chunk [-6, -27]
      2020-04-06-1.log.gz:[22:16:03] [chunk IO worker/WARN]: Failed to read chunk [-6, -9]
      2020-04-06-1.log.gz:[22:19:32] [chunk IO worker/WARN]: Failed to read chunk [-9, -14]
      2020-04-07-1.log.gz:[16:22:01] [chunk IO worker/WARN]: Failed to read chunk [-10, -6]
      2020-04-07-1.log.gz:[20:37:52] [chunk IO worker/WARN]: Failed to read chunk [-5, -2]
      2020-04-07-1.log.gz:[21:19:58] [chunk IO worker/WARN]: Failed to read chunk [-6, -3]
      2020-04-07-1.log.gz:[21:56:38] [chunk IO worker/WARN]: Failed to read chunk [-8, -1]
      2020-04-08-1.log.gz:[11:20:40] [chunk IO worker/WARN]: Failed to read chunk [-2, -2]
      2020-04-08-1.log.gz:[15:49:05] [chunk IO worker/WARN]: Failed to read chunk [-3, -1]
      2020-04-08-1.log.gz:[18:32:19] [chunk IO worker/WARN]: Failed to read chunk [-2, -3]
      2020-04-09-1.log.gz:[14:48:17] [chunk IO worker/WARN]: Failed to read chunk [-17, -5]
      2020-04-09-1.log.gz:[15:16:09] [chunk IO worker/WARN]: Failed to read chunk [-4, -8]
      2020-04-09-1.log.gz:[16:01:43] [chunk IO worker/WARN]: Failed to read chunk [-10, -3]
      2020-04-09-1.log.gz:[17:11:16] [chunk IO worker/WARN]: Failed to read chunk [-7, -13]
      2020-04-09-1.log.gz:[17:26:38] [chunk IO worker/WARN]: Failed to read chunk [-8, -6]
      2020-04-09-1.log.gz:[20:49:10] [chunk IO worker/WARN]: Failed to read chunk [-4, -5]
      2020-04-09-1.log.gz:[21:35:38] [chunk IO worker/WARN]: Failed to read chunk [-8, -1]
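For reference, chunk coordinates map to region files by an arithmetic right shift of 5 (each r.<x>.<z>.mca file holds 32x32 chunks). A small sketch to tally the warnings above per region file (the regex and helper names are mine):

```python
import re
from collections import Counter

LOG_RE = re.compile(r"Failed to read chunk \[(-?\d+), (-?\d+)\]")

def region_for_chunk(cx, cz):
    # Region coord = chunk coord >> 5; Python's arithmetic shift floors
    # toward negative infinity, matching the r.<x>.<z>.mca naming.
    return f"r.{cx >> 5}.{cz >> 5}.mca"

def regions_from_log(lines):
    """Tally 'Failed to read chunk' warnings per region file."""
    hits = Counter()
    for line in lines:
        m = LOG_RE.search(line)
        if m:
            hits[region_for_chunk(int(m.group(1)), int(m.group(2)))] += 1
    return hits
```

If I've applied the mapping correctly, every chunk in the list above lands in r.-1.-1.mca, which at least narrows the corruption to a single file.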

        Attachments

        1. 2020-04-06-1.log
          16 kB
        2. 2020-04-07-1.log
          24 kB
        3. 2020-04-08-1.log
          18 kB
        4. 2020-04-09_21.17.25.png
          1.71 MB
        5. 2020-04-09_21.17.27.png
          1.69 MB
        6. 2020-04-09-1.log
          28 kB
        7. 2020-04-10_14.19.52.png
          335 kB

              People

               Assignee:
               Unassigned
               Reporter:
               MageLuingil (Daniel Matthies)
               Votes:
               0
               Watchers:
               0
