Minecraft: Java Edition / MC-34464

Performance Inefficiency Caused by Unnecessary Reads and Unbuffered IO on Region Files



    • Type: Bug
    • Status: Resolved
    • Resolution: Incomplete
    • Affects Version/s: Minecraft 1.5.2, Minecraft 1.6.2, Minecraft 1.6.4, Minecraft 1.7.4, Minecraft 1.7.6, Minecraft 1.7.10, Minecraft 1.8-pre3
    • Fix Version/s: None
    • Environment: Agnostic to Environment, but Java SE 6 and Java SE 7, Windows 8, 64-Bit
    • Confirmation Status: Unconfirmed
    • Game Mode: Survival


      While looking at the file format for the Region files, I noticed that the way they are accessed is somewhat inefficient. Essentially, each Region file is currently opened as a RandomAccessFile, and data is written to and read from it directly. By itself this would be so slow that Minecraft would be unplayable, but it is saved by the fact that a custom ChunkBuffer consolidates writes on the output side, and that chunks are read manually into buffers before being passed on to InflaterInputStreams and the like. There are a few problems with this approach, however:
      1. Any time a chunk is requested with getChunkDataInputStream, the file is seeked to the correct offset and the entire chunk is read (a slow, blocking disk IO) through the RandomAccessFile into a byte array, then into another specially made buffer that is read by the InflaterInputStream, and then into yet another buffer before it becomes actually usable data. This is slow.
      2. Direct reads and writes of a few bytes at a time are very slow, because RandomAccessFile does no buffering of its own: every call goes straight to the OS. These accesses are small overall, but every little bit helps.
      3. It’s not very thread safe. Other ways of reading the file (such as memory-mapped IO) could be thread safe between chunks; Java even allows locking part of a file.
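      The read path in point 1 could be sketched roughly as follows. This is a hypothetical, simplified layout (a 4-byte length prefix followed by a zlib-deflated payload), not Minecraft's actual region format, and all class and method names are made up for illustration; the point is how many times the data is copied between seek and usable bytes.

```java
import java.io.*;
import java.util.zip.*;

// Illustrative sketch only: layout and names are assumptions, not
// Minecraft's actual code.
public class RegionReadSketch {

    // Write one "chunk" at the given offset: length prefix + deflated payload.
    public static void writeChunk(File file, long offset, byte[] payload) throws IOException {
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (DeflaterOutputStream def = new DeflaterOutputStream(compressed)) {
            def.write(payload);
        }
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
            raf.seek(offset);
            raf.writeInt(compressed.size());
            raf.write(compressed.toByteArray());
        }
    }

    // Read it back the way the report describes: every step copies the data.
    public static byte[] readChunk(File file, long offset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
            raf.seek(offset);                       // blocking disk seek
            int length = raf.readInt();             // small unbuffered read
            byte[] compressedData = new byte[length];
            raf.readFully(compressedData);          // copy #1: disk -> byte[]
            try (InflaterInputStream in = new InflaterInputStream(
                    new ByteArrayInputStream(compressedData))) {  // copy #2: wrapped buffer
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[4096];
                for (int n; (n = in.read(buf)) != -1; ) {
                    out.write(buf, 0, n);           // copy #3: inflater -> output
                }
                return out.toByteArray();
            }
        }
    }
}
```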

      How to Reproduce:
      It’s present in any gameplay that uses the Region file format.

      Possible Solutions:
      Probably the best solution would be to memory-map the file directly. Overhead from the mapping would be minimal because the files are usually in the ~4 MB range and always more than 16 kB, which is about ideal for this. In addition, because chunks of the file are often revisited, they could usually stay in memory, providing a performance improvement similar to what people see when using a RAMDisk. The access pattern (roughly 4 kB chunks, jumping around a lot) is also ideal for a memory-mapped IO approach. Essentially, the idea would be to memory-map the channel of the RandomAccessFile, then pass this view directly into the InflaterInputStream, removing any copying and letting the OS handle buffering.
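      The memory-mapped idea might look roughly like this. Everything here is an assumption-laden sketch: the on-disk layout (4-byte length plus deflated payload), the ByteBufferInputStream adapter, and all names are illustrative rather than Minecraft's actual code. The key point is that the compressed bytes are inflated straight out of the OS-managed mapping, with no intermediate byte[] copy of the compressed data.

```java
import java.io.*;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Hypothetical sketch of the suggested approach; layout and names are
// assumptions, not Minecraft's actual code.
public class MappedRegionSketch {

    // Minimal adapter so an InflaterInputStream can read from a ByteBuffer view.
    static final class ByteBufferInputStream extends InputStream {
        private final ByteBuffer buf;
        ByteBufferInputStream(ByteBuffer buf) { this.buf = buf; }
        @Override public int read() {
            return buf.hasRemaining() ? (buf.get() & 0xFF) : -1;
        }
        @Override public int read(byte[] b, int off, int len) {
            if (!buf.hasRemaining()) return -1;
            int n = Math.min(len, buf.remaining());
            buf.get(b, off, n);
            return n;
        }
    }

    // Write one "chunk": length prefix + deflated payload (same toy layout as above).
    public static void writeChunk(File file, int offset, byte[] payload) throws IOException {
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        try (DeflaterOutputStream def = new DeflaterOutputStream(compressed)) {
            def.write(payload);
        }
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
            raf.seek(offset);
            raf.writeInt(compressed.size());
            raf.write(compressed.toByteArray());
        }
    }

    // Inflate a chunk directly from a slice of the mapping: no byte[] copy
    // of the compressed data, and the OS page cache does the buffering.
    public static byte[] readChunk(File file, int offset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "r");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            map.position(offset);
            int length = map.getInt();            // big-endian, matching writeInt
            ByteBuffer view = map.slice();        // zero-copy view of the compressed chunk
            view.limit(length);
            try (InflaterInputStream in =
                     new InflaterInputStream(new ByteBufferInputStream(view))) {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[4096];
                for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
                return out.toByteArray();
            }
        }
    }
}
```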

      Memory mapped files are also thread safe and can be partitioned and locked in sections using the FileChannel.
      There should not be any problem with memory mapping multiple Region files into the same address space, even on 32-bit systems, as long as no more than a couple hundred are mapped at a time.
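      Locking part of a file via the FileChannel could be sketched like this (the offsets, sizes, and helper name are hypothetical). One caveat worth noting: java.nio file locks are held on behalf of the entire JVM and coordinate between processes, so threads within one JVM would still need their own synchronization on top of this.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

// Hypothetical sketch: lock just the byte range of one chunk while working
// on it, leaving the rest of the region file available. FileLock is a
// process-level mechanism; overlapping requests from the same JVM throw
// OverlappingFileLockException.
public class RegionLockSketch {
    public static void withChunkLock(File file, long offset, long size, Runnable work)
            throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel ch = raf.getChannel()) {
            FileLock lock = ch.lock(offset, size, false); // exclusive lock on one chunk's range
            try {
                work.run();   // read or write just this chunk
            } finally {
                lock.release();
            }
        }
    }
}
```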
      Here is a benchmark and some implementations comparing memory-mapped IO with other approaches.
      I’m not sure how much this would improve performance, but it would almost definitely be significant.

      Other solutions include caching the uncompressed chunk data or changing the file save system completely (which I have seen rumors of), but even if both of those are done, basing them on memory-mapped IO would probably still speed things up.


              Assignee: Unassigned
              Reporter: Jeremy Hunt (jeremydeath)
              Votes: 3
              Watchers: 4