....
File f = new File("USA.txt");
FileInputStream fin = new FileInputStream(f);
FileChannel fc = fin.getChannel();
long size = fc.size();
ByteBuffer bb = ByteBuffer.allocate(2048);
long start = System.currentTimeMillis();
while (fc.read(bb) > 0) {
    bb.flip();
    // only the bytes actually read this pass belong in the string,
    // not the full 2048-byte backing array
    String s = new String(bb.array(), 0, bb.limit());
    bb.clear(); // reset position/limit for the next read; otherwise the
                // flipped limit throttles every read after a partial one
}
long finish = System.currentTimeMillis();
fc.close();
.....
Averaged over about 100 runs, the two approaches stack up pretty evenly when reading a file of about 3 MB.
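For reference, the BufferedReader side of the comparison isn't shown above. It was presumably something along these lines; a minimal sketch, assuming line-by-line reads against the same file, with the same timing bracket as the NIO snippet:
....
File f = new File("USA.txt");
BufferedReader br = new BufferedReader(new FileReader(f));
long start = System.currentTimeMillis();
String line;
while ((line = br.readLine()) != null) {
    // each line becomes a String, mirroring the String creation in the NIO loop
}
long finish = System.currentTimeMillis();
br.close();
.....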
Note that NIO is slightly slower for the first few runs, then hits a step and speeds up. Perhaps, being a lot closer to the OS, you can actually see the file's contents move through the different cache layers as it gets read repeatedly. I'm not sure.
This wasn't what I had expected. Perhaps the overhead of string creation was dwarfing the real gains from NIO. Or perhaps NIO performs better when the files being ingested are larger. So, I ran the benchmarks again with a file of about 200 MB.
NIO saves us 1 second over BufferedReader, which finishes in approximately 5 seconds. Depending on how you frame it, BufferedReader costs 25% more, or NIO saves 20%. Regardless, NIO is faster. But it is a little more complex too.
I don't know that the speed boost (specifically for file I/O) warrants a switch to NIO on its own. File locking sure seems nice, and it would be interesting to try NIO over a network to see if things change more dramatically there. Another point to keep in mind is that I used the allocate method in my NIO code, not allocateDirect, which can optimize even further. In my scenario, however, allocateDirect did not execute without a runtime exception.
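For what it's worth, that exception was most likely because a direct buffer has no accessible backing array (hasArray() returns false), so the bb.array() call in my loop throws UnsupportedOperationException. Here is a sketch of how the direct-buffer variant of the loop might look instead, copying bytes out with get(); this is my guess at a workaround, not something I benchmarked:
....
ByteBuffer bb = ByteBuffer.allocateDirect(2048); // direct buffer: no backing array
byte[] chunk = new byte[2048];
while (fc.read(bb) > 0) {
    bb.flip();
    int n = bb.remaining();
    bb.get(chunk, 0, n);               // copy out; bb.array() would throw here
    String s = new String(chunk, 0, n);
    bb.clear();
}
.....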
To sum things up, NIO ran faster, and the larger the file, the greater the impact. For small files, though, NIO could quite easily be overkill, both on the performance end and the code-maintenance end.