I am running a very simple test:
- take a large file filled with random binary data, about 6 GB in size
- the algorithm runs a loop "SeekCount" times
- each iteration does the following:
  - compute a random offset within the file size
  - seek to that offset
  - read a small block of data
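The steps above can be sketched as a minimal, portable loop (standard C++ rather than the Windows-specific code further down; the file name, iteration count, and the `run_seek_test` helper are placeholders for illustration):

```cpp
#include <cstdio>
#include <cstdlib>

// Sketch of the benchmark loop: random offset -> seek -> read one byte.
// Returns the number of successful 1-byte reads, or -1 if the file
// could not be opened.
int run_seek_test(const char* path, int seeks) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return -1;

    // Determine the file size by seeking to the end.
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);

    int ok = 0;
    char buf;
    for (int i = 0; i < seeks; ++i) {
        // Random offset in [0, size - 1].
        long offset = static_cast<long>(
            (std::rand() / (double)RAND_MAX) * (size - 1));
        std::fseek(f, offset, SEEK_SET);
        ok += (int)std::fread(&buf, 1, 1, f);  // count successful reads
    }
    std::fclose(f);
    return ok;
}
```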
C# code:
public static void Test()
{
    string fileName = @"c:\Test\big_data.dat";
    int NumberOfSeeks = 1000;
    int MaxNumberOfBytes = 1;
    long fileLength = new FileInfo(fileName).Length;

    FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read, 65536, FileOptions.RandomAccess);
    Console.WriteLine("Processing file \"{0}\"", fileName);

    Random random = new Random();
    DateTime start = DateTime.Now;
    byte[] byteArray = new byte[MaxNumberOfBytes];

    for (int index = 0; index < NumberOfSeeks; ++index)
    {
        long offset = (long)(random.NextDouble() * (fileLength - MaxNumberOfBytes - 2));
        stream.Seek(offset, SeekOrigin.Begin);
        stream.Read(byteArray, 0, MaxNumberOfBytes);
    }

    Console.WriteLine(
        "Total processing time {0} ms, speed {1} seeks/sec\r\n",
        DateTime.Now.Subtract(start).TotalMilliseconds,
        NumberOfSeeks / (DateTime.Now.Subtract(start).TotalMilliseconds / 1000.0));

    stream.Close();
}
Then the same test in C++:
void test()
{
    FILE* file = fopen("c:\\Test\\big_data.dat", "rb");
    char buf = 0;
    __int64 fileSize = 6216672671; // ftell(file);
    __int64 pos;

    DWORD dwStart = GetTickCount();
    for (int i = 0; i < kTimes; ++i)
    {
        pos = (rand() % 100) * 0.01 * fileSize;
        _fseeki64(file, pos, SEEK_SET);
        fread((void*)&buf, 1, 1, file);
    }
    DWORD dwEnd = GetTickCount() - dwStart;

    printf(" - Raw Reading: %d times reading took %d ticks, e.g %d sec. Speed: %d items/sec\n",
           kTimes, dwEnd, dwEnd / CLOCKS_PER_SEC, kTimes / (dwEnd / CLOCKS_PER_SEC));
    fclose(file);
}
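As a side note on the two loops: they do not generate offsets the same way. The C++ expression `(rand() % 100) * 0.01 * fileSize` can be checked in isolation (a small standalone sketch, not part of the original test; `distinct_positions` is a hypothetical helper):

```cpp
#include <cstdlib>
#include <set>

// Count the distinct file positions produced by the C++ test's
// offset formula: (rand() % 100) * 0.01 * fileSize.
// Because rand() % 100 has only 100 possible values, the formula
// can never produce more than 100 distinct positions.
std::set<long long> distinct_positions(long long fileSize, int iterations) {
    std::set<long long> positions;
    for (int i = 0; i < iterations; ++i) {
        long long pos = (long long)((std::rand() % 100) * 0.01 * fileSize);
        positions.insert(pos);
    }
    return positions;
}
```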
Results:
- C#: 100-200 reads/sec
- C++: 250,000 reads/sec
Question:
Why is C++ thousands of times faster than C# at such a trivial operation as reading from a file?
Additional information:
- I played with the stream buffers and set them to the same size (4 KB)
- the disk is defragmented (0% fragmentation)
- OS/machine configuration: Windows 7, NTFS, a fairly recent 500 GB HDD (a WD, if I remember correctly), 8 GB RAM (barely used), 4-core CPU (utilization near zero)
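For reference, on the C++ side a 4 KB stdio buffer would be set with `setvbuf` (a sketch, assuming that was the mechanism used; on the C# side the buffer size is the `FileStream` constructor argument, 4096 instead of the 65536 shown above):

```cpp
#include <cstdio>

// Give a stdio stream an explicit 4 KB fully-buffered cache.
// setvbuf must be called after fopen but before any I/O on the stream;
// returns true on success.
bool set_4k_buffer(std::FILE* f) {
    return std::setvbuf(f, nullptr, _IOFBF, 4096) == 0;
}
```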