UANOG
December 2010
We are comparing the following drives.
The quoted (AT>) results below are from an
OWC Mercury Extreme Pro SSD with a SandForce 1200 controller;
the other drive is a Kingston with, I believe, a JMicron
controller (Kingston will not say exactly which).
First, tests without a filesystem.
OWC
# seeker /dev/sdd1 128 180
Seeker v3.0+Fedora, 2009-06-17,
http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdd1 [78156162 blocks, 40015954944 bytes,
37 GB, 38162 MB, 40 GiB, 40015 MiB]
[512 logical sector size, 512 physical sector size]
[128 threads]
Wait 180 seconds ...
Results: 4775 seeks/second, 0.209 ms random access time
(56320 < offsets < 40015925760)
Kingston
# seeker /dev/sdd1 128 180
Seeker v3.0+Fedora, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sdd1 [125043376 blocks, 64022208512 bytes, 59 GB, 61056 MB, 64 GiB, 64022 MiB]
[512 logical sector size, 512 physical sector size]
[128 threads]
Wait 180 seconds ...
Results: 3729 seeks/second, 0.268 ms random access time (103424 < offsets < 64022127616)
The SandForce drive is faster and has a lower access time.
Next, tests with a filesystem.
The FS in both tests is ext4 without a journal.
For the OWC I also have tests with other filesystems, not
included here; the one shown proved the fastest under Linux.
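The access times seeker reports are simply the inverse of the seek rates; a quick sanity check on the two runs above (the 4775 and 3729 figures come from the seeker output):

```shell
# ms of random access per seek = 1000 / (seeks per second)
awk 'BEGIN { printf "OWC: %.3f ms  Kingston: %.3f ms\n", 1000/4775, 1000/3729 }'
```

This reproduces the 0.209 ms and 0.268 ms figures printed by seeker itself.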
AT> Summary for SSDs with a SandForce controller:
AT> They are suitable for servers.
AT> There is no performance degradation of the kind seen
AT> on cheap (consumer) SSDs.
The same holds for the Kingston: it is suitable for servers,
and there is no degradation of the kind seen on cheap
(consumer) SSDs.
BUT!!! its performance is SEVERAL TIMES worse than the SandForce's.
And that is with the Kingston deliberately formatted with the
recommended 4k block size:
mkfs.ext4 -b 4096 -m0 -i 32768 -T ext4ssd /dev/sdd1
The ext4ssd type is ext4 with the journal disabled.
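For reference, the same filesystem can be made without a custom -T type; the line below spells out what the ext4ssd stanza (a local addition to /etc/mke2fs.conf, not a stock type) presumably expands to. The device path is taken from the command above, and the exact feature set is an assumption:

```shell
# Equivalent explicit invocation: 4k blocks, no reserved space,
# one inode per 32 KiB, and the journal disabled outright
mkfs.ext4 -b 4096 -m 0 -i 32768 -O ^has_journal /dev/sdd1
```

Disabling the journal (-O ^has_journal) is what makes this "ext4 without a journal".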
# dbench -D /mnt/2 128; tiobench --numruns 3 --dir /mnt/2 --threads 1; tiobench --numruns 3 --dir /mnt/2 --threads 8; tiobench --numruns 3 --dir /mnt/2 --threads 32; tiobench --numruns 3 --dir /mnt/2 --threads 128; tiobench --block 16384 --numruns 3 --dir /mnt/2 --threads 128
Operation Count AvgLat MaxLat
----------------------------------------
NTCreateX 1718720 0.175 1143.580
Close 1261840 0.011 48.071
Rename 72741 2.273 2227.605
Unlink 347536 3.895 2410.606
Qpathinfo 1556345 0.144 1442.756
Qfileinfo 271069 0.005 35.046
Qfsinfo 285580 0.062 52.894
Sfileinfo 139906 0.047 34.169
Find 601606 0.230 836.155
WriteX 847619 82.341 22907.235
ReadX 2693276 0.078 391.332
LockX 5584 0.011 1.608
UnlockX 5584 0.011 4.148
Flush 120366 38.947 2365.874
Throughput 89.134 MB/sec 128 clients 128 procs max_latency=22907.241 ms
AT> Operation Count AvgLat MaxLat
AT> ----------------------------------------
AT> NTCreateX 6575096 1.219 553.789
AT> Close 4827860 0.092 404.058
AT> Rename 278698 1.790 468.432
AT> Unlink 1329055 2.243 469.951
AT> Qpathinfo 5964493 0.662 518.153
AT> Qfileinfo 1039820 0.032 255.713
AT> Qfsinfo 1093201 0.466 374.434
AT> Sfileinfo 535934 0.230 339.968
AT> Find 2304503 1.656 569.721
AT> WriteX 3248878 12.463 3825.347
AT> ReadX 10316311 0.097 473.059
AT> LockX 21440 0.020 54.085
AT> UnlockX 21440 0.024 51.432
AT> Flush 460886 27.229 506.487
AT> Throughput 341.39 MB/sec 128 clients 128 procs max_latency=3825.354 ms
This test is simply devastating. Admittedly, on THIS filesystem
the SandForce run made my machine "stutter" while playing music;
on the Kingston the CPU was also loaded fairly heavily, but the
machine stayed reasonably responsive.
Incidentally, an iostat -k 1 left running in parallel in both
cases showed ~4k tps with peaks up to 5k for the OWC, and about
800 with peaks up to 1.6k for the Kingston; telling as well.
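A running iostat -k 1 prints one line per device per interval; an awk filter like the one below (device name sdd assumed, tps being the second column in sysstat's device report) isolates the figure being eyeballed:

```shell
# Keep only the tps column for the SSD under test
iostat -k 1 | awk '$1 == "sdd" { print $2 }'
```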
No size specified, using 2000 MB
Run #3: /usr/bin/tiotest -t 1 -f 2000 -r 4000 -b 4096 -d /mnt/2 -T
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Sequential Reads
2.6.36-1.fc15.i686 2000 4096 1 137.28 38.75% 0.083 17.96 0.00000 0.00000 354
Random Reads
2.6.36-1.fc15.i686 2000 4096 1 10.89 11.91% 1.068 8.09 0.00000 0.00000 91
Sequential Writes
2.6.36-1.fc15.i686 2000 4096 1 45.82 15.42% 0.249 4380.25 0.00000 0.00000 297
Random Writes
2.6.36-1.fc15.i686 2000 4096 1 10.92 9.243% 0.031 0.18 0.00000 0.00000 118
AT> No size specified, using 2000 MB
AT> Run #3: /usr/bin/tiotest -t 1 -f 2000 -r 4000 -b 4096 -d /mnt/2 -T
AT> Unit information
AT> ================
AT> File size = megabytes
AT> Blk Size = bytes
AT> Rate = megabytes per second
AT> CPU% = percentage of CPU used during the test
AT> Latency = milliseconds
AT> Lat% = percent of requests that took longer than X seconds
AT> CPU Eff = Rate divided by CPU% - throughput per cpu load
AT> Sequential Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 1 185.74 39.06% 0.061 21.48 0.00000 0.00000 475
AT> Random Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 1 30.56 14.86% 0.376 3.25 0.00000 0.00000 206
AT> Sequential Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 1 186.13 55.86% 0.057 779.63 0.00000 0.00000 333
AT> Random Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 1 16.58 12.62% 0.023 0.10 0.00000 0.00000 131
Single-threaded reads and writes. The Kingston loses on every
metric, especially sustained writes (though on a server that
workload is rather rare).
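The CPU Eff column in these tables is simply Rate divided by CPU% (throughput per unit of CPU load). For example, for the two single-threaded sequential-read rows above (137.28 MB/s at 38.75% on the unquoted Kingston run, 185.74 MB/s at 39.06% on the quoted OWC run):

```shell
# CPU Eff = Rate / (CPU% / 100), truncated as tiobench prints it
awk 'BEGIN { printf "Kingston: %d  OWC: %d\n", 137.28/38.75*100, 185.74/39.06*100 }'
```

This reproduces the 354 and 475 in the tables' last column.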
No size specified, using 2000 MB
Run #3: /usr/bin/tiotest -t 8 -f 250 -r 500 -b 4096 -d /mnt/2 -T
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Sequential Reads
2.6.36-1.fc15.i686 2000 4096 8 116.83 182.8% 0.713 1300.18 0.00000 0.00000 64
Random Reads
2.6.36-1.fc15.i686 2000 4096 8 12.28 96.40% 6.297 606.12 0.00000 0.00000 13
Sequential Writes
2.6.36-1.fc15.i686 2000 4096 8 32.53 98.16% 2.714 16487.44 0.04922 0.00000 33
Random Writes
2.6.36-1.fc15.i686 2000 4096 8 6.44 33.15% 0.059 24.08 0.00000 0.00000 19
AT> No size specified, using 2000 MB
AT> Run #3: /usr/bin/tiotest -t 8 -f 250 -r 500 -b 4096 -d /mnt/2 -T
AT> Unit information
AT> ================
AT> File size = megabytes
AT> Blk Size = bytes
AT> Rate = megabytes per second
AT> CPU% = percentage of CPU used during the test
AT> Latency = milliseconds
AT> Lat% = percent of requests that took longer than X seconds
AT> CPU Eff = Rate divided by CPU% - throughput per cpu load
AT> Sequential Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 8 183.70 234.3% 0.488 1283.50 0.00000 0.00000 78
AT> Random Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 8 35.77 158.9% 2.128 261.58 0.00000 0.00000 23
AT> Sequential Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 8 189.72 521.3% 0.437 3192.64 0.00000 0.00000 36
AT> Random Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 8 15.59 97.90% 0.047 24.28 0.00000 0.00000 16
The SandForce clearly has more channels: with 8 threads there is
almost no performance degradation. The Kingston again disappoints,
especially on writes.
No size specified, using 2000 MB
Run #3: /usr/bin/tiotest -t 32 -f 62 -r 125 -b 4096 -d /mnt/2 -T
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Sequential Reads
2.6.36-1.fc15.i686 2000 4096 32 113.55 764.2% 2.865 3219.01 0.00000 0.00000 15
Random Reads
2.6.36-1.fc15.i686 2000 4096 32 12.22 229.2% 22.770 1861.47 0.00000 0.00000 5
Sequential Writes
2.6.36-1.fc15.i686 2000 4096 32 21.91 252.2% 14.825 57511.08 0.26028 0.02422 9
Random Writes
2.6.36-1.fc15.i686 2000 4096 32 7.22 119.8% 0.060 38.68 0.00000 0.00000 6
AT> No size specified, using 2000 MB
AT> Run #3: /usr/bin/tiotest -t 32 -f 62 -r 125 -b 4096 -d /mnt/2 -T
AT> Unit information
AT> ================
AT> File size = megabytes
AT> Blk Size = bytes
AT> Rate = megabytes per second
AT> CPU% = percentage of CPU used during the test
AT> Latency = milliseconds
AT> Lat% = percent of requests that took longer than X seconds
AT> CPU Eff = Rate divided by CPU% - throughput per cpu load
AT> Sequential Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 32 194.31 925.1% 1.687 3179.25 0.00000 0.00000 21
AT> Random Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 32 36.41 292.9% 6.292 1045.39 0.00000 0.00000 12
AT> Sequential Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 32 185.29 2048.% 1.664 8627.27 0.00395 0.00000 9
AT> Random Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 32 13.06 314.4% 0.051 41.95 0.00000 0.00000 4
No size specified, using 2000 MB
Run #3: /usr/bin/tiotest -t 128 -f 15 -r 31 -b 4096 -d /mnt/2 -T
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Sequential Reads
2.6.36-1.fc15.i686 2000 4096 128 90.60 2210.% 13.109 16256.86 0.05412 0.00000 4
Random Reads
2.6.36-1.fc15.i686 2000 4096 128 11.78 478.3% 77.988 3264.03 0.00000 0.00000 2
Sequential Writes
2.6.36-1.fc15.i686 2000 4096 128 18.44 794.8% 55.296 233249.28 0.52165 0.15035 2
Random Writes
2.6.36-1.fc15.i686 2000 4096 128 5.56 294.9% 0.348 1230.47 0.00000 0.00000 2
AT> No size specified, using 2000 MB
AT> Run #3: /usr/bin/tiotest -t 128 -f 15 -r 31 -b 4096 -d /mnt/2 -T
AT> Unit information
AT> ================
AT> File size = megabytes
AT> Blk Size = bytes
AT> Rate = megabytes per second
AT> CPU% = percentage of CPU used during the test
AT> Latency = milliseconds
AT> Lat% = percent of requests that took longer than X seconds
AT> CPU Eff = Rate divided by CPU% - throughput per cpu load
AT> Sequential Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 128 212.64 2386.% 4.188 20913.51 0.02828 0.00000 9
AT> Random Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 128 37.38 915.3% 19.787 1098.53 0.00000 0.00000 4
AT> Sequential Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 128 174.50 6801.% 5.716 28443.94 0.07202 0.00000 3
AT> Random Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 4096 128 14.87 1054.% 0.036 17.33 0.00000 0.00000 1
The tests above already look like a server workload: 128 threads,
and once again the Kingston loses badly on random reads and writes.
No size specified, using 2000 MB
Run #3: /usr/bin/tiotest -t 128 -f 15 -r 31 -b 16384 -d /mnt/2 -T
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Sequential Reads
2.6.36-1.fc15.i686 2000 16384 128 71.91 1455.% 68.286 23691.20 0.39795 0.00000 5
Random Reads
2.6.36-1.fc15.i686 2000 16384 128 37.33 438.6% 89.256 4588.92 0.00000 0.00000 9
Sequential Writes
2.6.36-1.fc15.i686 2000 16384 128 17.01 546.8% 232.219 252841.60 1.84000 0.69417 3
Random Writes
2.6.36-1.fc15.i686 2000 16384 128 11.21 600.2% 0.732 2392.62 0.02520 0.00000 2
AT> No size specified, using 2000 MB
AT> Run #3: /usr/bin/tiotest -t 128 -f 15 -r 31 -b 16384 -d /mnt/2 -T
AT> Unit information
AT> ================
AT> File size = megabytes
AT> Blk Size = bytes
AT> Rate = megabytes per second
AT> CPU% = percentage of CPU used during the test
AT> Latency = milliseconds
AT> Lat% = percent of requests that took longer than X seconds
AT> CPU Eff = Rate divided by CPU% - throughput per cpu load
AT> Sequential Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 16384 128 244.19 1982.% 13.474 20661.02 0.11231 0.00000 12
AT> Random Reads
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 16384 128 128.91 777.1% 19.972 1327.90 0.00000 0.00000 17
AT> Sequential Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 16384 128 177.49 5378.% 22.371 26251.02 0.28971 0.00000 3
AT> Random Writes
AT> 2.6.36-0.34.rc6.git3.fc15.i6 2000 16384 128 42.89 1818.% 0.114 55.21 0.00000 0.00000 2
The bottom line: both are MLC SSDs, but with different controllers.
Yes, the SandForce probably has its problems, but neither in the
tests nor in real use (two weeks now) have I run into them.
The Kingston (and, probably, the Intel likewise) can in principle
go into servers, but the performance gain is nowhere near the
SandForce's.
On the other hand, it does not have the problem I had with Model
Number: TS128GSSD25S-M, where the machine stalls during writes.
All in all, for workstations the Kingston is a decent enough
choice: reads work perfectly acceptably, but writes,
unfortunately, fall short.
--
Best regards, Aleksander Trotsai aka MAGE-RIPE aka MAGE-UANIC
My PGP key at ftp://blackhole.adamant.ua/pgp/trotsai.key[.asc]
Is a "one-dimensional array" the prequel to "The Matrix"?