
Old 2019-10-30, 18:33   #1
Mysticial
Sep 2016

Parallel Disk I/O

Parallel disk I/O has been a recurring topic and has just come up again, so I figured I'd create a thread to keep everything together.

First of all, there are multiple forms of parallel disk I/O:
  1. Parallel access to different files and physical drives.
  2. Parallel access to different parts of the same file.
  3. Parallelizing a large sequential access to a single file.

#1 has been supported since antiquity. The approach is fairly straightforward. Given a large object, stripe it across multiple drives as separate files and file handles. Then for each logical access (offset + bytes), determine which portion lies on which file and perform the accesses to each file in parallel on separate threads.
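As a rough sketch (hypothetical code, not y-cruncher's actual implementation), the offset math for this RAID0-style striping looks something like:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// One piece of a logical access, landing on a single stripe file.
struct SubAccess {
    int      file;    // index of the stripe file (drive)
    uint64_t offset;  // offset within that file
    uint64_t bytes;   // length of this piece
};

// Split a logical (offset, bytes) access across `files` stripe files
// with a fixed `stripe` size. Each returned piece can then be issued
// to its file on a separate thread.
std::vector<SubAccess> split_access(uint64_t offset, uint64_t bytes,
                                    int files, uint64_t stripe) {
    std::vector<SubAccess> out;
    while (bytes > 0) {
        uint64_t block  = offset / stripe;   // global stripe index
        uint64_t within = offset % stripe;   // offset inside that stripe
        uint64_t len    = std::min(stripe - within, bytes);
        out.push_back({
            int(block % files),                    // which file/drive
            (block / files) * stripe + within,     // offset in that file
            len
        });
        offset += len;
        bytes  -= len;
    }
    return out;
}
```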

Performance scales almost linearly provided that all the drives have the same performance and there are no hardware bottlenecks.

#2 is a topic that came up earlier this year during Google's 31 trillion digit Pi world record. Right now, y-cruncher's only disk I/O parallelism is built into the RAID striping. But computationally, all disk access is still serial.

The network topology used by the Google computation had separate channels for ingress and egress, which means there is a benefit to performing reads and writes in parallel. To exploit this (in a more general sense), the computation itself needs to be able to issue parallel accesses to different parts of the swap file.

Those who have played with v0.7.8's new swap mode menu may have noticed the new "parallel disk access" option which is currently locked to "no parallelism". This is precisely what the option is for. Parallel disk access has been added to the internal API.

But neither the computation nor the far memory implementations support it yet. The computation still issues disk access serially and the implementations have a global lock to force any concurrent requests to run serially.

#3 is an even newer topic that has come up in the context of NVMe RAID.

As mentioned above, y-cruncher's only disk I/O parallelism is built into its RAID striping. Thus if you don't use the built-in RAID, you get only one thread to perform disk access. Historically this hasn't been a problem since it was virtually impossible for any amount of disk access to saturate a single worker thread. But this isn't the case anymore with modern high-end systems - namely RAID of multiple NVMe SSDs.

If you RAID0 a bunch of NVMe SSDs and expose it as a single path to y-cruncher, it will only get one worker thread with its default 64 MB buffer. In short, a single thread with such a small buffer cannot keep up with 10+ GB/s of data. Increasing the buffer size may help, but it doesn't solve the problem.
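To put a number on that (using the illustrative figures above, not measurements), a 64 MB buffer at 10 GB/s is only a few milliseconds of data:

```cpp
// Turnaround budget in milliseconds for a single I/O buffer: how often
// one worker thread must fill/drain its buffer (including any memcpy
// in and out of it) to sustain the array's throughput.
double turnaround_ms(double buffer_mb, double throughput_gbs) {
    return buffer_mb / (throughput_gbs * 1024.0) * 1000.0;
}
```

With a 64 MB buffer and 10 GB/s, that's about 6 ms per buffer for one thread — not much room for the thread to do anything else.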

Right now, the work-around is to not do RAID yourself. Instead, expose the drives individually to y-cruncher so it can micromanage them and parallelize the work across multiple threads.


Thus we now raise the question of parallelizing disk access within the same file (and same file handle), since this is required to support both #2 and #3.

The low level disk access APIs are:
  • Windows: ReadFile()/WriteFile()
  • Linux: read()/write()

These are all synchronous, single-threaded API calls. On Windows, parallel access to the same file can be achieved by opening it with the FILE_FLAG_OVERLAPPED flag and passing each ReadFile()/WriteFile() call an OVERLAPPED structure carrying its own file offset. Thus #2 is supported. By extension, #3 can be supported by breaking a large disk access into smaller segments split across different threads.

What about Linux? I haven't really looked into this yet.
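For reference, POSIX does have pread()/pwrite(), which take an explicit offset and don't touch the descriptor's shared file position, so concurrent access to disjoint regions of one descriptor is well-defined. A minimal sketch of splitting one large read across threads (#3) — illustrative only, not what y-cruncher does:

```cpp
#include <algorithm>
#include <cstdint>
#include <fcntl.h>
#include <thread>
#include <unistd.h>
#include <vector>

// Read `bytes` starting at `offset` from `fd`, split into `nthreads`
// segments issued concurrently. pread() takes an explicit offset and
// does not move the shared file position, so the threads don't race on
// the descriptor itself. Returns true iff every byte was read.
bool parallel_read(int fd, char* dst, uint64_t offset, uint64_t bytes,
                   int nthreads) {
    std::vector<std::thread> pool;
    std::vector<char> ok(nthreads, 0);   // per-thread success flags
    uint64_t chunk = (bytes + nthreads - 1) / nthreads;
    for (int t = 0; t < nthreads; t++) {
        uint64_t begin = uint64_t(t) * chunk;
        if (begin >= bytes) { ok[t] = 1; continue; }
        uint64_t len = std::min(chunk, bytes - begin);
        pool.emplace_back([=, &ok] {
            uint64_t done = 0;
            while (done < len) {   // pread may return short counts
                ssize_t n = ::pread(fd, dst + begin + done,
                                    len - done, offset + begin + done);
                if (n <= 0) return;
                done += uint64_t(n);
            }
            ok[t] = 1;
        });
    }
    for (auto& th : pool) th.join();
    for (int t = 0; t < nthreads; t++) if (!ok[t]) return false;
    return true;
}
```

Whether this actually runs faster than one thread on a given device/filesystem is exactly the open question above.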

In any case, there are some loose ends which need to be examined.
  • Is there any real performance gain to parallelizing within the same file or even different files on the same physical device? Or will the OS itself serialize everything?
  • How safe is parallel access to the same file? In the case of Windows with the "overlapped" property, parallel access is supposedly safe as long as the regions don't overlap. But what if the regions are adjacent and land on the same sector?
  • The sector-alignment issue above is moot when raw I/O is enabled, since raw I/O forces every access to be sector-aligned in the first place. But that just defers the problem to y-cruncher's own sector-alignment code. So do I need to keep a synchronized sparse map of sector locks? Ugh... (granted, I'm already doing nastier things for the checksums)

This is all thinking out loud for now. In reality, I'm not going to have any time to implement anything for a while.

Last fiddled with by Mysticial on 2019-10-30 at 19:40
