The Linux Kernel's SCSI Subsystem
Introducing the Linux Kernel's SCSI Subsystem
The weakest part of the Linux kernel's SCSI subsystem, most people seem to agree, is the error handling functionality. And yet I was surprised to discover that most SCSI device drivers don't actually make use of the error handling framework, at least not in the formal sense used here. So while the error handler needs a partial or total overhaul, this issue is not necessarily the top one on the SCSI team's "To Do" list.
Actually, I lied: the error handler's Application Programming Interface (API) will get an overhaul, because right now it is far too unwieldy.
SCSI device initialization, however, is a problem. Not so much on the smaller scale, but on the enterprise scale. When a SCSI device is initialized, a fixed amount of Direct Memory Access (DMA) memory is reserved. DMA allows devices to send data directly to the RAM on your motherboard, and only part of your RAM is set aside by the kernel for DMA use, so this practice can completely overrun your available DMA memory when it comes up against a Storage Area Network (SAN) with thousands of devices.
Another area that stands to be cleaned up is called "tag command queuing." This practice allows multiple SCSI commands to be placed in a specific device's queue, and is only supported by particular SCSI devices. One problem identified in the Linux kernel's SCSI subsystem is that this feature is implemented who knows how many different ways, with each driver handling it how it sees fit. When it comes to computers, one strives to remove such inefficiencies, especially ones where duplicated code, at the very least, is making kernels and their modules larger. The memory required to load that extra code could certainly be used in smarter ways.
There is a tag command queuing issue, however, that combines with error handling issues and has been plaguing the SCSI team. It is possible for the low-level SCSI drivers to be hit with too many commands in rapid succession. When the queue is completely full, a SCSI driver might simply ignore this command or that as it tries to work through its queue, focusing on particular "hot" sections of the drive where many things are taking place. This scenario is called "tag starvation," because a command can end up timing out if it gets ignored for enough seconds, and write speed slowdowns can result.
Of course, all of these queuing issues bring us to the question of how deep a queue actually is. Queue depth is a topic of hot debate, and some feel that there should be a dynamic method of assigning how many commands can wait in a queue, but there is a question regarding whether there is any speed benefit to this. According to some tests, the most performance gains occur when raising the queue depth from one to four slots. After that, benefits level off, getting smaller and smaller with each added entry.