[RHEL7,COMMIT] blk-wbt: increase maximum queue depth to increase performance of writes

Submitted by Konstantin Khorenko on Oct. 25, 2019, 10:34 a.m.


Message ID 201910251033.x9PAXx1x005509@finist-ce7.sw.ru
State New
Series "block: backport writeback throttling"

Commit Message

Konstantin Khorenko Oct. 25, 2019, 10:34 a.m.
The commit is pushed to "branch-rh7-3.10.0-1062.1.2.vz7.114.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-1062.1.2.vz7.114.9
commit 3bcbd409508c6ea0f8a28834eefe4ec0ff04c80e
Author: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
Date:   Fri Oct 25 13:33:59 2019 +0300

    blk-wbt: increase maximum queue depth to increase performance of writes
    With wbt patches on simple test:
      rm -rf /my/filetree
      echo 3 > /proc/sys/vm/drop_caches
      time tar -xzf /my/filetree.zip -C /my
      time sync
    we saw a performance degradation of ~20-50% on the final sync.
    That looks connected with the fact that SATA devices always have a
    small queue depth (request_queue->queue_depth == 31), so wbt limits
    the maximum number of inflight write requests to 8/16/23 (low to high
    priority); all extra writes just sleep waiting for previous writes to
    complete.
    But if more write requests are given to the device driver, the
    elevator algorithm should deliver better performance than when
    requests are fed in small portions, and before wbt we issued as many
    writes as we could.
    The whole point of wbt is to scale these three limits: make them
    bigger if we have only writes, and lower them if some reads want to be
    processed.
    So we increase the maximum from 3/4 of the device queue size to 8x the
    device queue size. This should improve write performance when there
    are no concurrent reads.
    This increase of the "wb_..." limits reduces the read-latency benefit
    somewhat, as wbt now has to scale_down several times when it detects
    read requests.
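
    The cap arithmetic above can be sketched in plain userspace C. This is
    a simplified model with hypothetical helper names (old_maxd/new_maxd),
    not the kernel code itself; it only illustrates how the two formulas
    differ for a typical SATA queue depth of 31:

    ```c
    #include <stdio.h>

    /* Cap on the scaled-up write depth before the patch: 3/4 of the
     * device queue depth. */
    static unsigned int old_maxd(unsigned int queue_depth)
    {
        return 3 * queue_depth / 4;
    }

    /* Cap after the patch: 8x the device queue depth. */
    static unsigned int new_maxd(unsigned int queue_depth)
    {
        return 8 * queue_depth;
    }

    int main(void)
    {
        unsigned int qd = 31; /* typical SATA NCQ queue depth */

        /* Before: wbt could never allow more than 23 inflight writes. */
        printf("old cap: %u\n", old_maxd(qd)); /* 23 */

        /* After: up to 248 writes may be inflight when no reads run. */
        printf("new cap: %u\n", new_maxd(qd)); /* 248 */
        return 0;
    }
    ```

    With queue_depth == 31 the old cap (23) matches the highest of the
    8/16/23 limits mentioned above, which is why scaling up hit the ceiling
    immediately.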
    Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
    Patchset description:
    block: backport writeback throttling
    We have a problem: if we run a heavy write load on one cpu
    simultaneously with short direct reads on another cpu, the latter will
    stall significantly. Writeback throttling looks like a solution for
    such reads, as it decreases the priority of long-running writeback.
    Running a simple dd experiment, we see that read latency decreased
    after the wbt patches were applied.
    We've run vconsolidate on a custom kernel with these patches; though
    it does not show any performance improvement (likely because this test
    does not produce a high rate of writeback), it does not crash or fail
    the test.
    Jens Axboe (6):
      block: add REQ_BACKGROUND
      writeback: add wbc_to_write_flags()
      writeback: mark background writeback as such
      writeback: track if we're sleeping on progress in
      blk-wbt: add general throttling mechanism
      block: hook up writeback throttling
    Omar Sandoval (1):
      block: get rid of struct blk_issue_stat
    Pavel Tikhomirov (3):
      x86/asm: remove the unused get_limit() method
      block: enable CONFIG_BLK_WBT*
      blk-wbt: increase maximum queue depth to increase performance of writes
 block/blk-wbt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 7278466360b2..554f4ee970c7 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -227,7 +227,7 @@  static bool calc_wb_limits(struct rq_wb *rwb)
 		if (rwb->scale_step > 0)
 			depth = 1 + ((depth - 1) >> min(31, rwb->scale_step));
 		else if (rwb->scale_step < 0) {
-			unsigned int maxd = 3 * rwb->queue_depth / 4;
+			unsigned int maxd = 8 * rwb->queue_depth;
 			depth = 1 + ((depth - 1) << -rwb->scale_step);
 			if (depth > maxd) {