[RHEL7,COMMIT] fuse kio: Move position advance in fiemap_process_one()

Submitted by Konstantin Khorenko on Sept. 5, 2018, 10:06 a.m.


Message ID 201809051006.w85A6aac010849@finist_ce7.work
State New
Series "fuse kio: Move position advance in fiemap_process_one()"

Commit Message

The commit is pushed to "branch-rh7-3.10.0-862.11.6.vz7.71.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-862.11.6.vz7.71.8
------>
commit f5ac211b317c450e6194ae994a07389113f47459
Author: Kirill Tkhai <ktkhai@virtuozzo.com>
Date:   Wed Sep 5 13:06:36 2018 +0300

    fuse kio: Move position advance in fiemap_process_one()
    
    This is a refactoring that moves the position advance
    to the start of the loop. We need to do that on every
    iteration except the first, so we encode the first
    iteration as "fiter->apireq.size = 0" and skip the
    advance when the value is zero.
    
    Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
    Acked-by: Alexey Kuznetsov <kuznet@virtuozzo.com>
    
    =====================
    Patchset description:
    
    Optimize fiemap ioctl
    
    https://jira.sw.ru/browse/HCI-90
    
    Summary:
      This patch set optimizes the fiemap ioctl by removing
      kthread creation. A static work item is used instead,
      so we save the time spent in copy_process().
    
    Fiemap does not require a separate kthread, since
    most of the time the kthread just waits for
    fiter->ireq.iocount to become 0. Instead, a generic
    kthread can queue further fiemap requests during that
    time. This is what the patch set introduces.
    
    Note that we used to have a kthread for every fiemap
    request, which may look more scalable. But this is not
    true, since the work fiemap does is pretty small. So
    I think a single fiemap work item is enough for most
    workloads. If we meet a workload where a single work
    item is not enough, it will be pretty easy to turn
    fiemap_work into an array in struct pcs_cluster_core
    (to make it per-node or even per-cpu). But I think
    that is not necessary at least until main_job or
    completion_job become per-node or per-cpu (fiemap
    requests are a small subset of all requests going
    through main_job).
    
    The code at
    https://github.com/shekkbuilder/fiemap/blob/master/fiemap.c
    was taken as a base for the performance test and modified.
    The results are below.
    
    Time of test execution on a 3-extent file (the number
    of extents was chosen at random):
    
    Before: real 0m11.069s
    After:  real 0m9.112s
    
    The new kernel is 17% faster; equivalently, the old one
    was 21% slower.
    
    Kirill Tkhai (7):
          fuse kio: Extract fiemap iteration from fiemap_worker() to separate function
          fuse kio: Move it variable from stack to struct fiemap_iterator
          fuse kio: Kill fiemap_worker() thread
          fuse kio: Move position advance in fiemap_process_one()
          fuse kio: Move fiter ireq iocount assignment
          fuse kio: Introduce fiemap_work
          fuse kio: Async queueing of fiemap from work
---
 fs/fuse/kio/pcs/pcs_cluster.c | 40 ++++++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 14 deletions(-)


diff --git a/fs/fuse/kio/pcs/pcs_cluster.c b/fs/fuse/kio/pcs/pcs_cluster.c
index cd97db3640a8..31cc93fd201d 100644
--- a/fs/fuse/kio/pcs/pcs_cluster.c
+++ b/fs/fuse/kio/pcs/pcs_cluster.c
@@ -169,13 +169,31 @@  static void fiemap_process_one(struct fiemap_iterator *fiter)
 {
 	struct pcs_int_request *orig_ireq = fiter->orig_ireq;
 	struct pcs_dentry_info *di = orig_ireq->dentry;
+	struct pcs_int_request *sreq;
 	u64 pos, end;
 
-	pos = fiter->orig_ireq->apireq.req->pos;
-	end = pos + fiter->orig_ireq->apireq.req->size;
-	while (pos < end) {
-		struct pcs_int_request * sreq;
+	pos = fiter->apireq.pos;
+	end = orig_ireq->apireq.req->pos + orig_ireq->apireq.req->size;
 
+	while (1) {
+		/*
+		 * We reuse zeroed fiter->apireq.size to detect first
+		 * iteration of the fiter. In this case we do not have
+		 * completed extents and just skip this business.
+		 */
+		if (fiter->apireq.size != 0) {
+			if (pcs_if_error(&fiter->ireq.error)) {
+				fiter->orig_ireq->error = fiter->ireq.error;
+				goto out;
+			}
+			if (fiter->ireq.apireq.aux)
+				xfer_fiemap_extents(fiter, pos, fiter->buffer,
+						    fiter->ireq.apireq.aux);
+			pos += fiter->apireq.size;
+		}
+
+		if (pos >= end)
+			goto out;
 		if (fiter->fiemap_max && *fiter->mapped >= fiter->fiemap_max)
 			break;
 
@@ -215,16 +233,6 @@  static void fiemap_process_one(struct fiemap_iterator *fiter)
 		pcs_cc_process_ireq_chunk(sreq);
 
 		wait_event(fiter->wq, atomic_read(&fiter->ireq.iocount) == 0);
-
-		if (pcs_if_error(&fiter->ireq.error)) {
-			fiter->orig_ireq->error = fiter->ireq.error;
-			goto out;
-		}
-
-		if (fiter->ireq.apireq.aux)
-			xfer_fiemap_extents(fiter, pos, fiter->buffer, fiter->ireq.apireq.aux);
-
-		pos += fiter->apireq.size;
 	}
 out:
 	kvfree(fiter->buffer);
@@ -268,6 +276,10 @@  static void process_ireq_fiemap(struct pcs_int_request *orig_ireq)
 	orig_ireq->apireq.req->get_iter(orig_ireq->apireq.req->datasource, 0, it);
 	fiter->mapped = &((struct fiemap*)it->data)->fm_mapped_extents;
 
+	/* fiemap_process_one() uses this 0 to detect first iteration */
+	fiter->apireq.size = 0;
+	fiter->apireq.pos = orig_ireq->apireq.req->pos;
+
 	fiemap_process_one(fiter);
 }