[PATCHv7,18/33] lib/vdso: Add unlikely() hint into vdso_read_begin()

Submitted by Andrei Vagin on Oct. 24, 2019, 6:13 a.m.

Message ID 20191024061311.GA4541@gmail.com
State New
Series "Series without cover letter"

Commit Message

Andrei Vagin Oct. 24, 2019, 6:13 a.m.
On Wed, Oct 16, 2019 at 12:24:14PM +0100, Vincenzo Frascino wrote:
> On 10/11/19 2:23 AM, Dmitry Safonov wrote:
> > From: Andrei Vagin <avagin@gmail.com>
> > 
> > Place the branch with no concurrent write before contended case.
> > 
> > Performance numbers for Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz
> > (more clock_gettime() cycles - the better):
> >         | before    | after
> > -----------------------------------
> >         | 150252214 | 153242367
> >         | 150301112 | 153324800
> >         | 150392773 | 153125401
> >         | 150373957 | 153399355
> >         | 150303157 | 153489417
> >         | 150365237 | 153494270
> > -----------------------------------
> > avg     | 150331408 | 153345935
> > diff %  | 2	    | 0
> > -----------------------------------
> > stdev % | 0.3	    | 0.1
> > 
> > Signed-off-by: Andrei Vagin <avagin@gmail.com>
> > Co-developed-by: Dmitry Safonov <dima@arista.com>
> > Signed-off-by: Dmitry Safonov <dima@arista.com>
> 
> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>

Hello Vincenzo,

Could you test the attached patch on aarch64? On x86, it gives about a 9%
performance improvement for CLOCK_MONOTONIC and CLOCK_BOOTTIME.

Here is my test:
https://github.com/avagin/vdso-perf

It calls clock_gettime() in a loop for three seconds and then
reports the number of iterations.

Thanks,
Andrei
From 5252093fec4c74802e5ef501b9f1db3369430c80 Mon Sep 17 00:00:00 2001
From: Andrei Vagin <avagin@gmail.com>
Date: Tue, 22 Oct 2019 18:23:17 -0700
Subject: [PATCH] lib/vdso: make do_hres and do_coarse as __always_inline

Performance numbers for Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz
(more clock_gettime() cycles - the better):

clock            | before     | after      | diff
----------------------------------------------------------
monotonic        |  153222105 |  166775025 | 8.8%
monotonic-coarse |  671557054 |  691513017 | 3.0%
monotonic-raw    |  147116067 |  161057395 | 9.5%
boottime         |  153446224 |  166962668 | 9.1%

Signed-off-by: Andrei Vagin <avagin@gmail.com>
---
 lib/vdso/gettimeofday.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)


diff --git a/lib/vdso/gettimeofday.c b/lib/vdso/gettimeofday.c
index e630e7ff57f1..b4f7f0f246af 100644
--- a/lib/vdso/gettimeofday.c
+++ b/lib/vdso/gettimeofday.c
@@ -38,7 +38,7 @@  u64 vdso_calc_delta(u64 cycles, u64 last, u64 mask, u32 mult)
 }
 #endif
 
-static int do_hres(const struct vdso_data *vd, clockid_t clk,
+static __always_inline int do_hres(const struct vdso_data *vd, clockid_t clk,
 		   struct __kernel_timespec *ts)
 {
 	const struct vdso_timestamp *vdso_ts = &vd->basetime[clk];
@@ -68,7 +68,7 @@  static int do_hres(const struct vdso_data *vd, clockid_t clk,
 	return 0;
 }
 
-static void do_coarse(const struct vdso_data *vd, clockid_t clk,
+static __always_inline void do_coarse(const struct vdso_data *vd, clockid_t clk,
 		      struct __kernel_timespec *ts)
 {
 	const struct vdso_timestamp *vdso_ts = &vd->basetime[clk];
@@ -97,12 +97,16 @@  __cvdso_clock_gettime_common(clockid_t clock, struct __kernel_timespec *ts)
 	 */
 	msk = 1U << clock;
 	if (likely(msk & VDSO_HRES)) {
-		return do_hres(&vd[CS_HRES_COARSE], clock, ts);
+		vd = &vd[CS_HRES_COARSE];
+out_hres:
+		return do_hres(vd, clock, ts);
 	} else if (msk & VDSO_COARSE) {
 		do_coarse(&vd[CS_HRES_COARSE], clock, ts);
 		return 0;
 	} else if (msk & VDSO_RAW) {
-		return do_hres(&vd[CS_RAW], clock, ts);
+		vd = &vd[CS_RAW];
+		/* goto allows to avoid extra inlining of do_hres. */
+		goto out_hres;
 	}
 	return -1;
 }

Comments

Vincenzo Frascino Oct. 24, 2019, 9:30 a.m.
Hi Andrei,

On 10/24/19 7:13 AM, Andrei Vagin wrote:
> On Wed, Oct 16, 2019 at 12:24:14PM +0100, Vincenzo Frascino wrote:
>> On 10/11/19 2:23 AM, Dmitry Safonov wrote:
>>> From: Andrei Vagin <avagin@gmail.com>
>>>
>>> Place the branch with no concurrent write before contended case.
>>>
>>> Performance numbers for Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz
>>> (more clock_gettime() cycles - the better):
>>>         | before    | after
>>> -----------------------------------
>>>         | 150252214 | 153242367
>>>         | 150301112 | 153324800
>>>         | 150392773 | 153125401
>>>         | 150373957 | 153399355
>>>         | 150303157 | 153489417
>>>         | 150365237 | 153494270
>>> -----------------------------------
>>> avg     | 150331408 | 153345935
>>> diff %  | 2	    | 0
>>> -----------------------------------
>>> stdev % | 0.3	    | 0.1
>>>
>>> Signed-off-by: Andrei Vagin <avagin@gmail.com>
>>> Co-developed-by: Dmitry Safonov <dima@arista.com>
>>> Signed-off-by: Dmitry Safonov <dima@arista.com>
>>
>> Reviewed-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
>> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
> 
> Hello Vincenzo,
> 
> Could you test the attached patch on aarch64? On x86, it gives about 9%
> performance improvement for CLOCK_MONOTONIC and CLOCK_BOOTTIME.
> 

I ran similar tests in the past with a previous version of the unified vDSO
library, and based on those results the impact of "__always_inline" alone was
around 7% on arm64. In fact, my implementation [1] had a comment stating:
"To improve performances, in this file, __always_inline it is used for the
functions called multiple times."

[1] https://bit.ly/2W9zMxB

I spent some time yesterday trying to dig out why that approach did not make
the cut, but I could not infer it from the review process.

> Here is my test:
> https://github.com/avagin/vdso-perf
> 
> It is calling clock_gettime() in a loop for three seconds and then
> reports a number of iterations.
> 

I am happy to run the test on arm64 and provide some results.

> Thanks,
> Andrei
>
Vincenzo Frascino Oct. 24, 2019, 1:14 p.m.
Hi Andrei,

On 10/24/19 10:30 AM, Vincenzo Frascino wrote:
[...]
>> Here is my test:
>> https://github.com/avagin/vdso-perf
>>
>> It is calling clock_gettime() in a loop for three seconds and then
>> reports a number of iterations.
>>
> 
> I am happy to run the test on arm64 and provide some results.
>

Please find below the results:

clock            | before    | after     | diff
----------------------------------------------------------
monotonic        |  17326692 |  17951770 | 3.6%
monotonic-coarse |  43624027 |  44215292 | 1.3%
monotonic-raw    |  17541809 |  17554932 | 0.1%
boottime         |  17334982 |  17954361 | 3.5%

On arm64, the improvement for monotonic and boottime is around 3.5%.

I created a pull request on github to send back to you the changes I had
to make to vdso-perf so that it cross-compiles on arm64 [1].

[1] https://github.com/avagin/vdso-perf/pull/1


>> Thanks,
>> Andrei
>>
>