[RHEL7,COMMIT] Revert "perf tools: Stop fallbacking to kallsyms for vdso symbols lookup"

Submitted by Konstantin Khorenko on Dec. 9, 2019, 10:26 a.m.

Details

Message ID 201912091026.xB9AQasL029798@finist-ce7.sw.ru
State New
Series "perf: make it working again"

Commit Message

The commit is pushed to "branch-rh7-3.10.0-1062.7.1.vz7.130.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-1062.7.1.vz7.130.1
------>
commit 2036745f7606b5a6faf1c80b719892f8eea1ebf8
Author: Konstantin Khorenko <khorenko@virtuozzo.com>
Date:   Mon Dec 9 13:09:15 2019 +0300

    Revert "perf tools: Stop fallbacking to kallsyms for vdso symbols lookup"
    
    This reverts commit edeb0c90df3581b821a764052d185df985f8b8dc.
    
    RHEL7.7 perf does not resolve symbols properly,
    so let's just roll back the patch that broke this.

    This is a temporary solution:
    - Red Hat is aware of the issue and will hopefully fix it in RHEL7.8
    - once we inherit the proper fix from Red Hat, we will drop this revert
    
    https://jira.sw.ru/browse/HCI-128
    
    Signed-off-by: Konstantin Khorenko <khorenko@virtuozzo.com>
---
 tools/perf/util/event.c | 21 +++++++++++++++++++--
 1 file changed, 19 insertions(+), 2 deletions(-)
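
Not part of the patch itself, but a possible way to sanity-check that the
revert restores symbol resolution: build a small workload that spends most of
its user-mode time in the vdso and profile it with perf record / perf report.
The file name vdso_spin.c, the iteration count and the exact perf invocation
below are only illustrative suggestions, not something taken from the patch.

/*
 * vdso_spin.c - hypothetical helper, not part of this patch.
 *
 * gettimeofday() is served from the vdso on x86_64, so most user-mode
 * samples of this loop land in the [vdso] mapping, which is the case
 * the restored kallsyms fallback is meant to cover.
 *
 * Suggested build and check:
 *   gcc -O2 -o vdso_spin vdso_spin.c
 *   perf record -e cycles:u ./vdso_spin
 *   perf report --stdio
 */
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
	struct timeval tv;
	volatile long sink = 0;
	unsigned long i;

	for (i = 0; i < 100000000UL; i++) {
		gettimeofday(&tv, NULL);
		sink += tv.tv_usec;	/* keep the call from being optimized out */
	}

	printf("done: %ld\n", sink);
	return 0;
}

With the revert applied, the vdso samples in the perf report output should get
symbol names again instead of showing up as raw addresses, which matches the
symbol-resolution breakage this revert works around.
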


diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index a7bab025f563..55c93bf1b498 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -1417,9 +1417,26 @@  struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 
 		return NULL;
 	}
-
+try_again:
 	al->map = map_groups__find(mg, al->addr);
-	if (al->map != NULL) {
+	if (al->map == NULL) {
+		/*
+		 * If this is outside of all known maps, and is a negative
+		 * address, try to look it up in the kernel dso, as it might be
+		 * a vsyscall or vdso (which executes in user-mode).
+		 *
+		 * XXX This is nasty, we should have a symbol list in the
+		 * "[vdso]" dso, but for now lets use the old trick of looking
+		 * in the whole kernel symbol list.
+		 */
+		if (cpumode == PERF_RECORD_MISC_USER && machine &&
+		    mg != &machine->kmaps &&
+		    machine__kernel_ip(machine, al->addr)) {
+			mg = &machine->kmaps;
+			load_map = true;
+			goto try_again;
+		}
+	} else {
 		/*
 		 * Kernel maps might be changed when loading symbols so loading
 		 * must be done prior to using kernel maps.