From: Chen Jun
Subject: [PATCH -next 3/5] mm/kmemleak: Add support for percpu memory leak detection
Date: Mon, 21 Sep 2020 02:00:05 +0000
Message-ID: <20200921020007.35803-4-chenjun102@huawei.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20200921020007.35803-1-chenjun102@huawei.com>
References: <20200921020007.35803-1-chenjun102@huawei.com>

From: Wei Yongjun

Currently, leaks of percpu memory chunks are not reported at all. This patch adds support for detecting them.

Since the __percpu pointer does not point directly at the actual chunks, this patch creates a kmemleak object for the __percpu pointer itself, but marks it as a no-scan block; kmemleak only checks whether this pointer is referenced by other blocks.

Two global variables, min_percpu_addr and max_percpu_addr, are introduced to store the range of valid percpu pointer values, in order to speed up pointer lookup when scanning blocks.

Signed-off-by: Wei Yongjun
Signed-off-by: Chen Jun
---
 mm/kmemleak.c | 71 ++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 59 insertions(+), 12 deletions(-)
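
Note for reviewers (not part of the change itself): the sketch below is a hypothetical test module, with made-up file and function names, showing the kind of leak this patch makes reportable. It drops the only copy of the __percpu cookie; the per-CPU chunks are still registered with min_count == 0 and only scanned, but the cookie object added here has min_count == 1, so a scan (echo scan > /sys/kernel/debug/kmemleak) should now flag it.

/* percpu_leak_demo.c - hypothetical test module, illustration only */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/percpu.h>

static int __init percpu_leak_demo_init(void)
{
	/* Allocate a per-CPU counter via the regular percpu allocator. */
	unsigned long __percpu *counter = alloc_percpu(unsigned long);

	if (!counter)
		return -ENOMEM;

	/*
	 * Drop the only copy of the __percpu cookie without calling
	 * free_percpu().  The per-CPU chunks registered by
	 * kmemleak_alloc_percpu() have min_count == 0 and are only
	 * scanned; the cookie object added by this patch has
	 * min_count == 1, so the next kmemleak scan should report it.
	 */
	counter = NULL;

	return 0;
}
module_init(percpu_leak_demo_init);

MODULE_LICENSE("GPL");
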
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index c09c6b59eda6..feedb72f06f2 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -170,6 +170,8 @@ struct kmemleak_object {
 #define OBJECT_NO_SCAN		(1 << 2)
 /* flag set to fully scan the object when scan_area allocation failed */
 #define OBJECT_FULL_SCAN	(1 << 3)
+/* flag set to percpu ptr object */
+#define OBJECT_PERCPU		(1 << 4)
 
 #define HEX_PREFIX		"    "
 /* number of bytes to print per line; must be 16 or 32 */
@@ -212,6 +214,9 @@ static int kmemleak_error;
 /* minimum and maximum address that may be valid pointers */
 static unsigned long min_addr = ULONG_MAX;
 static unsigned long max_addr;
+/* minimum and maximum address that may be valid percpu pointers */
+static unsigned long min_percpu_addr = ULONG_MAX;
+static unsigned long max_percpu_addr;
 
 static struct task_struct *scan_thread;
 /* used to avoid reporting of recently allocated objects */
@@ -283,6 +288,9 @@ static void hex_dump_object(struct seq_file *seq,
 	const u8 *ptr = (const u8 *)object->pointer;
 	size_t len;
 
+	if (object->flags & OBJECT_PERCPU)
+		ptr = this_cpu_ptr((void __percpu *)object->pointer);
+
 	/* limit the number of lines to HEX_MAX_LINES */
 	len = min_t(size_t, object->size, HEX_MAX_LINES * HEX_ROW_SIZE);
 
@@ -563,17 +571,32 @@ static int __save_stack_trace(unsigned long *trace)
 	return stack_trace_save(trace, MAX_TRACE, 2);
 }
 
+static void __update_address_range(struct kmemleak_object *object)
+{
+	unsigned long ptr = object->pointer;
+	size_t size = object->size;
+	unsigned long untagged_ptr;
+
+	if (object->flags & OBJECT_PERCPU) {
+		min_percpu_addr = min(min_percpu_addr, ptr);
+		max_percpu_addr = max(max_percpu_addr, ptr + size);
+	} else {
+		untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
+		min_addr = min(min_addr, untagged_ptr);
+		max_addr = max(max_addr, untagged_ptr + size);
+	}
+}
+
 /*
  * Create the metadata (struct kmemleak_object) corresponding to an allocated
  * memory block and add it to the object_list and object_tree_root.
  */
-static void create_object(unsigned long ptr, size_t size, int min_count,
-			  gfp_t gfp)
+static void __create_object(unsigned long ptr, size_t size, int min_count,
+			    unsigned int obj_flags, gfp_t gfp)
 {
 	unsigned long flags;
 	struct kmemleak_object *object, *parent;
 	struct rb_node **link, *rb_parent;
-	unsigned long untagged_ptr;
 
 	object = mem_pool_alloc(gfp);
 	if (!object) {
@@ -587,7 +610,7 @@ static void create_object(unsigned long ptr, size_t size, int min_count,
 	INIT_HLIST_HEAD(&object->area_list);
 	raw_spin_lock_init(&object->lock);
 	atomic_set(&object->use_count, 1);
-	object->flags = OBJECT_ALLOCATED;
+	object->flags = OBJECT_ALLOCATED | obj_flags;
 	object->pointer = ptr;
 	object->size = size;
 	object->excess_ref = 0;
@@ -619,9 +642,7 @@ static void create_object(unsigned long ptr, size_t size, int min_count,
 
 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 
-	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
-	min_addr = min(min_addr, untagged_ptr);
-	max_addr = max(max_addr, untagged_ptr + size);
+	__update_address_range(object);
 	link = &object_tree_root.rb_node;
 	rb_parent = NULL;
 	while (*link) {
@@ -651,6 +672,19 @@ static void create_object(unsigned long ptr, size_t size, int min_count,
 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 }
 
+static void create_object(unsigned long ptr, size_t size, int min_count,
+			  gfp_t gfp)
+{
+	__create_object(ptr, size, min_count, 0, gfp);
+}
+
+static void create_object_percpu(unsigned long ptr, size_t size, int min_count,
+				 gfp_t gfp)
+{
+	__create_object(ptr, size, min_count, OBJECT_PERCPU | OBJECT_NO_SCAN,
+			gfp);
+}
+
 /*
  * Mark the object as not allocated and schedule RCU freeing via put_object().
  */
@@ -912,10 +946,12 @@ void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
 	 * Percpu allocations are only scanned and not reported as leaks
 	 * (min_count is set to 0).
 	 */
-	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
+	if (kmemleak_enabled && ptr && !IS_ERR(ptr)) {
 		for_each_possible_cpu(cpu)
 			create_object((unsigned long)per_cpu_ptr(ptr, cpu),
 				      size, 0, gfp);
+		create_object_percpu((unsigned long)ptr, size, 1, gfp);
+	}
 }
 EXPORT_SYMBOL_GPL(kmemleak_alloc_percpu);
 
@@ -991,10 +1027,12 @@ void __ref kmemleak_free_percpu(const void __percpu *ptr)
 
 	pr_debug("%s(0x%p)\n", __func__, ptr);
 
-	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
+	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr)) {
 		for_each_possible_cpu(cpu)
 			delete_object_full((unsigned long)per_cpu_ptr(ptr,
 								      cpu));
+		delete_object_full((unsigned long)ptr);
+	}
 }
 EXPORT_SYMBOL_GPL(kmemleak_free_percpu);
 
@@ -1224,6 +1262,17 @@ static int scan_should_stop(void)
 	return 0;
 }
 
+static bool is_valid_address(unsigned long ptr)
+{
+	unsigned long untagged_ptr;
+
+	if (ptr >= min_percpu_addr && ptr < max_percpu_addr)
+		return true;
+
+	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
+	return (untagged_ptr >= min_addr && untagged_ptr < max_addr);
+}
+
 /*
  * Scan a memory block (exclusive range) for valid pointers and add those
  * found to the gray list.
@@ -1235,7 +1284,6 @@ static void scan_block(void *_start, void *_end,
 	unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER);
 	unsigned long *end = _end - (BYTES_PER_POINTER - 1);
 	unsigned long flags;
-	unsigned long untagged_ptr;
 
 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 	for (ptr = start; ptr < end; ptr++) {
@@ -1250,8 +1298,7 @@ static void scan_block(void *_start, void *_end,
 		pointer = *ptr;
 		kasan_enable_current();
 
-		untagged_ptr = (unsigned long)kasan_reset_tag((void *)pointer);
-		if (untagged_ptr < min_addr || untagged_ptr >= max_addr)
+		if (!is_valid_address(pointer))
 			continue;
 
 		/*
-- 
2.25.0
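
P.S. As background on why the cookie gets its own no-scan object rather than being scanned like the chunks: the value returned by alloc_percpu() is an opaque __percpu cookie, not a directly usable address; the real per-CPU copies live at per_cpu_ptr(cookie, cpu), which is what the existing loop in kmemleak_alloc_percpu() registers and scans. A minimal sketch (the function name is made up, illustration only):

#include <linux/percpu.h>
#include <linux/printk.h>

/* Illustration only: where the real per-CPU copies behind a cookie live. */
static void percpu_layout_demo(int __percpu *cookie)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		/*
		 * per_cpu_ptr() translates the cookie into a real address
		 * for each CPU.  Those addresses fall into the normal
		 * [min_addr, max_addr) range and get scanned, while the
		 * cookie value itself is matched against the new
		 * [min_percpu_addr, max_percpu_addr) range in
		 * is_valid_address().
		 */
		int *copy = per_cpu_ptr(cookie, cpu);

		pr_info("cpu%d copy at %px (cookie %px)\n", cpu, copy, cookie);
	}
}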