From: Jianxing Wang <wangjianxing@loongson.cn>
To: will@kernel.org, aneesh.kumar@linux.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com, peterz@infradead.org
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jianxing Wang <wangjianxing@loongson.cn>
Subject: [PATCH 1/1] mm/mmu_gather: limit tlb batch count and add schedule point in tlb_batch_pages_flush
Date: Tue, 15 Mar 2022 08:55:36 -0400
Message-Id: <20220315125536.1036303-1-wangjianxing@loongson.cn>

Freeing a large list of pages can starve rcu_sched on non-preemptible
kernels. However, free_unref_page_list() cannot simply call
cond_resched() itself, since it may be called from interrupt or atomic
context; in particular, with CONFIG_PREEMPTION=n an atomic context
cannot be detected.

The TLB flush batch count depends on PAGE_SIZE and becomes too large
when PAGE_SIZE > 4K, so cap the maximum batch size at 4K. Also add a
scheduling point in tlb_batch_pages_flush().

rcu: rcu_sched kthread starved for 5359 jiffies! g454793 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=19
[...]
Call Trace:
  free_unref_page_list+0x19c/0x270
  release_pages+0x3cc/0x498
  tlb_flush_mmu_free+0x44/0x70
  zap_pte_range+0x450/0x738
  unmap_page_range+0x108/0x240
  unmap_vmas+0x74/0xf0
  unmap_region+0xb0/0x120
  do_munmap+0x264/0x438
  vm_munmap+0x58/0xa0
  sys_munmap+0x10/0x20
  syscall_common+0x24/0x38

Signed-off-by: Jianxing Wang <wangjianxing@loongson.cn>
---
 include/asm-generic/tlb.h | 7 ++++++-
 mm/mmu_gather.c           | 7 +++++--
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2c68a545ffa7..47c7f93ca695 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -230,8 +230,13 @@ struct mmu_gather_batch {
 	struct page		*pages[0];
 };
 
+#if PAGE_SIZE > 4096UL
+#define MAX_GATHER_BATCH_SZ 4096
+#else
+#define MAX_GATHER_BATCH_SZ PAGE_SIZE
+#endif
 #define MAX_GATHER_BATCH	\
-	((PAGE_SIZE - sizeof(struct mmu_gather_batch)) / sizeof(void *))
+	((MAX_GATHER_BATCH_SZ - sizeof(struct mmu_gather_batch)) / sizeof(void *))
 
 /*
  * Limit the maximum number of mmu_gather batches to reduce a risk of soft
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index afb7185ffdc4..f2c105810b3f 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -8,6 +8,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -27,7 +28,7 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 	if (tlb->batch_count == MAX_GATHER_BATCH_COUNT)
 		return false;
 
-	batch = (void *)__get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
+	batch = kmalloc(MAX_GATHER_BATCH_SZ, GFP_NOWAIT | __GFP_NOWARN);
 	if (!batch)
 		return false;
 
@@ -49,6 +50,8 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
 		free_pages_and_swap_cache(batch->pages, batch->nr);
 		batch->nr = 0;
+
+		cond_resched();
 	}
 	tlb->active = &tlb->local;
 }
@@ -59,7 +62,7 @@ static void tlb_batch_list_free(struct mmu_gather *tlb)
 
 	for (batch = tlb->local.next; batch; batch = next) {
 		next = batch->next;
-		free_pages((unsigned long)batch, 0);
+		kfree(batch);
 	}
 	tlb->local.next = NULL;
 }
-- 
2.31.1
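
For context on the sizing change, below is a stand-alone user-space sketch (not part of the patch) of the MAX_GATHER_BATCH arithmetic. The struct layout only mimics struct mmu_gather_batch, and the 16K base page size is an assumption chosen to illustrate a PAGE_SIZE > 4K configuration like the one in the report:

	/*
	 * Sketch only: estimate how many page pointers fit in one gather
	 * batch before and after capping the allocation at 4K.
	 * fake_mmu_gather_batch is a hypothetical stand-in whose layout
	 * merely approximates struct mmu_gather_batch.
	 */
	#include <stdio.h>

	struct fake_mmu_gather_batch {
		struct fake_mmu_gather_batch *next;
		unsigned int nr;
		unsigned int max;
		/* struct page *pages[] follows in the real structure */
	};

	static unsigned long batch_capacity(unsigned long alloc_size)
	{
		/* Mirrors the MAX_GATHER_BATCH formula from the patch. */
		return (alloc_size - sizeof(struct fake_mmu_gather_batch)) /
			sizeof(void *);
	}

	int main(void)
	{
		unsigned long page_size = 16384;	/* assumed 16K base pages */
		unsigned long capped = page_size > 4096 ? 4096 : page_size;

		printf("uncapped batch: %lu page pointers (~%lu MB per batch)\n",
		       batch_capacity(page_size),
		       batch_capacity(page_size) * page_size >> 20);
		printf("capped batch:   %lu page pointers (~%lu MB per batch)\n",
		       batch_capacity(capped),
		       batch_capacity(capped) * page_size >> 20);
		return 0;
	}

Under these assumptions a single batch shrinks from roughly 2046 page pointers (about 31 MB of 16K pages freed without rescheduling) to roughly 510, which is the intent behind capping the allocation alongside the added cond_resched().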