From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jianxing Wang <wangjianxing@loongson.cn>
To: peterz@infradead.org
Cc: will@kernel.org, aneesh.kumar@linux.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com, linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jianxing Wang <wangjianxing@loongson.cn>
Subject: [PATCH v2 1/1] mm/mmu_gather: limit free batch count and add schedule point in tlb_batch_pages_flush
Date: Thu, 17 Mar 2022 03:28:57 -0400
Message-Id: <20220317072857.2635262-1-wangjianxing@loongson.cn>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Freeing a large list of pages can starve the rcu_sched kthread on
non-preemptible kernels. However, free_unref_page_list() cannot call
cond_resched() itself, since it may run in interrupt or atomic context;
in particular, atomic context cannot be detected when CONFIG_PREEMPTION=n.

The TLB flush batch count depends on PAGE_SIZE, so it can grow very
large when PAGE_SIZE > 4K. Limit the free batch count to 512 pages and
add a scheduling point in tlb_batch_pages_flush().

rcu: rcu_sched kthread starved for 5359 jiffies! g454793 f0x0
RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=19
[...]
Call Trace:
  free_unref_page_list+0x19c/0x270
  release_pages+0x3cc/0x498
  tlb_flush_mmu_free+0x44/0x70
  zap_pte_range+0x450/0x738
  unmap_page_range+0x108/0x240
  unmap_vmas+0x74/0xf0
  unmap_region+0xb0/0x120
  do_munmap+0x264/0x438
  vm_munmap+0x58/0xa0
  sys_munmap+0x10/0x20
  syscall_common+0x24/0x38

Signed-off-by: Jianxing Wang <wangjianxing@loongson.cn>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
ChangeLog:
V1 -> V2: limit free batch count directly in tlb_batch_pages_flush
---
 mm/mmu_gather.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index afb7185ffdc4..a71924bd38c0 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -47,8 +47,20 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 	struct mmu_gather_batch *batch;
 
 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
-		free_pages_and_swap_cache(batch->pages, batch->nr);
-		batch->nr = 0;
+		struct page **pages = batch->pages;
+
+		do {
+			/*
+			 * limit free batch count when PAGE_SIZE > 4K
+			 */
+			unsigned int nr = min(512U, batch->nr);
+
+			free_pages_and_swap_cache(pages, nr);
+			pages += nr;
+			batch->nr -= nr;
+
+			cond_resched();
+		} while (batch->nr);
 	}
 	tlb->active = &tlb->local;
 }
-- 
2.31.1