From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH 3/8] mm: Return the address from page_mapped_in_vma()
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
CC: linux-mm@kvack.org, Naoya Horiguchi, Andrew Morton
References: <20240229212036.2160900-1-willy@infradead.org> <20240229212036.2160900-4-willy@infradead.org>
From: Miaohe Lin <linmiaohe@huawei.com>
Message-ID: <488d29bb-1c71-741f-8b06-f5aa39efa081@huawei.com>
Date: Wed, 6 Mar 2024 16:17:33 +0800
In-Reply-To: <20240229212036.2160900-4-willy@infradead.org>

On 2024/3/1 5:20, Matthew Wilcox (Oracle) wrote:
> The only user of this function calls page_address_in_vma() immediately
> after page_mapped_in_vma() calculates it and uses it to return true/false.
> Return the address instead, allowing memory-failure to skip the call
> to page_address_in_vma().
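Just to confirm my reading of the new calling convention: returning -EFAULT
through an unsigned long is safe because the caller's `addr == -EFAULT`
comparison converts -EFAULT to unsigned long the same way before comparing.
A tiny userspace demo of that conversion (illustrative only -- the names
below are made up, not from the patch):

	/* Why `addr == -EFAULT` works when addr is an unsigned long. */
	#include <errno.h>
	#include <stdio.h>

	/* Stand-in for the new page_mapped_in_vma() return convention. */
	static unsigned long fake_lookup(int mapped)
	{
		if (!mapped)
			return -EFAULT;	/* wraps to 0xfffffffffffffff2 on 64-bit */
		return 0x7f0000001000UL;	/* some mapped address */
	}

	int main(void)
	{
		unsigned long addr = fake_lookup(0);

		/* -EFAULT is converted to unsigned long before the
		 * comparison, so the sentinel round-trips exactly. */
		if (addr == -EFAULT)
			printf("not mapped\n");
		return 0;
	}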
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/rmap.h |  2 +-
>  mm/memory-failure.c  | 22 ++++++++++++++--------
>  mm/page_vma_mapped.c | 14 +++++++-------
>  3 files changed, 22 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index b7944a833668..ba027a4d9abf 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -698,7 +698,7 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
>  
>  void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
>  
> -int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
> +unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
>  
>  /*
>   * rmap_walk_control: To control rmap traversing for specific needs
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 7f8473c08ae3..40a8964954e5 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -462,10 +462,11 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p,
>  }
>  
>  static void add_to_kill_anon_file(struct task_struct *tsk, struct page *p,
> -				  struct vm_area_struct *vma,
> -				  struct list_head *to_kill)
> +		struct vm_area_struct *vma, struct list_head *to_kill,
> +		unsigned long addr)
>  {
> -	unsigned long addr = page_address_in_vma(p, vma);
> +	if (addr == -EFAULT)
> +		return;
>  	__add_to_kill(tsk, p, vma, to_kill, addr);
>  }
>  
> @@ -609,12 +610,13 @@ static void collect_procs_anon(struct folio *folio, struct page *page,
>  			continue;
>  		anon_vma_interval_tree_foreach(vmac, &av->rb_root,
>  					       pgoff, pgoff) {
> +			unsigned long addr;
> +
>  			vma = vmac->vma;
>  			if (vma->vm_mm != t->mm)
>  				continue;
> -			if (!page_mapped_in_vma(page, vma))
> -				continue;
> -			add_to_kill_anon_file(t, page, vma, to_kill);
> +			addr = page_mapped_in_vma(page, vma);
> +			add_to_kill_anon_file(t, page, vma, to_kill, addr);
>  		}
>  	}
>  	rcu_read_unlock();
> @@ -642,6 +644,8 @@ static void collect_procs_file(struct folio *folio, struct page *page,
>  			continue;
>  		vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff,
>  				      pgoff) {
> +			unsigned long addr;
> +
>  			/*
>  			 * Send early kill signal to tasks where a vma covers
>  			 * the page but the corrupted page is not necessarily
> @@ -649,8 +653,10 @@ static void collect_procs_file(struct folio *folio, struct page *page,
>  			 * Assume applications who requested early kill want
>  			 * to be informed of all such data corruptions.
>  			 */
> -			if (vma->vm_mm == t->mm)
> -				add_to_kill_anon_file(t, page, vma, to_kill);
> +			if (vma->vm_mm != t->mm)
> +				continue;
> +			addr = page_address_in_vma(page, vma);
> +			add_to_kill_anon_file(t, page, vma, to_kill, addr);
>  		}
>  	}
>  	rcu_read_unlock();
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 74d2de15fb5e..e9e208b4ac4b 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -319,11 +319,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
>   * @page: the page to test
>   * @vma: the VMA to test
>   *
> - * Returns 1 if the page is mapped into the page tables of the VMA, 0
> - * if the page is not mapped into the page tables of this VMA.  Only
> - * valid for normal file or anonymous VMAs.
> + * Return: If the page is mapped into the page tables of the VMA, the
> + * address that the page is mapped at.  -EFAULT if the page is not mapped.
> + * Only valid for normal file or anonymous VMAs.
>   */
> -int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
> +unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
>  {
>  	struct page_vma_mapped_walk pvmw = {
>  		.pfn = page_to_pfn(page),
> @@ -334,9 +334,9 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
>  
>  	pvmw.address = vma_address(page, vma);
>  	if (pvmw.address == -EFAULT)
> -		return 0;
> +		return -EFAULT;
>  	if (!page_vma_mapped_walk(&pvmw))
> -		return 0;
> +		return -EFAULT;
>  	page_vma_mapped_walk_done(&pvmw);
> -	return 1;
> +	return pvmw.address;
>  }

page_mapped_in_vma() is only called by collect_procs_anon(). Would it be
better to declare it under CONFIG_MEMORY_FAILURE? (A rough sketch of what I
mean is at the end of this mail.)

Anyway, this patch looks good to me. Thanks.

Acked-by: Miaohe Lin <linmiaohe@huawei.com>
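For reference, the CONFIG_MEMORY_FAILURE guard I was thinking of -- an
untested sketch against include/linux/rmap.h. The !CONFIG_MEMORY_FAILURE
stub is my own invention, not something this patch proposes:

	#ifdef CONFIG_MEMORY_FAILURE
	/* Only memory-failure's collect_procs_anon() needs this walk. */
	unsigned long page_mapped_in_vma(struct page *page,
					 struct vm_area_struct *vma);
	#else
	static inline unsigned long page_mapped_in_vma(struct page *page,
						       struct vm_area_struct *vma)
	{
		return -EFAULT;	/* treated as "not mapped" by callers */
	}
	#endif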