Date: Tue, 23 May 2023 20:20:41 -0700 (PDT)
From: Hugh Dickins
To: Qi Zheng
Cc: Hugh Dickins, Andrew Morton, Mike Kravetz, Mike Rapoport, "Kirill A. Shutemov", Matthew Wilcox, David Hildenbrand, Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman, Peter Xu, Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple, Ralph Campbell, Ira Weiny, Steven Price, SeongJae Park, Naoya Horiguchi, Christophe Leroy, Zack Rusin, Jason Gunthorpe, Axel Rasmussen, Anshuman Khandual, Pasha Tatashin, Miaohe Lin, Minchan Kim, Christoph Hellwig, Song Liu, Thomas Hellstrom, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 17/31] mm/various: give up if pte_offset_map[_lock]() fails
In-Reply-To: <07317766-c901-34a9-360a-e916db4b9045@linux.dev>
Message-ID: <4df7a2a5-e2b9-6c9e-9bbf-27e1dbdace8@google.com>
References: <68a97fbe-5c1e-7ac6-72c-7b9c6290b370@google.com> <07317766-c901-34a9-360a-e916db4b9045@linux.dev>
On Mon, 22 May 2023, Qi Zheng wrote:
> On 2023/5/22 20:24, Qi Zheng wrote:
> > On 2023/5/22 13:10, Hugh Dickins wrote:
> >> Following the examples of nearby code, various functions can just give
> >> up if pte_offset_map() or pte_offset_map_lock() fails.  And there's no
> >> need for a preliminary pmd_trans_unstable() or other such check, since
> >> such cases are now safely handled inside.
> >>
> >> Signed-off-by: Hugh Dickins
> >> ---
> >>  mm/gup.c            | 9 ++++++---
> >>  mm/ksm.c            | 7 ++++---
> >>  mm/memcontrol.c     | 8 ++++----
> >>  mm/memory-failure.c | 8 +++++---
> >>  mm/migrate.c        | 3 +++
> >>  mm/swap_state.c     | 3 +++
> >>  6 files changed, 25 insertions(+), 13 deletions(-)
> >>
> >
> > [...]
> >
> >> diff --git a/mm/migrate.c b/mm/migrate.c
> >> index 3ecb7a40075f..308a56f0b156 100644
> >> --- a/mm/migrate.c
> >> +++ b/mm/migrate.c
> >> @@ -305,6 +305,9 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
> >>      swp_entry_t entry;
> >>      ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
> >> +    if (!ptep)
> >> +        return;
> >
> > Maybe we should return false and let the caller handle the failure.

We have not needed to do that before, it's normal for migration_entry_wait()
not to wait sometimes: it just goes back out to userspace to try again (by
which time the situation is usually resolved).  I don't think we want to
trouble the callers with a new case to handle in some other way.
> >
> >> +
> >>      pte = *ptep;
> >>      pte_unmap(ptep);
> >> diff --git a/mm/swap_state.c b/mm/swap_state.c
> >> index b76a65ac28b3..db2ec85ef332 100644
> >> --- a/mm/swap_state.c
> >> +++ b/mm/swap_state.c
> >> @@ -734,6 +734,9 @@ static void swap_ra_info(struct vm_fault *vmf,
> >>      /* Copy the PTEs because the page table may be unmapped */
> >>      orig_pte = pte = pte_offset_map(vmf->pmd, faddr);
> >> +    if (!pte)
> >> +        return;
> >
> > Ditto?
>
> Oh, I see that you handle it in the PATCH[22/31].

I don't think 22/31 (about swapoff "unuse") relates to this one.

Here swap_vma_readahead() is doing an interesting calculation for how big
the readaround window should be, and my thinking was, who cares? just read
1, in the rare case that the page table vanishes underneath us.

But thank you for making me look again: it looks like I was not careful
enough before, ra_info->win is definitely *not* 1 on this line, and I
wonder if something bad might result from not following through on the
ensuing calculations - see how !CONFIG_64BIT is copying ptes (and that
implies CONFIG_64BIT is accessing the page table after pte_unmap()).
This swap_ra_info() code looks like it will need a patch all its own:
I must come back to it.

Thanks!
Hugh