Date: Thu, 6 Jul 2023 17:02:13 +0200
From: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
To: Hugh Dickins
Cc: Jason Gunthorpe, Andrew Morton, Vasily Gorbik, Mike Kravetz,
 Mike Rapoport, "Kirill A. Shutemov", Matthew Wilcox, David Hildenbrand,
 Suren Baghdasaryan, Qi Zheng, Yang Shi, Mel Gorman, Peter Xu,
 Peter Zijlstra, Will Deacon, Yu Zhao, Alistair Popple, Ralph Campbell,
 Ira Weiny, Steven Price, SeongJae Park, Lorenzo Stoakes, Huang Ying,
 Naoya Horiguchi, Christophe Leroy, Zack Rusin, Axel Rasmussen,
 Anshuman Khandual, Pasha Tatashin, Miaohe Lin, Minchan Kim,
 Christoph Hellwig, Song Liu, Thomas Hellstrom, Russell King,
Miller" , Michael Ellerman , "Aneesh Kumar K.V" , Heiko Carstens , Christian Borntraeger , Claudio Imbrenda , Alexander Gordeev , Jann Horn , Vishal Moola , Vlastimil Babka , linux-arm-kernel@lists.infradead.org, sparclinux@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org Subject: Re: [PATCH v2 07/12] s390: add pte_free_defer() for pgtables sharing page Message-ID: <20230706170213.2ea63606@thinkpad-T15> In-Reply-To: <1014735-ecc4-b4bc-3ae7-48a4328ed149@google.com> References: <54cb04f-3762-987f-8294-91dafd8ebfb0@google.com> <20230628211624.531cdc58@thinkpad-T15> <20230629175645.7654d0a8@thinkpad-T15> <7bef5695-fa4a-7215-7e9d-d4a83161c7ab@google.com> <20230704171905.1263478f@thinkpad-T15> <20230705145516.7d9d554d@thinkpad-T15> <1014735-ecc4-b4bc-3ae7-48a4328ed149@google.com> X-Mailer: Claws Mail 4.1.1 (GTK 3.24.38; x86_64-redhat-linux-gnu) MIME-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit X-TM-AS-GCONF: 00 X-Proofpoint-ORIG-GUID: QGkni06Tvema7LXh8NlVshLGX4mkBNVW X-Proofpoint-GUID: 5jal2qspqbLQazmPwsOCxzoLdtbfP85A X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.254,Aquarius:18.0.957,Hydra:6.0.591,FMLib:17.11.176.26 definitions=2023-07-06_11,2023-07-06_02,2023-05-22_02 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 spamscore=0 lowpriorityscore=0 adultscore=0 phishscore=0 bulkscore=0 malwarescore=0 mlxlogscore=999 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2305260000 definitions=main-2307060130 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 2035810043C X-Stat-Signature: ok5msth8mxmpwe7jhfcyk9y4zacwbzre X-Rspam-User: X-HE-Tag: 1688655807-273696 X-HE-Meta: U2FsdGVkX1/UZTFbultbjkDZ/S58LnuBuZ+3iZEt8v05Du4Kzd5g9fIEoiTwbID/9FhEn19zGQiUn3yu3lc437tSzHd+Y5RFa6aBQyRXC4+2wzShXeXCIhnlo1lpVg8+9Re1myLAKkwscQ+jI4RJ0yuPepYIVJVNOTlhPGCaMRTtzS3e2/EXbIPBJ9CHKuo27k7uMV32Vx9oiHTeDJpBdmhMSuc8K3K/rR/0qR/0It2HChOD+r3lzvvJeYvN4DRRgwAzlOveNo3itpHJatNw65Yc5EZCVwyZhJUy2cu0IqZI0KiuZQsbOewU9vUs4f4sMa1ApHP0bTylgxUyrQqFHuzMtelgRzRC0Llra157KOT4GtgFP+XWmO2v3/dCmLTEp+GS8LRnGItXDloHbuLdI8iYq1OK+x/VoKlSPAvOaLuYim+Jb+F9ZUYDYXd3qI1NkZTtw1vBbpDs1UF2fkYuRagB2LWh56Xl1ojNlVhZIKMTEpx5WzFGqrCBWpMF2cnACN/ZuAoeOiQU7IW7ZQZeTCGQddYPGxyLiVc4rMh6O+CTYbmMCcDsHcIgJnPEZijPzk30e/8tTqVDqmSRSr6ClmVgDK9LctvdedJ/nF6dyKV+vsu0myB1mgdgZY3Sd7N2N3y+jUuNbPuXqXZslPYF1aajHHen9vW9Y1TiilIjY2m025rN4MzO+dpln+qSH+puRFmNXF2aQNiuY/spoDE7sejVDdm/SgZ5B7Zik8IhiW5yc8d5prJ7nO20J+TsDNeLh+9R6n5DHlBVSoy7EiQa3kctoLUZk7FG5aYGNBOJOhC9OiSpPhaMBW6jNrdSA6KH7HfSKlMC/w9AghpxDuj/GS2Tvxo1JJhklmREc4sR1bdK/gYMSiRVklqvsAIzVthkHbcPO/Ce8+XrmyZht6700WVE2x4+ZCbK+ae9RuG2u57lEidVPdPDswKw+ztN9NfZGuBdGWJ1wkm6lQkYzTr GHqFi5h6 S+ieCv50RifgzkSbKXIfafyERyX1PxMqeMR8PltdTjqbMvYQf0hLqbuzARPtHmHVhJje8hO7AkNiSjIK4zV4mbMA9MyUntfhItEW3wGMlrkCc9/mPHQJ49HrGMt4IaRnIaBKNBBzSgYcFYlwecauzQCm/Iv1fxQrHg0RvgktjnumWcEwGUMyElNm+5ZvAdQge6edArPfWXcgjK/R+38KtQ84hxTVD+NQLYPdyRqkzVXCvHZZrXMFvDtyf3HoZ/gSmyMbXfKk/f56XBm9L7zpyrp3iKErfqRcCVfMYZfT+xuihlacSHUJgd35VUtMse70Yati/WHJnM/oIxY8= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Wed, 5 Jul 2023 18:20:21 -0700 (PDT) Hugh Dickins wrote: > On Wed, 5 Jul 2023, Gerald Schaefer wrote: > > On Tue, 4 Jul 2023 10:03:57 -0700 (PDT) > > Hugh Dickins wrote: > > > On Tue, 4 Jul 2023, Gerald Schaefer wrote: > > > > On Sat, 1 Jul 2023 21:32:38 -0700 (PDT) > > > > Hugh 
> > > >
> > > > Hmm, I think page_table_free_rcu() has to guarantee the same, i.e. not
> > > > allow reuse before grace period elapsed. And I hope that it does so, by
> > > > setting the PP bits, which would be noticed in page_table_alloc(), in
> > > > case the page would be seen there.
> > > >
> > > > Unlike pte_free_defer(), page_table_free_rcu() would add pages back to the
> > > > end of the list, and so they could be seen in page_table_alloc(), but they
> > > > should not be reused before grace period elapsed and __tlb_remove_table()
> > > > cleared the PP bits, as far as I understand.
> > > >
> > > > So what exactly do you mean with "which page_table_free_rcu() does not
> > > > actually guarantee"?
> > >
> > > I'll answer without locating and re-reading what Jason explained earlier,
> > > perhaps in a separate thread, about pseudo-RCU-ness in tlb_remove_table():
> > > he may have explained it better. And without working out again all the
> > > MMU_GATHER #defines, and which of them do and do not apply to s390 here.
> > >
> > > The detail that sticks in my mind is the fallback in tlb_remove_table()
> >
> > Ah ok, I was aware of that "semi-RCU" fallback logic in tlb_remove_table(),
> > but that is rather a generic issue, and not s390-specific.
>
> Yes.
>
> > I thought you meant some s390-oddity here, of which we have a lot,
> > unfortunately... Of course, we call tlb_remove_table() from our
> > page_table_free_rcu(), so I guess you could say that page_table_free_rcu()
> > cannot guarantee what tlb_remove_table() cannot guarantee.
> >
> > Maybe change to "which page_table_free_rcu() does not actually guarantee,
> > by calling tlb_remove_table()", to make it clear that this is not a problem
> > of page_table_free_rcu() itself.
>
> Okay - I'll rephrase slightly to avoid being sued by s390's lawyers :-)
>
> > > in mm/mmu_gather.c: if its __get_free_page(GFP_NOWAIT) fails, it cannot
> > > batch the tables for freeing by RCU, and resorts instead to an immediate
> > > TLB flush (I think: that again involves chasing definitions) followed by
> > > tlb_remove_table_sync_one() - which just delivers an interrupt to each CPU,
> > > and is commented:
> > >
> > > /*
> > >  * This isn't an RCU grace period and hence the page-tables cannot be
> > >  * assumed to be actually RCU-freed.
> > >  *
> > >  * It is however sufficient for software page-table walkers that rely on
> > >  * IRQ disabling.
> > >  */
> > >
> > > Whether that's good for your PP pages or not, I've given no thought:
> > > I've just taken it on trust that what s390 has working today is good.
> >
> > Yes, we should be fine with that, current code can be trusted :-)
>
> Glad to hear it :-) Yes, I think it's not actually relying on the "rcu"
> implied by the function name.

Ah ok, now I get it. Never noticed that naming it "free_rcu" could be
misleading. It is only ever called from pte_free_tlb(), so always in that
"semi-RCU" context. If you just look at the name, you could expect this
to always free pagetables by RCU, which would be exactly what you need
for pte_free_defer(), and which of course cannot be guaranteed by our
page_table_free_rcu().

IOW, exactly what your comment says, and now I think it is actually fine
as it is :-)

I guess I am a bit lamebrained this week, due to early shift and not
enough sleep...
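For reference, the fallback we are talking about looks roughly like
this - paraphrased from mm/mmu_gather.c as I remember it for this kernel
version, so treat it as a sketch of the logic rather than an
authoritative quote:

void tlb_remove_table(struct mmu_gather *tlb, void *table)
{
	struct mmu_table_batch **batch = &tlb->batch;

	if (*batch == NULL) {
		/* Try to allocate a batch page, to queue tables for RCU */
		*batch = (struct mmu_table_batch *)
			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
		if (*batch == NULL) {
			/*
			 * Semi-RCU fallback: no memory for batching, so
			 * flush and IPI all CPUs (tlb_remove_table_one()
			 * ends up in tlb_remove_table_sync_one()) before
			 * freeing the table page directly - no RCU grace
			 * period is involved on this path.
			 */
			tlb_table_invalidate(tlb);
			tlb_remove_table_one(table);
			return;
		}
		(*batch)->nr = 0;
	}

	(*batch)->tables[(*batch)->nr++] = table;
	if ((*batch)->nr == MAX_TABLE_BATCH)
		tlb_table_flush(tlb);
}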
> > >
> > > If that __get_free_page(GFP_NOWAIT) fallback instead used call_rcu(),
> > > then I would not have written "(which page_table_free_rcu() does not
> > > actually guarantee)". But it cannot use call_rcu() because it does
> > > not have an rcu_head to work with - it's in some generic code, and
> > > there is no MMU_GATHER_CAN_USE_PAGE_RCU_HEAD for architectures to set.
> > >
> > > And Jason would have much preferred us to address the issue from that
> > > angle; but not only would doing so destroy my sanity, I'd also destroy
> > > 20 architectures' TLB-flushing, unbuilt and untested, in the attempt.
> >
> > Oh yes, if your changes would have allowed to get rid of that "semi RCU"
> > logic, that would really be a major boost in popularity, I guess. But
> > it probably is as it is, because it is not so easily fixed...
>
> I'm hoping that this series might help stir someone else to get into that.
>
> > > ...
> > > > > @@ -325,10 +346,17 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
> > > > >  	 */
> > > > >  	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
> > > > >  	mask >>= 24;
> > > > > -	if (mask & 0x03U)
> > > > > +	if ((mask & 0x03U) && !PageActive(page)) {
> > > > > +		/*
> > > > > +		 * Other half is allocated, and neither half has had
> > > > > +		 * its free deferred: add page to head of list, to make
> > > > > +		 * this freed half available for immediate reuse.
> > > > > +		 */
> > > > >  		list_add(&page->lru, &mm->context.pgtable_list);
> > > > > -	else
> > > > > -		list_del(&page->lru);
> > > > > +	} else {
> > > > > +		/* If page is on list, now remove it. */
> > > > > +		list_del_init(&page->lru);
> > > > > +	}
> > > >
> > > > Ok, we might end up with some unnecessary list_del_init() here, e.g. if
> > > > other half is still allocated, when called from pte_free_defer() on a
> > > > fully allocated page, which was not on the list (and with PageActive, and
> > > > (mask & 0x03U) true).
> > > > Not sure if adding an additional mask check to the else path would be
> > > > needed, but it seems that list_del_init() should also be able to handle
> > > > this.
> > >
> > > list_del_init() is very cheap in the unnecessary case: the cachelines
> > > required are already there. You don't want a flag to say whether to
> > > call it or not, it is already the efficient approach.
> >
> > Yes, I also see no functional issue here. Just thought that the extra
> > write could be avoided, e.g. by checking for list_empty() or mask first.
> > But I guess that is simply the benefit of list_del_init(), that you
> > don't have to check, at least if it is guaranteed that rcu_head is
> > never in use here.
> >
> > Then maybe adjust the comment, because now it makes you wonder, when
> > you read (and understand) the code, you see that this list_del_init()
> > might also be called for pages not on the list.
>
> Sorry, I don't understand what clarification you're asking for there.
> I thought
> 			/* If page is on list, now remove it. */
> 			list_del_init(&page->lru);
> was good enough comment.
>
> (I certainly don't want to enumerate the cases when it is or is not
> already on the list there, that would be misery; but I don't think
> that's the adjustment you were asking for either.)

I was misled by the comment saying "If page is on list", in an else path
where we also end up for pages not on the list any more. I guess I would
have added something like "it is also ok to do list_del_init() here for
pages not on the list". But thinking again, that would probably just be
a reminder of how list_del_init() works, which should be obvious anyway,
at least for people with enough sleep.
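To spell it out anyway, for anyone with even less sleep: list_del_init()
leaves the entry pointing at itself, so calling it again on an entry that
has already been taken off the list just rewrites the same two pointers.
A sketch of the <linux/list.h> semantics, illustrative rather than the
real implementation:

static inline void list_del_init_sketch(struct list_head *entry)
{
	/* Unlink: both stores are no-ops when entry is self-linked. */
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
	/* Re-init: entry->next = entry->prev = entry, safe to repeat. */
	INIT_LIST_HEAD(entry);
}

And the cachelines it touches are the ones already loaded by the
surrounding code, which is why it is so cheap here.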
> > >
> > > (But you were right not to use it in your pt_frag_refcount version,
> > > because there we were still trying to do the call_rcu() per fragment
> > > rather than per page, so page->lru could have been on the RCU queue.)
> >
> > That is actually the one thing I still try to figure out, by drawing
> > pictures, i.e. whether we really never end up here with list_del_init()
> > while using rcu_head, e.g. by racing PageActive.
>
> There is no race with PageActive being seen when the table page is
> finally to be freed (by RCU or not). But there is definitely a harmless
> race with a pte_free_defer()er of the other half setting PageActive an
> instant after page_table_free() checked PageActive here. So maybe this
> page_table_free() does a list_add(), which the racer then list_del_init()s
> when it gets the mm->context.lock; or maybe they both list_del_init().

Agree.

Since none of my remarks on the comments seem valid or strictly necessary
any more, and I also could not find functional issues, I think you can add
this patch as the new version of 07/12. And I can now give you this:

Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>