From: "Aneesh Kumar K.V"
To: Joao Martins
Cc: Oscar Salvador, linux-mm@kvack.org, akpm@linux-foundation.org,
    Mike Kravetz, mpe@ellerman.id.au, Dan Williams, Catalin Marinas,
    npiggin@gmail.com, linuxppc-dev@lists.ozlabs.org, Muchun Song,
    Will Deacon, christophe.leroy@csgroup.eu
Subject: Re: [PATCH v2 08/16] mm/vmemmap: Improve vmemmap_can_optimize and allow architectures to override
In-Reply-To:
References: <20230616110826.344417-1-aneesh.kumar@linux.ibm.com>
    <20230616110826.344417-9-aneesh.kumar@linux.ibm.com>
Date: Tue, 20 Jun 2023 19:59:47 +0530
Message-ID: <87zg4ugqas.fsf@linux.ibm.com>

Joao Martins writes:

> On 16/06/2023 12:08, Aneesh Kumar K.V wrote:
>> dax vmemmap optimization requires a minimum of 2 PAGE_SIZE area within
>> vmemmap such that tail page mapping can point to the second PAGE_SIZE area.
>> Enforce that in vmemmap_can_optimize() function.
>>
>> Architectures like powerpc also want to enable vmemmap optimization
>> conditionally (only with radix MMU translation). Hence allow architecture
>> override.
>>
> This makes sense. The enforcing here is not just for correctness but because you
> want to use VMEMMAP_RESERVE_NR, supposedly?
>
> I would suggest having two patches, one for the refactor and another one for the
> override, but I don't feel particularly strongly about it.
>

I will wait for feedback from others. If others also suggest splitting the
patch, I will do that.

>> Signed-off-by: Aneesh Kumar K.V
>> ---
>>  include/linux/mm.h | 30 ++++++++++++++++++++++++++----
>>  mm/mm_init.c       |  2 +-
>>  2 files changed, 27 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 27ce77080c79..9a45e61cd83f 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -31,6 +31,8 @@
>>  #include
>>  #include
>>
>> +#include
>> +
>
> Why is this include needed?

That was for PAGE_SHIFT. But then we do pull that in through other include
dependencies. I will see if I can drop that.
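
As a rough illustration of the "minimum of 2 PAGE_SIZE area" requirement quoted
above, here is a standalone sketch of the PAGE_SHIFT-based arithmetic (not
kernel code; the 4K PAGE_SIZE, the 64-byte struct page and the SKETCH_* names
are assumptions made only for this example):

#include <stdio.h>

#define SKETCH_PAGE_SHIFT	12UL	/* assumed 4K PAGE_SIZE */
#define SKETCH_STRUCT_PAGE	64UL	/* assumed sizeof(struct page) */
#define SKETCH_RESERVE_NR	2UL	/* VMEMMAP_RESERVE_NR from the patch */

int main(void)
{
	/* A 2M DAX compound page covers 512 base pages. */
	unsigned long nr_pages = 512;
	/* Same calculation as __vmemmap_can_optimize() in the hunk below. */
	unsigned long nr_vmemmap_pages =
		(nr_pages * SKETCH_STRUCT_PAGE) >> SKETCH_PAGE_SHIFT;

	/*
	 * 512 * 64 / 4096 = 8 vmemmap pages > 2, so the optimization applies;
	 * a 64K compound page (16 base pages) does not even fill one vmemmap
	 * page and is rejected by the same check.
	 */
	printf("vmemmap pages: %lu, can optimize: %d\n",
	       nr_vmemmap_pages, nr_vmemmap_pages > SKETCH_RESERVE_NR);
	return 0;
}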
>
>>  struct mempolicy;
>>  struct anon_vma;
>>  struct anon_vma_chain;
>> @@ -3550,13 +3552,33 @@ void vmemmap_free(unsigned long start, unsigned long end,
>>  		struct vmem_altmap *altmap);
>>  #endif
>>
>> +#define VMEMMAP_RESERVE_NR	2
>
> see below
>
>>  #ifdef CONFIG_ARCH_WANT_OPTIMIZE_VMEMMAP
>> -static inline bool vmemmap_can_optimize(struct vmem_altmap *altmap,
>> -					struct dev_pagemap *pgmap)
>> +static inline bool __vmemmap_can_optimize(struct vmem_altmap *altmap,
>> +					  struct dev_pagemap *pgmap)
>>  {
>> -	return is_power_of_2(sizeof(struct page)) &&
>> -		pgmap && (pgmap_vmemmap_nr(pgmap) > 1) && !altmap;
>> +	if (pgmap) {
>> +		unsigned long nr_pages;
>> +		unsigned long nr_vmemmap_pages;
>> +
>> +		nr_pages = pgmap_vmemmap_nr(pgmap);
>> +		nr_vmemmap_pages = ((nr_pages * sizeof(struct page)) >> PAGE_SHIFT);
>> +		/*
>> +		 * For vmemmap optimization with DAX we need minimum 2 vmemmap
>
>
>> +		 * pages. See layout diagram in Documentation/mm/vmemmap_dedup.rst
>> +		 */
>> +		return is_power_of_2(sizeof(struct page)) &&
>> +			(nr_vmemmap_pages > VMEMMAP_RESERVE_NR) && !altmap;
>> +	}
>
> It would be more readable (i.e. less indentation) if you just reverse this:
>
> 	unsigned long nr_vmemmap_pages;
>
> 	if (!pgmap || !is_power_of_2(sizeof(struct page)))
> 		return false;
>
> 	nr_vmemmap_pages = ((pgmap_vmemmap_nr(pgmap) *
> 			     sizeof(struct page)) >> PAGE_SHIFT);
>
> 	/*
> 	 * For vmemmap optimization with DAX we need minimum 2 vmemmap
> 	 * pages. See layout diagram in Documentation/mm/vmemmap_dedup.rst
> 	 */
> 	return (nr_vmemmap_pages > VMEMMAP_RESERVE_NR) && !altmap;
>

Will update.

>> +	return false;
>>  }
>> +/*
>> + * If we don't have an architecture override, use the generic rule
>> + */
>> +#ifndef vmemmap_can_optimize
>> +#define vmemmap_can_optimize __vmemmap_can_optimize
>> +#endif
>> +
>
> sparse-vmemmap code is trivial to change to dedup to a single vmemmap page
> (e.g. to align with hugetlb), hopefully something the architecture override
> allows one to do. This is to say: should VMEMMAP_RESERVE_NR have a similar
> override to the above?

VMEMMAP_RESERVE_NR was added to avoid the usage of `2` in the code below.
The reason we need the arch override was to add an additional check on ppc64,
as shown by patch
https://lore.kernel.org/linux-mm/20230616110826.344417-16-aneesh.kumar@linux.ibm.com/

bool vmemmap_can_optimize(struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
{
	if (radix_enabled())
		return __vmemmap_can_optimize(altmap, pgmap);
	return false;
}

>
>>  #else
>>  static inline bool vmemmap_can_optimize(struct vmem_altmap *altmap,
>>  					struct dev_pagemap *pgmap)
>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>> index 7f7f9c677854..d1676afc94f1 100644
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -1020,7 +1020,7 @@ static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
>>  	if (!vmemmap_can_optimize(altmap, pgmap))
>>  		return pgmap_vmemmap_nr(pgmap);
>>
>> -	return 2 * (PAGE_SIZE / sizeof(struct page));
>> +	return VMEMMAP_RESERVE_NR * (PAGE_SIZE / sizeof(struct page));
>>  }
>>
>>  static void __ref memmap_init_compound(struct page *head,
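
To make the `#ifndef vmemmap_can_optimize` fallback above concrete, here is a
minimal sketch of the header-side hook an architecture would add. This is not
the actual ppc64 change (that lives in the patch linked above); the placement
and the forward declarations are assumptions made for illustration only:

/*
 * Sketch only: in an arch header that <linux/mm.h> pulls in before its
 * #ifndef fallback, declare the arch helper and define the macro so the
 * generic __vmemmap_can_optimize() is not installed as vmemmap_can_optimize.
 */
struct vmem_altmap;
struct dev_pagemap;

bool vmemmap_can_optimize(struct vmem_altmap *altmap,
			  struct dev_pagemap *pgmap);
#define vmemmap_can_optimize vmemmap_can_optimize

/*
 * The architecture then supplies the out-of-line definition, e.g. the
 * radix_enabled() variant quoted earlier, which only falls through to
 * __vmemmap_can_optimize() when the radix MMU is active.
 */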