From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 23 Mar 2024 13:39:34 -0400
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: SeongJae Park, Andrew Morton, Jason Gunthorpe, Mike Rapoport, Matthew Wilcox, kernel test robot
Subject: Re: [PATCH] mm/arch: Provide pud_pfn() fallback
In-Reply-To: <20240323151643.1047281-1-peterx@redhat.com>
References: <20240323151643.1047281-1-peterx@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

On Sat, Mar 23, 2024 at 11:16:43AM -0400, peterx@redhat.com wrote:
> From: Peter Xu <peterx@redhat.com>
>
> The comment in the code explains the reasons.  We took a different
> approach compared to pmd_pfn() by providing a fallback function.
>
> Another option is to provide some lower-level config options (compared
> to HUGETLB_PAGE or THP) to identify which layer an arch can support for
> such huge mappings.  However, that could be overkill.
>
> Cc: Mike Rapoport (IBM)
> Cc: Matthew Wilcox
> Reported-by: kernel test robot
> Closes: https://lore.kernel.org/oe-kbuild-all/202403231529.HRev1zcD-lkp@intel.com/

Also:

Closes: https://lore.kernel.org/oe-kbuild-all/202403240112.kHKVSfCL-lkp@intel.com/

> Signed-off-by: Peter Xu
> ---
>
> Andrew,
>
> If we care about per-commit build errors (and if it is ever feasible to
> reorder), we can move this patch to be before the patch "mm/gup: handle
> huge pud for follow_pud_mask()" in mm-unstable to unbreak build on that
> commit.
>
> Thanks,
> ---
>  arch/riscv/include/asm/pgtable.h    |  1 +
>  arch/s390/include/asm/pgtable.h     |  1 +
>  arch/sparc/include/asm/pgtable_64.h |  1 +
>  arch/x86/include/asm/pgtable.h      |  1 +
>  include/linux/pgtable.h             | 10 ++++++++++
>  5 files changed, 14 insertions(+)
>
> diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
> index 20242402fc11..0ca28cc8e3fa 100644
> --- a/arch/riscv/include/asm/pgtable.h
> +++ b/arch/riscv/include/asm/pgtable.h
> @@ -646,6 +646,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>
>  #define __pud_to_phys(pud) (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
>
> +#define pud_pfn pud_pfn
>  static inline unsigned long pud_pfn(pud_t pud)
>  {
>  	return ((__pud_to_phys(pud) & PUD_MASK) >> PAGE_SHIFT);
> diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
> index 1a71cb19c089..6cbbe473f680 100644
> --- a/arch/s390/include/asm/pgtable.h
> +++ b/arch/s390/include/asm/pgtable.h
> @@ -1414,6 +1414,7 @@ static inline unsigned long pud_deref(pud_t pud)
>  	return (unsigned long)__va(pud_val(pud) & origin_mask);
>  }
>
> +#define pud_pfn pud_pfn
>  static inline unsigned long pud_pfn(pud_t pud)
>  {
>  	return __pa(pud_deref(pud)) >> PAGE_SHIFT;
> diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
> index 4d1bafaba942..26efc9bb644a 100644
> --- a/arch/sparc/include/asm/pgtable_64.h
> +++ b/arch/sparc/include/asm/pgtable_64.h
> @@ -875,6 +875,7 @@ static inline bool pud_leaf(pud_t pud)
>  	return pte_val(pte) & _PAGE_PMD_HUGE;
>  }
>
> +#define pud_pfn pud_pfn
>  static inline unsigned long pud_pfn(pud_t pud)
>  {
>  	pte_t pte = __pte(pud_val(pud));
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index cefc7a84f7a4..273f7557218c 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -234,6 +234,7 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>  	return (pfn & pmd_pfn_mask(pmd)) >> PAGE_SHIFT;
>  }
>
> +#define pud_pfn pud_pfn
>  static inline unsigned long pud_pfn(pud_t pud)
>  {
>  	phys_addr_t pfn = pud_val(pud);
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 2a1c044ae467..deae9e50f1a8 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1817,6 +1817,16 @@ typedef unsigned int pgtbl_mod_mask;
>  #define pte_leaf_size(x) PAGE_SIZE
>  #endif
>
> +/*
> + * We always define pmd_pfn for all archs as it's used in lots of generic
> + * code.  Now it happens too for pud_pfn (and can happen for larger
> + * mappings too in the future; we're not there yet).  Instead of defining
> + * it for all archs (like pmd_pfn), provide a fallback.
> + */
> +#ifndef pud_pfn
> +#define pud_pfn(x)	({ BUILD_BUG(); 0; })
> +#endif
> +
> /*
>  * Some architectures have MMUs that are configurable or selectable at boot
>  * time. These lead to variable PTRS_PER_x. For statically allocated arrays it
> --
> 2.44.0
>

--
Peter Xu