From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <87769ae8-b6c6-4454-925d-1864364af9c8@huawei.com>
Date: Wed, 31 Jul 2024 17:59:23 +0800
Subject: Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Baolin Wang, Barry Song <21cnbao@gmail.com>
References: <117121665254442c3c7f585248296495e5e2b45c.1722404078.git.baolin.wang@linux.alibaba.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 8bit

On 2024/7/31 16:56, Baolin Wang wrote:
>
>
> On 2024/7/31 14:18, Barry Song wrote:
>> On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang wrote:
>>>
>>> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page cache
>>> if needed"), ARM64 can support 512MB PMD-sized THP when the base page size is
>>> 64KB, which is larger than the maximum supported page cache size
>>> MAX_PAGECACHE_ORDER. This is not expected. To fix this issue, use
>>> THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter allowable huge orders.
>>>
>>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>>> Signed-off-by: Baolin Wang
>>
>> Reviewed-by: Barry Song
>
> Thanks for reviewing.
>
>>
>>> ---
>>>   mm/shmem.c | 4 ++--
>>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>> index 2faa9daaf54b..a4332a97558c 100644
>>> --- a/mm/shmem.c
>>> +++ b/mm/shmem.c
>>> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>>          unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>>          unsigned long vm_flags = vma->vm_flags;
>>>          /*
>>> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
>>> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>>>           * are enabled for this vma.
>>
>> Nit:
>> THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
>> I feel we don't need this comment?
>
> Sure.
>
> Andrew, please help to squash the following changes into this patch.
> Thanks.

Maybe drop unsigned long orders too?

diff --git a/mm/shmem.c b/mm/shmem.c
index 6af95f595d6f..8485eb6f2ec4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1638,11 +1638,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
         unsigned long mask = READ_ONCE(huge_shmem_orders_always);
         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
         unsigned long vm_flags = vma ? vma->vm_flags : 0;
-        /*
-         * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
-         * are enabled for this vma.
-         */
-        unsigned long orders = BIT(PMD_ORDER + 1) - 1;
         bool global_huge;
         loff_t i_size;
         int order;
@@ -1698,7 +1693,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,

         if (global_huge)
                 mask |= READ_ONCE(huge_shmem_orders_inherit);

-        return orders & mask;
+        return THP_ORDERS_ALL_FILE_DEFAULT & mask;
 }

>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 6e9836b1bd1d..432faec21547 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1629,10 +1629,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>         unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>         unsigned long vm_flags = vma->vm_flags;
> -       /*
> -        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
> -        * are enabled for this vma.
> -        */
>         unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
>         loff_t i_size;
>         int order;
>