From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 31 Jul 2024 18:22:17 +0800
Subject: Re: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Kefeng Wang, Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, hughd@google.com, willy@infradead.org, david@redhat.com, ryan.roberts@arm.com, ziy@nvidia.com, gshan@redhat.com, ioworker0@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <117121665254442c3c7f585248296495e5e2b45c.1722404078.git.baolin.wang@linux.alibaba.com> <87769ae8-b6c6-4454-925d-1864364af9c8@huawei.com>
In-Reply-To: <87769ae8-b6c6-4454-925d-1864364af9c8@huawei.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 2024/7/31 17:59, Kefeng Wang wrote:
>
> On 2024/7/31 16:56, Baolin Wang wrote:
>>
>> On 2024/7/31 14:18, Barry Song wrote:
>>> On Wed, Jul 31, 2024 at 1:46 PM Baolin Wang wrote:
>>>>
>>>> Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size
>>>> page cache if needed"), ARM64 can support 512MB PMD-sized THP when
>>>> the base page size is 64KB, which is larger than the maximum
>>>> supported page cache size MAX_PAGECACHE_ORDER. This is not
>>>> expected. To fix this issue, use THP_ORDERS_ALL_FILE_DEFAULT for
>>>> shmem to filter allowable huge orders.
>>>>
>>>> Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
>>>> Signed-off-by: Baolin Wang
>>>
>>> Reviewed-by: Barry Song
>>
>> Thanks for reviewing.
>>
>>>
>>>> ---
>>>>   mm/shmem.c | 4 ++--
>>>>   1 file changed, 2 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/mm/shmem.c b/mm/shmem.c
>>>> index 2faa9daaf54b..a4332a97558c 100644
>>>> --- a/mm/shmem.c
>>>> +++ b/mm/shmem.c
>>>> @@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>>>          unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>>>>          unsigned long vm_flags = vma->vm_flags;
>>>>          /*
>>>> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
>>>> +        * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
>>>>           * are enabled for this vma.
>>>
>>> Nit:
>>> THP_ORDERS_ALL_FILE_DEFAULT should be self-explanatory enough.
>>> I feel we don't need this comment?
>>
>> Sure.
>>
>> Andrew, please help to squash the following changes into this patch.
>> Thanks.
>
> Maybe drop unsigned long orders too?
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 6af95f595d6f..8485eb6f2ec4 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1638,11 +1638,6 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>         unsigned long mask = READ_ONCE(huge_shmem_orders_always);
>         unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
>         unsigned long vm_flags = vma ? vma->vm_flags : 0;
> -       /*
> -        * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
> -        * are enabled for this vma.
> -        */
> -       unsigned long orders = BIT(PMD_ORDER + 1) - 1;
>         bool global_huge;
>         loff_t i_size;
>         int order;
> @@ -1698,7 +1693,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>         if (global_huge)
>                 mask |= READ_ONCE(huge_shmem_orders_inherit);
>
> -       return orders & mask;
> +       return THP_ORDERS_ALL_FILE_DEFAULT & mask;
>  }

Yes. Good point. Thanks.
(Hope Andrew can help to squash these changes :))