Message-ID: <58cf63c1-25e5-4958-96cb-a9d65390ca3e@arm.com>
Date: Fri, 6 Sep 2024 09:55:57 +0100
Subject: Re: [PATCH] mm: shmem: fix khugepaged activation policy for shmem
From: Ryan Roberts
To: Baolin Wang, akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, 21cnbao@gmail.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <7c796904528e21152ba5aa639e963e0ae45b7040.1725600217.git.baolin.wang@linux.alibaba.com>
In-Reply-To: <7c796904528e21152ba5aa639e963e0ae45b7040.1725600217.git.baolin.wang@linux.alibaba.com>
On 06/09/2024 06:28, Baolin Wang wrote:
> Shmem has a separate interface (different from anonymous pages) to control
> huge page allocation, which means shmem THP can be enabled while anonymous
> THP is disabled. However, in this case, khugepaged will not start to
> collapse shmem THP, which is unreasonable.
>
> To fix this issue, we should call start_stop_khugepaged() to activate or
> deactivate the khugepaged thread when setting shmem mTHP interfaces.
> Moreover, add a new helper shmem_hpage_pmd_enabled() to help check
> whether shmem THP is enabled, which will determine if khugepaged should
> be activated.
>
> Reported-by: Ryan Roberts
> Signed-off-by: Baolin Wang
> ---
> Hi Ryan,
>
> I remember we discussed the shmem khugepaged activation issue before, but
> I haven’t seen any follow-up patches to fix it. Recently, I am trying to
> fix the shmem mTHP collapse issue in the series [1], and I also addressed
> this activation issue. Please correct me if you have a better idea. Thanks.

Thanks for sorting this - it looks like a good approach to me! Just a couple
of nits. Regardless:

Reviewed-by: Ryan Roberts

>
> [1] https://lore.kernel.org/all/cover.1724140601.git.baolin.wang@linux.alibaba.com/T/#u
> ---
>  include/linux/shmem_fs.h |  6 ++++++
>  mm/khugepaged.c          |  2 ++
>  mm/shmem.c               | 29 +++++++++++++++++++++++++--
>  3 files changed, 35 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 515a9a6a3c6f..ee6635052383 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -114,6 +114,7 @@ int shmem_unuse(unsigned int type);
>  unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  				struct vm_area_struct *vma, pgoff_t index,
>  				loff_t write_end, bool shmem_huge_force);
> +bool shmem_hpage_pmd_enabled(void);
>  #else
>  static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  				struct vm_area_struct *vma, pgoff_t index,
> @@ -121,6 +122,11 @@ static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  {
>  	return 0;
>  }
> +
> +static inline bool shmem_hpage_pmd_enabled(void)
> +{
> +	return false;
> +}
>  #endif
>
>  #ifdef CONFIG_SHMEM
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index f9c39898eaff..caf10096d4d1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -430,6 +430,8 @@ static bool hugepage_pmd_enabled(void)
>  	if (test_bit(PMD_ORDER, &huge_anon_orders_inherit) &&
>  	    hugepage_global_enabled())
>  		return true;
> +	if (shmem_hpage_pmd_enabled())
> +		return true;

nit: There is a comment at the top of this function; perhaps that could be
extended to cover shmem too?

>  	return false;
>  }
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 74f093d88c78..d7c342ae2b37 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1653,6 +1653,23 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
>  }
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +bool shmem_hpage_pmd_enabled(void)
> +{
> +	if (shmem_huge == SHMEM_HUGE_DENY)
> +		return false;
> +	if (test_bit(HPAGE_PMD_ORDER, &huge_shmem_orders_always))

question: When is it correct to use HPAGE_PMD_ORDER vs PMD_ORDER? I tend to
use PMD_ORDER (in hugepage_pmd_enabled() for example).

> +		return true;
> +	if (test_bit(HPAGE_PMD_ORDER, &huge_shmem_orders_madvise))
> +		return true;
> +	if (test_bit(HPAGE_PMD_ORDER, &huge_shmem_orders_within_size))
> +		return true;
> +	if (test_bit(HPAGE_PMD_ORDER, &huge_shmem_orders_inherit) &&
> +	    shmem_huge != SHMEM_HUGE_NEVER)
> +		return true;
> +
> +	return false;
> +}
> +
>  unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  				struct vm_area_struct *vma, pgoff_t index,
>  				loff_t write_end, bool shmem_huge_force)
> @@ -5036,7 +5053,7 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
>  		struct kobj_attribute *attr, const char *buf, size_t count)
>  {
>  	char tmp[16];
> -	int huge;
> +	int huge, err;
>
>  	if (count + 1 > sizeof(tmp))
>  		return -EINVAL;
> @@ -5060,7 +5077,9 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
>  	shmem_huge = huge;
>  	if (shmem_huge > SHMEM_HUGE_DENY)
>  		SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
> -	return count;
> +
> +	err = start_stop_khugepaged();
> +	return err ? err : count;
>  }
>
>  struct kobj_attribute shmem_enabled_attr = __ATTR_RW(shmem_enabled);
> @@ -5137,6 +5156,12 @@ static ssize_t thpsize_shmem_enabled_store(struct kobject *kobj,
>  		ret = -EINVAL;
>  	}
>
> +	if (ret > 0) {
> +		int err = start_stop_khugepaged();
> +
> +		if (err)
> +			ret = err;
> +	}
>  	return ret;
>  }
>