Subject: Re: [RFC PATCH v3 0/4] Support large folios for tmpfs
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: David Hildenbrand, Daniel Gomez, "Kirill A. Shutemov"
Cc: Matthew Wilcox, akpm@linux-foundation.org, hughd@google.com, wangkefeng.wang@huawei.com, 21cnbao@gmail.com, ryan.roberts@arm.com, ioworker0@gmail.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Date: Wed, 23 Oct 2024 16:04:19 +0800
Message-ID: <7eb412d1-f90e-4363-8c7b-072f1124f8a6@linux.alibaba.com>
In-Reply-To: <486a72c6-5877-4a95-a587-2a32faa8785d@redhat.com>
References: <6dohx7zna7x6hxzo4cwnwarep3a7rohx4qxubds3uujfb7gp3c@2xaubczl2n6d> <8e48cf24-83e1-486e-b89c-41edb7eeff3e@linux.alibaba.com> <486a72c6-5877-4a95-a587-2a32faa8785d@redhat.com>

On 2024/10/22 23:31, David Hildenbrand wrote:
> On 22.10.24 05:41, Baolin Wang wrote:
>> On 2024/10/21 21:34, Daniel Gomez wrote:
>>> On Mon Oct 21, 2024 at 10:54 AM CEST, Kirill A. Shutemov wrote:
>>>> On Mon, Oct 21, 2024 at 02:24:18PM +0800, Baolin Wang wrote:
>>>>> On 2024/10/17 19:26, Kirill A. Shutemov wrote:
>>>>>> On Thu, Oct 17, 2024 at 05:34:15PM +0800, Baolin Wang wrote:
>>>>>>> + Kirill
>>>>>>>
>>>>>>> On 2024/10/16 22:06, Matthew Wilcox wrote:
>>>>>>>> On Thu, Oct 10, 2024 at 05:58:10PM +0800, Baolin Wang wrote:
>>>>>>>>> Considering that tmpfs already has the 'huge=' option to
>>>>>>>>> control THP allocation, it is necessary to maintain
>>>>>>>>> compatibility with the 'huge=' option, as well as considering
>>>>>>>>> the 'deny' and 'force' options controlled by
>>>>>>>>> '/sys/kernel/mm/transparent_hugepage/shmem_enabled'.
>>>>>>>>
>>>>>>>> No, it's not.  No other filesystem honours these settings.
>>>>>>>> tmpfs would not have had these settings if it were written
>>>>>>>> today.  It should simply ignore them, the way that NFS ignores
>>>>>>>> the "intr" mount option now that we have a better solution to
>>>>>>>> the original problem.
>>>>>>>>
>>>>>>>> To reiterate my position:
>>>>>>>>
>>>>>>>>    - When using tmpfs as a filesystem, it should behave like
>>>>>>>>      other filesystems.
>>>>>>>>    - When using tmpfs to implement MAP_ANONYMOUS | MAP_SHARED,
>>>>>>>>      it should behave like anonymous memory.
>>>>>>>
>>>>>>> I do agree with your point to some extent, but the 'huge='
>>>>>>> option has existed for nearly 8 years, and the huge orders based
>>>>>>> on write size may not achieve the performance of PMD-sized THP
>>>>>>> in some scenarios, such as when the write length is consistently
>>>>>>> 4K. So, I am still concerned that ignoring the 'huge' option
>>>>>>> could lead to compatibility issues.
>>>>>>
>>>>>> Yeah, I don't think we are there yet to ignore the mount option.
>>>>>
>>>>> OK.
>>>>>
>>>>>> Maybe we need a new generic interface to request the semantics
>>>>>> tmpfs has with huge= on a per-inode level on any fs. Like a set
>>>>>> of FADV_* handles to make the kernel allocate PMD-sized folios on
>>>>>> any allocation, or on allocations within i_size. I think this
>>>>>> behaviour is useful beyond tmpfs.
>>>>>>
>>>>>> Then the huge= implementation for tmpfs can be re-defined to set
>>>>>> these per-inode FADV_ flags by default. This way we can keep
>>>>>> tmpfs compatible with current deployments and less special
>>>>>> compared to the rest of the filesystems on the kernel side.
>>>>>
>>>>> I did a quick search, and I didn't find any other fs that requires
>>>>> PMD-sized huge pages, so I am not sure if FADV_* is useful for
>>>>> filesystems other than tmpfs. Please correct me if I missed
>>>>> something.
>>>>
>>>> What do you mean by "require"? THPs are always opportunistic.
>>>>
>>>> IIUC, we don't have a way to hint the kernel to use huge pages for
>>>> a file on read from backing storage. Readahead is not always the
>>>> right way.
>>>>
>>>>>> If huge= is not set, tmpfs would behave the same way as the rest
>>>>>> of the filesystems.
>>>>>
>>>>> So if 'huge=' is not set, tmpfs write()/fallocate() can still
>>>>> allocate large folios based on the write size? If yes, that means
>>>>> it will change the default huge behaviour for tmpfs, because
>>>>> previously leaving 'huge=' unset meant the huge option was
>>>>> 'SHMEM_HUGE_NEVER', which is similar to what I mentioned:
>>>>> "Another possible choice is to make the huge pages allocation
>>>>> based on write size as the *default* behavior for tmpfs, ..."
>>>>
>>>> I am more worried about breaking existing users of huge pages. So
>>>> changing the behaviour of users who don't specify huge= is okay to
>>>> me.
>>>
>>> I think moving tmpfs to allocate large folios opportunistically by
>>> default (as it was proposed initially) doesn't necessarily conflict
>>> with the default behaviour (huge=never). We just need to clarify
>>> that in the documentation.
>>>
>>> However, and IIRC, one of the requests from Hugh was to have a way
>>> to disable large folios, which is something other filesystems do not
>>> have control of as of today. Ryan sent a proposal to actually
>>> control that globally, but I think it didn't move forward. So, what
>>> are we missing to go back to implementing large folios in tmpfs in
>>> the default case, as any other fs leveraging large folios?
>>
>> IMHO, as I discussed with Kirill, we still need to maintain
>> compatibility with the 'huge=' mount option. This means that if
>> 'huge=never' is set for tmpfs, huge page allocation will still be
>> prohibited (which can address Hugh's request?). However, if 'huge='
>> is not set, we can allocate large folios based on the write size.
>
> I consider allocating large folios in shmem/tmpfs on the write path
> less controversial than allocating them on the page fault path --
> especially as long as we stay within the size to-be-written.
>
> I think in RHEL THP on shmem/tmpfs is disabled by default (e.g.,
> shmem_enabled=never), maybe because of some rather undesired
> side-effects (some perhaps historical?): I recall issues with VMs
> using THP + memory ballooning, as we cannot reclaim pages of folios if
> splitting fails. I assume most of these problematic use cases don't
> use tmpfs as an ordinary file system (write()/read()), but mmap() the
> whole thing.
>
> Sadly, I don't find any information about shmem/tmpfs + THP in the
> RHEL documentation; most documentation is only concerned with anon
> THP. Which makes me conclude that they are not suggested as of now.
>
> I see more issues with allocating them on the page fault path and not
> having a way to disable it -- compared to allocating them on the
> write() path.

I may not understand your concern. IIUC, you can disable allocating
huge pages on the page fault path by using the 'huge=never' mount
option or by setting shmem_enabled=deny. No?
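For reference, the existing knobs being discussed look roughly like
this (paths and values per the kernel's transparent hugepage admin
guide; mounting and writing the sysfs file require root, so this is an
illustrative sketch, not a script to run as-is):

```shell
# Global shmem/tmpfs THP policy; reads back one of:
# always, within_size, advise, never, deny, force
cat /sys/kernel/mm/transparent_hugepage/shmem_enabled

# Disable huge page allocation for a single tmpfs mount via 'huge='
# (requires root; /mnt/test is a hypothetical mount point):
mount -t tmpfs -o huge=never tmpfs /mnt/test

# 'deny' disables huge pages across all shmem/tmpfs mounts,
# overriding per-mount 'huge=' settings (for emergency use):
echo deny > /sys/kernel/mm/transparent_hugepage/shmem_enabled
```

Note 'huge=never' is per-mount, while 'deny'/'force' in shmem_enabled
act globally, which is why both appear in the discussion above.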