From: Barry Song <21cnbao@gmail.com>
Date: Wed, 20 Mar 2024 15:47:50 +1300
Subject: Re: [RFC PATCH v3 5/5] mm: support large folios swapin as a whole
To: "Huang, Ying"
Cc: Ryan Roberts, Matthew Wilcox, akpm@linux-foundation.org, linux-mm@kvack.org,
 chengming.zhou@linux.dev, chrisl@kernel.org, david@redhat.com, hannes@cmpxchg.org,
 kasong@tencent.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 mhocko@suse.com, nphamcs@gmail.com, shy828301@gmail.com, steven.price@arm.com,
 surenb@google.com, wangkefeng.wang@huawei.com, xiang@kernel.org, yosryahmed@google.com,
 yuzhao@google.com, Chuanhua Han, Barry Song
In-Reply-To: <87zfutsl25.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Wed, Mar 20, 2024 at 3:20 PM Huang, Ying wrote:
>
> Ryan Roberts writes:
>
> > On 19/03/2024 09:20, Huang, Ying wrote:
> >> Ryan Roberts writes:
> >>
> >>>>>> I agree phones are not the only platform. But Rome wasn't built in a
> >>>>>> day. I can only get
> >>>>>> started on a hardware which I can easily reach and have enough hardware/test
> >>>>>> resources on it. So we may take the first step which can be applied on
> >>>>>> a real product
> >>>>>> and improve its performance, and step by step, we broaden it and make it
> >>>>>> widely useful to various areas in which I can't reach :-)
> >>>>>
> >>>>> We must guarantee the normal swap path runs correctly and has no
> >>>>> performance regression when developing SWP_SYNCHRONOUS_IO optimization.
> >>>>> So we have to put some effort on the normal path test anyway.
> >>>>>
> >>>>>> so probably we can have a sysfs "enable" entry with default "n" or
> >>>>>> have a maximum
> >>>>>> swap-in order as Ryan's suggestion [1] at the beginning,
> >>>>>>
> >>>>>> "
> >>>>>> So in the common case, swap-in will pull in the same size of folio as was
> >>>>>> swapped-out. Is that definitely the right policy for all folio sizes? Certainly
> >>>>>> it makes sense for "small" large folios (e.g. up to 64K IMHO). But I'm not sure
> >>>>>> it makes sense for 2M THP; As the size increases the chances of actually needing
> >>>>>> all of the folio reduces so chances are we are wasting IO. There are similar
> >>>>>> arguments for CoW, where we currently copy 1 page per fault - it probably makes
> >>>>>> sense to copy the whole folio up to a certain size.
> >>>>>> "
> >>>
> >>> I thought about this a bit more. No clear conclusions, but hoped this might help
> >>> the discussion around policy:
> >>>
> >>> The decision about the size of the THP is made at first fault, with some help
> >>> from user space and in future we might make decisions to split based on
> >>> munmap/mremap/etc hints. In an ideal world, the fact that we have had to swap
> >>> the THP out at some point in its lifetime should not impact on its size. It's
> >>> just being moved around in the system and the reason for our original decision
> >>> should still hold.
> >>>
> >>> So from that PoV, it would be good to swap-in to the same size that was
> >>> swapped-out.
> >>
> >> Sorry, I don't agree with this. It's better to swap-in and swap-out in
> >> smallest size if the page is only accessed seldom to avoid to waste
> >> memory.
> >
> > If we want to optimize only for memory consumption, I'm sure there are many
> > things we would do differently. We need to find a balance between memory and
> > performance. The benefits of folios are well documented and the kernel is
> > heading in the direction of managing memory in variable-sized blocks. So I don't
> > think it's as simple as saying we should always swap-in the smallest possible
> > amount of memory.
>
> It's conditional, that is,
>
> "if the page is only accessed seldom"
>
> Then, the page swapped-in will be swapped-out soon and adjacent pages in
> the same large folio will not be accessed during this period.
>
> So, I suggest to create an algorithm to decide swap-in order based on
> swap-readahead information automatically. It can detect the situation
> above via reduced swap readahead window size. And, if the page is
> accessed for quite long time, and the adjacent pages in the same large
> folio are accessed too, swap-readahead window will increase and large
> swap-in order will be used.

The size originally chosen in do_anonymous_page() should be honored: it
embodies a decision influenced not only by sysfs settings and per-VMA
HUGEPAGE hints but also by architectural characteristics, for example
CONT-PTE.
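
To make that concrete, what I have in mind is roughly the sketch below. It is
illustrative only, not patch code: swapin_order(), anon_folio_allowed_orders()
and arch_preferred_anon_orders() are invented names standing in for the checks
the first-fault path already performs. The point is just that the swap-in
order is derived from those same inputs plus the order the folio had when it
was swapped out, not from the readahead window.

/*
 * Illustrative sketch only. The helper names are hypothetical; they stand
 * for "orders allowed by sysfs + per-VMA hints" and "orders the
 * architecture prefers (e.g. the CONT-PTE size on arm64)".
 */
static int swapin_order(struct vm_fault *vmf, int swapped_out_order)
{
	unsigned long allowed;

	/* hypothetical: mask of orders permitted by sysfs and MADV_HUGEPAGE */
	allowed = anon_folio_allowed_orders(vmf->vma);

	/* hypothetical: arch preference, e.g. 64KB CONT-PTE on arm64 */
	allowed &= arch_preferred_anon_orders();

	/* honor the size the folio had before it was swapped out */
	while (swapped_out_order && !(allowed & BIT(swapped_out_order)))
		swapped_out_order--;

	return swapped_out_order;
}

The exact plumbing is of course up for discussion; the sketch is only meant to
show which inputs I think should drive the order.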

The model you're proposing may offer memory-saving benefits or reduce I/O,
but it entirely disassociates the swap-in size from the size prior to
swap-out. Moreover, there's no guarantee that the large folio generated by
the readahead window is contiguous in the swap space and can be added to
the swap cache, as we are currently dealing with folio->swap instead of
subpage->swap.
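
Concretely, swapping nr pages back in as a single folio only works when all
nr PTEs reference the same swap device and consecutive, naturally aligned
offsets, because the swap cache keys the whole folio on one entry
(folio->swap). A simplified check would look something like the below (a
sketch, not the actual patch code; non-swap-entry cases, locking and error
handling are omitted):

/*
 * Simplified sketch: verify that nr swap PTEs cover one naturally aligned,
 * physically consecutive range of swap slots, so that they can back a
 * single large folio in the swap cache.
 */
static bool swap_range_is_contiguous(pte_t *ptep, int nr)
{
	swp_entry_t first = pte_to_swp_entry(ptep_get(ptep));
	pgoff_t offset = swp_offset(first);
	int i;

	/* folio->swap implies the slots are naturally aligned to nr */
	if (!IS_ALIGNED(offset, nr))
		return false;

	for (i = 1; i < nr; i++) {
		swp_entry_t e = pte_to_swp_entry(ptep_get(ptep + i));

		if (swp_type(e) != swp_type(first) ||
		    swp_offset(e) != offset + i)
			return false;
	}
	return true;
}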

Incidentally, do_anonymous_page() serves as the initial location for
allocating large folios. Given that memory conservation is a significant
consideration in do_swap_page(), wouldn't it be even more crucial in
do_anonymous_page()?

A large folio, by its nature, represents a high-quality resource that has
the potential to leverage hardware characteristics for the benefit of the
entire system. Conversely, I don't believe that a randomly determined size
dictated by the readahead window possesses the same advantageous qualities.
SWP_SYNCHRONOUS_IO devices are not reliant on readahead whatsoever, so
their needs should also be respected.

> > You also said we should swap *out* in smallest size possible. Have I
> > misunderstood you? I thought the case for swapping-out a whole folio without
> > splitting was well established and non-controversial?
>
> That is conditional too.
>
> >>
> >>> But we only kind-of keep that information around, via the swap
> >>> entry contiguity and alignment. With that scheme it is possible that multiple
> >>> virtually adjacent but not physically contiguous folios get swapped-out to
> >>> adjacent swap slot ranges and then they would be swapped-in to a single, larger
> >>> folio. This is not ideal, and I think it would be valuable to try to maintain
> >>> the original folio size information with the swap slot. One way to do this would
> >>> be to store the original order for which the cluster was allocated in the
> >>> cluster. Then we at least know that a given swap slot is either for a folio of
> >>> that order or an order-0 folio (due to cluster exhaustion/scanning). Can we
> >>> steal a bit from swap_map to determine which case it is? Or are there better
> >>> approaches?
> >>
> >> [snip]
>
> --
> Best Regards,
> Huang, Ying

Thanks
Barry