From: Yu Zhao <yuzhao@google.com>
Date: Wed, 25 Sep 2024 15:22:46 -0600
Subject: Re: [PATCH] mm: migrate LRU_REFS_MASK bits in folio_migrate_flags
To: Zhaoyang Huang
Cc: Andrew Morton, "zhaoyang.huang", David Hildenbrand, Matthew Wilcox, linux-mm@kvack.org, linux-kernel@vger.kernel.org, steve.kang@unisoc.com
References: <20240925030225.236143-1-zhaoyang.huang@unisoc.com> <20240925024215.265614f6839e752882b1c28b@linux-foundation.org>
On Wed, Sep 25, 2024 at 5:50 AM Zhaoyang Huang wrote:
>
> On Wed, Sep 25, 2024 at 5:42 PM Andrew Morton wrote:
> >
> > On Wed, 25 Sep 2024 11:02:25 +0800 "zhaoyang.huang" wrote:
> >
> > > From: Zhaoyang Huang
> > >
> > > Bits of LRU_REFS_MASK are not inherited during migration, which leads to
> > > new_folio starting from tier 0. Fix this by migrating these bits as well.
> >
> > I'm having trouble understanding this, sorry. Please more fully
> > describe the runtime effects of this flaw.
>
> Sorry for the confusion. According to my understanding, MGLRU records
> how many times the VFS has accessed a page in a range of bits
> (LRU_REFS_MASK) in folio->flags, and these bits are currently not
> migrated to the new folio. This commit has the new folio inherit
> these bits from the old folio. Is that right and worth doing?

Correct. And this was not done because:

1. While LRU_REFS_MASK is useful, it's not important, and
2. more important information, e.g., generations, is lost during
   migration anyway.

So I'm not sure how much this patch can help. But if you think it
does, why not.

> > > --- a/include/linux/mm_inline.h
> > > +++ b/include/linux/mm_inline.h
> > > @@ -291,6 +291,12 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
> > >  	return true;
> > >  }
> > >
> > > +static inline void folio_migrate_refs(struct folio *new_folio, struct folio *folio)

Nitpick: folio_migrate_refs(struct folio *new, struct folio *old)

> > > +{
> > > +	unsigned long refs = READ_ONCE(folio->flags) & LRU_REFS_MASK;
> > > +
> > > +	set_mask_bits(&new_folio->flags, LRU_REFS_MASK, refs);
> > > +}
> > >  #else /* !CONFIG_LRU_GEN */
> > >
> > >  static inline bool lru_gen_enabled(void)
> > > @@ -313,6 +319,8 @@ static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
> > >  	return false;
> > >  }
> > >
> > > +static inline void folio_migrate_refs(struct folio *new_folio, struct folio *folio)

Ditto.

> > > +{}

A line break between "{" and "}".

> > >  #endif /* CONFIG_LRU_GEN */
> > >
> > >  static __always_inline
> > > diff --git a/mm/migrate.c b/mm/migrate.c
> > > index 923ea80ba744..60c97e235ae7 100644
> > > --- a/mm/migrate.c
> > > +++ b/mm/migrate.c
> > > @@ -618,6 +618,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
> > >  	if (folio_test_idle(folio))
> > >  		folio_set_idle(newfolio);
> > >
> > > +	folio_migrate_refs(newfolio, folio);
> > >  	/*
> > >  	 * Copy NUMA information to the new page, to prevent over-eager
> > >  	 * future migrations of this same page.
> > > --
> > > 2.25.1
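
For completeness, a sketch of how the helper could look with the
nitpicks above applied (parameters renamed to new/old, and the empty
!CONFIG_LRU_GEN stub with a line break between "{" and "}"). This is
only an illustration of the suggested cleanup, not the final patch:

	/* Sketch: include/linux/mm_inline.h, CONFIG_LRU_GEN=y variant. */
	static inline void folio_migrate_refs(struct folio *new, struct folio *old)
	{
		/* Copy the MGLRU access-tier bits (LRU_REFS_MASK) from the old folio. */
		unsigned long refs = READ_ONCE(old->flags) & LRU_REFS_MASK;

		set_mask_bits(&new->flags, LRU_REFS_MASK, refs);
	}

	/* Sketch: !CONFIG_LRU_GEN stub. */
	static inline void folio_migrate_refs(struct folio *new, struct folio *old)
	{
	}

The call site in folio_migrate_flags() would stay as in the diff
above, passing the new folio first and the old folio second.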