From: Suren Baghdasaryan <surenb@google.com>
Date: Thu, 28 Nov 2024 11:53:28 -0800
Subject: Re: [PATCH] mm/codetag: swap tags when migrate pages
To: David Wang <00107082@163.com>
Cc: kent.overstreet@linux.dev, yuzhao@google.com, akpm@linux-foundation.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20241128102619.707071-1-00107082@163.com>
On Thu, Nov 28, 2024 at 2:26 AM David Wang <00107082@163.com> wrote:
>
> The initial solution for codetag adjustment during page migration
> uses three kinds of low level plumbings, those steps can be replaced
> by swapping tags, which only needs one kind of low level plumbing,
> and code is more clear.

This description does not explain the real issue. I would suggest
something like:

Current solution to adjust codetag references during page migration is
done in 3 steps:

1. sets the codetag reference of the old page as empty (not pointing
   to any codetag);
2. subtracts counters of the new page to compensate for its own
   allocation;
3. sets codetag reference of the new page to point to the codetag of
   the old page.

This does not work if CONFIG_MEM_ALLOC_PROFILING_DEBUG=n because
set_codetag_empty() becomes a NOOP. Instead, let's simply swap codetag
references so that the new page is referencing the old codetag and the
old page is referencing the new codetag. This way accounting stays
valid and the logic makes more sense.

Fixes: e0a955bf7f61 ("mm/codetag: add pgalloc_tag_copy()")
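To make the failure mode concrete: with CONFIG_MEM_ALLOC_PROFILING_DEBUG=n
the empty-tag helper compiles to a no-op. Roughly like this (quoting the
alloc_tag header from memory, so take it as a sketch, not verbatim):

#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG

#define CODETAG_EMPTY   ((void *)1)

static inline void set_codetag_empty(union codetag_ref *ref)
{
        /* mark the ref as deliberately empty so debug checks skip it */
        if (ref)
                ref->ct = CODETAG_EMPTY;
}

#else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */

/* without the debug config this does nothing at all */
static inline void set_codetag_empty(union codetag_ref *ref) {}

#endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */

So with the debug config off, step 1 of the current scheme silently does
nothing and the old page keeps pointing at the original tag.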
>
> Signed-off-by: David Wang <00107082@163.com>
> Link: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/

The Link: above seems unusual. Maybe use Closes: instead, like this:

Closes: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/

> ---
>  include/linux/pgalloc_tag.h |  4 ++--
>  lib/alloc_tag.c             | 35 +++++++++++++++++++----------------
>  mm/migrate.c                |  2 +-
>  3 files changed, 22 insertions(+), 19 deletions(-)
>
> diff --git a/include/linux/pgalloc_tag.h b/include/linux/pgalloc_tag.h
> index 0e43ab653ab6..3469c4b20105 100644
> --- a/include/linux/pgalloc_tag.h
> +++ b/include/linux/pgalloc_tag.h
> @@ -231,7 +231,7 @@ static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr)
>  }
>
>  void pgalloc_tag_split(struct folio *folio, int old_order, int new_order);
> -void pgalloc_tag_copy(struct folio *new, struct folio *old);
> +void pgalloc_tag_swap(struct folio *new, struct folio *old);
>
>  void __init alloc_tag_sec_init(void);
>
> @@ -245,7 +245,7 @@ static inline struct alloc_tag *pgalloc_tag_get(struct page *page) { return NULL
>  static inline void pgalloc_tag_sub_pages(struct alloc_tag *tag, unsigned int nr) {}
>  static inline void alloc_tag_sec_init(void) {}
>  static inline void pgalloc_tag_split(struct folio *folio, int old_order, int new_order) {}
> -static inline void pgalloc_tag_copy(struct folio *new, struct folio *old) {}
> +static inline void pgalloc_tag_swap(struct folio *new, struct folio *old) {}
>
>  #endif /* CONFIG_MEM_ALLOC_PROFILING */
>
> diff --git a/lib/alloc_tag.c b/lib/alloc_tag.c
> index 2414a7ee7ec7..b45efde50c40 100644
> --- a/lib/alloc_tag.c
> +++ b/lib/alloc_tag.c
> @@ -189,26 +189,29 @@ void pgalloc_tag_split(struct folio *folio, int old_order, int new_order)
>         }
>  }
>
> -void pgalloc_tag_copy(struct folio *new, struct folio *old)
> +void pgalloc_tag_swap(struct folio *new, struct folio *old)
>  {
> -       union pgtag_ref_handle handle;
> -       union codetag_ref ref;
> -       struct alloc_tag *tag;
> +       union pgtag_ref_handle handles[2];
> +       union codetag_ref refs[2];
> +       struct alloc_tag *tags[2];
> +       struct folio *folios[2] = {new, old};
> +       int i;
>
> -       tag = pgalloc_tag_get(&old->page);
> -       if (!tag)
> -               return;
> +       for (i = 0; i < 2; i++) {
> +               tags[i] = pgalloc_tag_get(&folios[i]->page);
> +               if (!tags[i])
> +                       return;
> +               if (!get_page_tag_ref(&folios[i]->page, &refs[i], &handles[i]))
> +                       return;

If any of the above getters fails on the second iteration, you will miss
calling put_page_tag_ref(handles[0]) and will leak the page_tag_ref you
obtained on the first iteration. It might be cleaner to drop the use of
arrays and use new/old, for example along the lines of the sketch at the
end of this mail.

> +       }
>
> -       if (!get_page_tag_ref(&new->page, &ref, &handle))
> -               return;
> +       swap(tags[0], tags[1]);
>
> -       /* Clear the old ref to the original allocation tag. */
> -       clear_page_tag_ref(&old->page);
> -       /* Decrement the counters of the tag on get_new_folio. */
> -       alloc_tag_sub(&ref, folio_size(new));
> -       __alloc_tag_ref_set(&ref, tag);
> -       update_page_tag_ref(handle, &ref);
> -       put_page_tag_ref(handle);
> +       for (i = 0; i < 2; i++) {
> +               __alloc_tag_ref_set(&refs[i], tags[i]);
> +               update_page_tag_ref(handles[i], &refs[i]);
> +               put_page_tag_ref(handles[i]);
> +       }
>  }
>
>  static void shutdown_mem_profiling(bool remove_file)
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 2ce6b4b814df..cc68583c86f9 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -745,7 +745,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
>         folio_set_readahead(newfolio);
>
>         folio_copy_owner(newfolio, folio);
> -       pgalloc_tag_copy(newfolio, folio);
> +       pgalloc_tag_swap(newfolio, folio);
>
>         mem_cgroup_migrate(folio, newfolio);
>  }
> --
> 2.39.2
>
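The new/old version could look something like the below (untested, just
to sketch the error-path cleanup; the swap semantics stay exactly as in
your patch, the helpers are the same ones it already uses):

void pgalloc_tag_swap(struct folio *new, struct folio *old)
{
        union pgtag_ref_handle handle_old, handle_new;
        union codetag_ref ref_old, ref_new;
        struct alloc_tag *tag_old, *tag_new;

        tag_old = pgalloc_tag_get(&old->page);
        if (!tag_old)
                return;
        tag_new = pgalloc_tag_get(&new->page);
        if (!tag_new)
                return;

        if (!get_page_tag_ref(&old->page, &ref_old, &handle_old))
                return;
        if (!get_page_tag_ref(&new->page, &ref_new, &handle_new)) {
                /* don't leak the ref taken on the old page above */
                put_page_tag_ref(handle_old);
                return;
        }

        /* swap tags: old page references the new tag and vice versa */
        __alloc_tag_ref_set(&ref_old, tag_new);
        update_page_tag_ref(handle_old, &ref_old);
        __alloc_tag_ref_set(&ref_new, tag_old);
        update_page_tag_ref(handle_new, &ref_new);

        put_page_tag_ref(handle_old);
        put_page_tag_ref(handle_new);
}

That way each successfully taken page_tag_ref is visibly paired with a
put on every path.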