Date: Wed, 27 Mar 2024 11:59:42 -0700
From: Vishal Moola <vishal.moola@gmail.com>
To: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Andrew Morton, willy@infradead.org, Miaohe Lin, Naoya Horiguchi,
	David Hildenbrand, Oscar Salvador, Zi Yan, Hugh Dickins,
	Jonathan Corbet, linux-mm@kvack.org, linux-doc@vger.kernel.org,
	Baolin Wang
Subject: Re: [PATCH 1/6] mm: migrate: add isolate_movable_folio()
References: <20240327141034.3712697-1-wangkefeng.wang@huawei.com>
	<20240327141034.3712697-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20240327141034.3712697-2-wangkefeng.wang@huawei.com>

On Wed, Mar 27, 2024 at 10:10:29PM +0800, Kefeng Wang wrote:
> Like isolate_lru_page(), make isolate_movable_page() as a wrapper
> around isolate_lru_folio(), since isolate_movable_page() always
> fails on a tail page, add a warn for tail page and return immediately.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
>  include/linux/migrate.h |  3 +++
>  mm/migrate.c            | 41 +++++++++++++++++++++++------------------
>  2 files changed, 26 insertions(+), 18 deletions(-)
> 
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index f9d92482d117..a6c38ee7246a 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -70,6 +70,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>  		  unsigned int *ret_succeeded);
>  struct folio *alloc_migration_target(struct folio *src, unsigned long private);
>  bool isolate_movable_page(struct page *page, isolate_mode_t mode);
> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode);
>  
>  int migrate_huge_page_move_mapping(struct address_space *mapping,
>  		struct folio *dst, struct folio *src);
> @@ -91,6 +92,8 @@ static inline struct folio *alloc_migration_target(struct folio *src,
>  	{ return NULL; }
>  static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	{ return false; }
> +static inline bool isolate_movable_folio(struct page *page, isolate_mode_t mode)
> +	{ return false; }

Wrong argument here.
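
Presumably this stub is meant to take a folio so it matches the
declaration above, i.e. something like (untested):

static inline bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
	{ return false; }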

>  
>  static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>  		struct folio *dst, struct folio *src)
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 2228ca681afb..b2195b6ff32c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -57,31 +57,29 @@
>  
>  #include "internal.h"
>  
> -bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> +bool isolate_movable_folio(struct folio *folio, isolate_mode_t mode)
>  {
> -	struct folio *folio = folio_get_nontail_page(page);
>  	const struct movable_operations *mops;
>  
>  	/*
> -	 * Avoid burning cycles with pages that are yet under __free_pages(),
> +	 * Avoid burning cycles with folios that are yet under __free_pages(),
>  	 * or just got freed under us.
>  	 *
> -	 * In case we 'win' a race for a movable page being freed under us and
> +	 * In case we 'win' a race for a movable folio being freed under us and
>  	 * raise its refcount preventing __free_pages() from doing its job
> -	 * the put_page() at the end of this block will take care of
> -	 * release this page, thus avoiding a nasty leakage.
> +	 * the folio_put() at the end of this block will take care of
> +	 * release this folio, thus avoiding a nasty leakage.
>  	 */
> -	if (!folio)
> -		goto out;
> +	folio_get(folio);
>  
>  	if (unlikely(folio_test_slab(folio)))
>  		goto out_putfolio;
>  	/* Pairs with smp_wmb() in slab freeing, e.g. SLUB's __free_slab() */
>  	smp_rmb();
>  	/*
> -	 * Check movable flag before taking the page lock because
> -	 * we use non-atomic bitops on newly allocated page flags so
> -	 * unconditionally grabbing the lock ruins page's owner side.
> +	 * Check movable flag before taking the folio lock because
> +	 * we use non-atomic bitops on newly allocated folio flags so
> +	 * unconditionally grabbing the lock ruins folio's owner side.
>  	 */
>  	if (unlikely(!__folio_test_movable(folio)))
>  		goto out_putfolio;
> @@ -91,13 +89,13 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  		goto out_putfolio;
>  
>  	/*
> -	 * As movable pages are not isolated from LRU lists, concurrent
> -	 * compaction threads can race against page migration functions
> -	 * as well as race against the releasing a page.
> +	 * As movable folios are not isolated from LRU lists, concurrent
> +	 * compaction threads can race against folio migration functions
> +	 * as well as race against the releasing a folio.
>  	 *
> -	 * In order to avoid having an already isolated movable page
> +	 * In order to avoid having an already isolated movable folio
>  	 * being (wrongly) re-isolated while it is under migration,
> -	 * or to avoid attempting to isolate pages being released,
> +	 * or to avoid attempting to isolate folios being released,
>  	 * lets be sure we have the page lock
>  	 * before proceeding with the movable page isolation steps.
>  	 */
> @@ -113,7 +111,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	if (!mops->isolate_page(&folio->page, mode))
>  		goto out_no_isolated;
>  
> -	/* Driver shouldn't use PG_isolated bit of page->flags */
> +	/* Driver shouldn't use PG_isolated bit of folio->flags */
>  	WARN_ON_ONCE(folio_test_isolated(folio));
>  	folio_set_isolated(folio);
>  	folio_unlock(folio);
> @@ -124,10 +122,17 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>  	folio_unlock(folio);
>  out_putfolio:
>  	folio_put(folio);
> -out:
>  	return false;
>  }
>  
> +bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> +{
> +	if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
> +		return false;

This warning doesn't make sense. As of now, we still want
isolate_movable_page() to be able to take in a tail page; we just don't
want to operate on it.

> +	return isolate_movable_folio((struct folio *)page, mode);
> +}
> +
>  static void putback_movable_folio(struct folio *folio)
>  {
>  	const struct movable_operations *mops = folio_movable_ops(folio);
> -- 
> 2.27.0
> 
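
Regarding the tail page warning above: if the intent is just to keep the
current behaviour, the wrapper could reject tail pages quietly instead of
warning, e.g. (rough, untested sketch):

bool isolate_movable_page(struct page *page, isolate_mode_t mode)
{
	/* Tail pages may legitimately be passed in; just refuse to isolate them. */
	if (PageTail(page))
		return false;

	return isolate_movable_folio((struct folio *)page, mode);
}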