Date: Wed, 30 Apr 2025 10:37:14 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Muchun Song <songmuchun@bytedance.com>
Cc: mhocko@kernel.org, roman.gushchin@linux.dev, shakeel.butt@linux.dev,
	muchun.song@linux.dev, akpm@linux-foundation.org, david@fromorbit.com,
	zhengqi.arch@bytedance.com, yosry.ahmed@linux.dev, nphamcs@gmail.com,
	chengming.zhou@linux.dev, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	hamzamahfooz@linux.microsoft.com, apais@linux.microsoft.com,
	Hugh Dickins
Subject: Re: [PATCH RFC 07/28] mm: thp: use folio_batch to handle THP splitting in deferred_split_scan()
Message-ID: <20250430143714.GA2020@cmpxchg.org>
References: <20250415024532.26632-1-songmuchun@bytedance.com> <20250415024532.26632-8-songmuchun@bytedance.com>
In-Reply-To: <20250415024532.26632-8-songmuchun@bytedance.com>

On Tue, Apr 15, 2025 at 10:45:11AM +0800, Muchun Song wrote:
> The maintenance of the folio->_deferred_list is intricate because it's
> reused in a local list.
> 
> Here are some peculiarities:
> 
> 1) When a folio is removed from its split queue and added to a local
>    on-stack list in deferred_split_scan(), the ->split_queue_len isn't
>    updated, leading to an inconsistency between it and the actual
>    number of folios in the split queue.
> 
> 2) When the folio is split via split_folio() later, it's removed from
>    the local list while holding the split queue lock. At this time,
>    this lock protects the local list, not the split queue.
> 
> 3) To handle the race condition with a third-party freeing or migrating
>    the preceding folio, we must ensure there's always one safe (with
>    raised refcount) folio before by delaying its folio_put(). More
>    details can be found in commit e66f3185fa04. It's rather tricky.
> 
> We can use the folio_batch infrastructure to handle this clearly. In this
> case, ->split_queue_len will be consistent with the real number of folios
> in the split queue. If list_empty(&folio->_deferred_list) returns false,
> it's clear the folio must be in its split queue (not in a local list
> anymore).
> 
> In the future, we aim to reparent LRU folios during memcg offline to
> eliminate dying memory cgroups. This patch prepares for using
> folio_split_queue_lock_irqsave() as folio memcg may change then.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

This is a very nice simplification. And getting rid of the stack list
and its subtle implication on all the various current and future
list_empty(&folio->_deferred_list) checks should be much more robust.

However, I think there is one snag related to this:

> ---
>  mm/huge_memory.c | 69 +++++++++++++++++++++---------------------
>  1 file changed, 30 insertions(+), 39 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 70820fa75c1f..d2bc943a40e8 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -4220,40 +4220,47 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>  	struct pglist_data *pgdata = NODE_DATA(sc->nid);
>  	struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
>  	unsigned long flags;
> -	LIST_HEAD(list);
> -	struct folio *folio, *next, *prev = NULL;
> -	int split = 0, removed = 0;
> +	struct folio *folio, *next;
> +	int split = 0, i;
> +	struct folio_batch fbatch;
> +	bool done;
>  
>  #ifdef CONFIG_MEMCG
>  	if (sc->memcg)
>  		ds_queue = &sc->memcg->deferred_split_queue;
>  #endif
> -
> +	folio_batch_init(&fbatch);
> +retry:
> +	done = true;
>  	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
>  	/* Take pin on all head pages to avoid freeing them under us */
>  	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
> 				 _deferred_list) {
>  		if (folio_try_get(folio)) {
> -			list_move(&folio->_deferred_list, &list);
> -		} else {
> +			folio_batch_add(&fbatch, folio);
> +		} else if (folio_test_partially_mapped(folio)) {
>  			/* We lost race with folio_put() */
> -			if (folio_test_partially_mapped(folio)) {
> -				folio_clear_partially_mapped(folio);
> -				mod_mthp_stat(folio_order(folio),
> -					      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
> -			}
> -
> -			list_del_init(&folio->_deferred_list);
> -			ds_queue->split_queue_len--;
> +			folio_clear_partially_mapped(folio);
> +			mod_mthp_stat(folio_order(folio),
> +				      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
>  		}
> +		list_del_init(&folio->_deferred_list);
> +		ds_queue->split_queue_len--;
>  		if (!--sc->nr_to_scan)
>  			break;
> +		if (folio_batch_space(&fbatch) == 0) {
> +			done = false;
> +			break;
> +		}
>  	}
>  	split_queue_unlock_irqrestore(ds_queue, flags);
>  
> -	list_for_each_entry_safe(folio, next, &list, _deferred_list) {
> +	for (i = 0; i < folio_batch_count(&fbatch); i++) {
>  		bool did_split = false;
>  		bool underused = false;
> +		struct deferred_split *fqueue;
>  
> +		folio = fbatch.folios[i];
>  		if (!folio_test_partially_mapped(folio)) {
>  			underused = thp_underused(folio);
>  			if (!underused)
> @@ -4269,39 +4276,23 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>  		}
>  		folio_unlock(folio);
>  next:
> +		if (did_split || !folio_test_partially_mapped(folio))
> +			continue;

There IS a list_empty() check in the splitting code that we actually
relied on, for cleaning up the partially_mapped state and counter:

	!list_empty(&folio->_deferred_list)) {
		ds_queue->split_queue_len--;
		if (folio_test_partially_mapped(folio)) {
			folio_clear_partially_mapped(folio);
			mod_mthp_stat(folio_order(folio),
				      MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
		}
		/*
		 * Reinitialize page_deferred_list after removing the
		 * page from the split_queue, otherwise a subsequent
		 * split will see list corruption when checking the
		 * page_deferred_list.
		 */
		list_del_init(&folio->_deferred_list);

With the folios isolated up front, it looks like you need to handle
this from the shrinker.

Otherwise this looks correct to me. But this code is subtle, I would
feel much better if Hugh (CC-ed) could take a look as well.

Thanks!

>  		/*
> -		 * split_folio() removes folio from list on success.
>  		 * Only add back to the queue if folio is partially mapped.
>  		 * If thp_underused returns false, or if split_folio fails
>  		 * in the case it was underused, then consider it used and
>  		 * don't add it back to split_queue.
>  		 */
> -		if (did_split) {
> -			; /* folio already removed from list */
> -		} else if (!folio_test_partially_mapped(folio)) {
> -			list_del_init(&folio->_deferred_list);
> -			removed++;
> -		} else {
> -			/*
> -			 * That unlocked list_del_init() above would be unsafe,
> -			 * unless its folio is separated from any earlier folios
> -			 * left on the list (which may be concurrently unqueued)
> -			 * by one safe folio with refcount still raised.
> -			 */
> -			swap(folio, prev);
> -		}
> -		if (folio)
> -			folio_put(folio);
> +		fqueue = folio_split_queue_lock_irqsave(folio, &flags);
> +		list_add_tail(&folio->_deferred_list, &fqueue->split_queue);
> +		fqueue->split_queue_len++;
> +		split_queue_unlock_irqrestore(fqueue, flags);
>  	}
> +	folios_put(&fbatch);
>  
> -	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> -	list_splice_tail(&list, &ds_queue->split_queue);
> -	ds_queue->split_queue_len -= removed;
> -	split_queue_unlock_irqrestore(ds_queue, flags);
> -
> -	if (prev)
> -		folio_put(prev);
> -
> +	if (!done)
> +		goto retry;
>  	/*
>  	 * Stop shrinker if we didn't split any page, but the queue is empty.
>  	 * This can happen if pages were freed under us.
> -- 
> 2.20.1