From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <090c9d68-9296-4338-9afa-5369bb1db66c@arm.com>
Date: Sat, 9 Mar 2024 07:59:52 +0000
From: Ryan Roberts <ryan.roberts@arm.com>
To: Matthew Wilcox
Cc: Zi Yan, Andrew Morton, linux-mm@kvack.org, Yang Shi, Huang Ying
Subject: Re: [PATCH v3 10/18] mm: Allow non-hugetlb large folios to be batch
 processed
References: <367a14f7-340e-4b29-90ae-bc3fcefdd5f4@arm.com>
 <85cc26ed-6386-4d6b-b680-1e5fba07843f@arm.com>
 <36bdda72-2731-440e-ad15-39b845401f50@arm.com>
 <03CE3A00-917C-48CC-8E1C-6A98713C817C@nvidia.com>
Content-Language: en-GB
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8

On 09/03/2024 06:09, Matthew Wilcox wrote:
> On Fri, Mar 08, 2024 at 11:44:35AM +0000, Ryan Roberts wrote:
>>> The thought occurs that we don't need to take the folios off the list.
>>> I don't know that will fix anything, but this will fix your "running out
>>> of memory" problem -- I forgot to drop the reference if folio_trylock()
>>> failed. Of course, I can't call folio_put() inside the lock, so may
>>> as well move the trylock back to the second loop.
> 
> I think this was a bad thought ...

The not-taking-folios-off-the-list thought? Yes, agreed.

> 
>> Dumping all the CPU back traces with gdb, all the cores (except one) are
>> contending on the deferred split lock.
> 
> I'm pretty sure that we can call the shrinker on multiple CPUs at the
> same time (can you confirm from the backtrace?)

Yes, the vast majority of the CPUs were in deferred_split_scan() waiting
for the split_queue_lock.

> 
> 	struct pglist_data *pgdata = NODE_DATA(sc->nid);
> 	struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
> 
> so if two CPUs try to shrink the same node, they're going to try to
> process the same set of folios. Which means the split will keep failing
> because each of them will have a refcount on the folio, and ... yeah.

Ahh, ouch. So this probably explains why things started going slow for me
again last night.
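
As an aside, here is a userspace toy model of that failure mode (NOT
kernel code -- all names are invented, and the "split succeeds only if
the caller holds the sole extra reference" rule is a simplification of
the refcount expectation that split_folio() ultimately enforces):

/* toy_split_race.c - build with: cc -pthread toy_split_race.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int refcount = 1;		/* the ref held by the deferred list */
static atomic_int failed;
static pthread_barrier_t barrier;

/* Simplified stand-in for the split: it can only succeed when our
 * reference is the sole extra one (queue ref + our ref == 2). */
static int try_split(void)
{
	return atomic_load(&refcount) == 2;
}

static void *scanner(void *arg)
{
	(void)arg;
	atomic_fetch_add(&refcount, 1);		/* "folio_try_get()" */
	pthread_barrier_wait(&barrier);		/* both scanners now hold a ref */
	if (!try_split())			/* sees refcount == 3: fails */
		atomic_fetch_add(&failed, 1);
	pthread_barrier_wait(&barrier);		/* hold the ref until both tried */
	atomic_fetch_sub(&refcount, 1);		/* "folio_put()" */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_barrier_init(&barrier, NULL, 2);
	pthread_create(&a, NULL, scanner, NULL);
	pthread_create(&b, NULL, scanner, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("failed splits: %d of 2\n", atomic_load(&failed)); /* 2 of 2 */
	return 0;
}

Each scanner pins the "folio" while the other is trying to split it, so
both attempts fail; and since failed splits go back on the queue, the
CPUs just keep re-scanning the same folios under the same lock.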
> 
> If so, we need to take the folios off the list (or otherwise mark them)
> so that they can't be processed by more than one CPU at a time. And
> that leads me to this patch (yes, folio_prep_large_rmappable() is
> now vestigial, but removing it increases the churn a bit much for this
> stage of debugging).

Looks sensible on first review. I'll do some testing now to see if I can
re-trigger the non-NULL mapping issue. Will get back to you in the next
couple of hours.

> 
> This time I've boot-tested it. I'm running my usual test-suite against
> it now with little expectation that it will trigger. If I have time
> I'll try to recreate your setup.
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fd745bcc97ff..2ca033a6c3d8 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -792,8 +792,6 @@ void folio_prep_large_rmappable(struct folio *folio)
>  {
>  	if (!folio || !folio_test_large(folio))
>  		return;
> -	if (folio_order(folio) > 1)
> -		INIT_LIST_HEAD(&folio->_deferred_list);
>  	folio_set_large_rmappable(folio);
>  }
>  
> @@ -3312,7 +3310,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>  	struct pglist_data *pgdata = NODE_DATA(sc->nid);
>  	struct deferred_split *ds_queue = &pgdata->deferred_split_queue;
>  	unsigned long flags;
> -	LIST_HEAD(list);
> +	struct folio_batch batch;
>  	struct folio *folio, *next;
>  	int split = 0;
>  
> @@ -3321,36 +3319,40 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
>  		ds_queue = &sc->memcg->deferred_split_queue;
>  #endif
>  
> +	folio_batch_init(&batch);
>  	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> -	/* Take pin on all head pages to avoid freeing them under us */
> +	/* Take ref on all folios to avoid freeing them under us */
>  	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
>  			_deferred_list) {
> -		if (folio_try_get(folio)) {
> -			list_move(&folio->_deferred_list, &list);
> -		} else {
> +		list_del_init(&folio->_deferred_list);
> +		sc->nr_to_scan--;
> +		if (!folio_try_get(folio)) {
>  			/* We lost race with folio_put() */
> -			list_del_init(&folio->_deferred_list);
>  			ds_queue->split_queue_len--;
> +		} else if (folio_batch_add(&batch, folio) == 0) {
> +			break;
>  		}
> -		if (!--sc->nr_to_scan)
> +		if (!sc->nr_to_scan)
>  			break;
>  	}
>  	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
>  
> -	list_for_each_entry_safe(folio, next, &list, _deferred_list) {
> +	while ((folio = folio_batch_next(&batch)) != NULL) {
>  		if (!folio_trylock(folio))
> -			goto next;
> -		/* split_huge_page() removes page from list on success */
> +			continue;
>  		if (!split_folio(folio))
>  			split++;
>  		folio_unlock(folio);
> -next:
> -		folio_put(folio);
>  	}
>  
>  	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> -	list_splice_tail(&list, &ds_queue->split_queue);
> +	while ((folio = folio_batch_next(&batch)) != NULL) {
> +		if (!folio_test_large(folio))
> +			continue;
> +		list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
> +	}
>  	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> +	folios_put(&batch);
>  
>  	/*
>  	 * Stop shrinker if we didn't split any page, but the queue is empty.
> diff --git a/mm/internal.h b/mm/internal.h
> index 1dfdc3bde1b0..14c21d06f233 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -432,6 +432,8 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
>  	atomic_set(&folio->_entire_mapcount, -1);
>  	atomic_set(&folio->_nr_pages_mapped, 0);
>  	atomic_set(&folio->_pincount, 0);
> +	if (order > 1)
> +		INIT_LIST_HEAD(&folio->_deferred_list);
>  }
>  
>  static inline void prep_compound_tail(struct page *head, int tail_idx)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 025ad1a7df7b..fc9c7ca24c4c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1007,9 +1007,12 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
>  		break;
>  	case 2:
>  		/*
> -		 * the second tail page: ->mapping is
> -		 * deferred_list.next -- ignore value.
> +		 * the second tail page: ->mapping is deferred_list.next
>  		 */
> +		if (unlikely(!list_empty(&folio->_deferred_list))) {
> +			bad_page(page, "still on deferred list");
> +			goto out;
> +		}
>  		break;
>  	default:
>  		if (page->mapping != TAIL_MAPPING) {
> 
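
FWIW, to convince myself of the scheme, here's a similar userspace sketch
(again NOT kernel code; names are invented, the batch bound is arbitrary
where the kernel uses a fixed-size folio_batch, and survivors are parked
on a separate list purely to keep the toy single-pass) of the "unlink
under the lock, work on a private batch, requeue survivors" pattern the
patch moves to. Once an entry is unlinked, only the worker that unlinked
it can be processing it, which is exactly the property the old
shared-list scan lacked:

/* toy_claim_batch.c - build with: cc -pthread toy_claim_batch.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NITEMS    100
#define BATCH_MAX 8	/* arbitrary; folio_batch is similarly bounded */

struct item {
	struct item *next;
	int id;
};

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static struct item *queue_head;		/* the shared "deferred split" queue */
static struct item *requeue_head;	/* survivors, kept for a future scan */

/* Unlink up to BATCH_MAX items while holding the lock: once unlinked,
 * an item is owned by exactly one worker, so no other thread can be
 * working on it concurrently. */
static int grab_batch(struct item **batch)
{
	int n = 0;

	pthread_mutex_lock(&queue_lock);
	while (queue_head && n < BATCH_MAX) {
		batch[n++] = queue_head;
		queue_head = queue_head->next;
	}
	pthread_mutex_unlock(&queue_lock);
	return n;
}

static void *worker(void *arg)
{
	struct item *batch[BATCH_MAX];
	int n, i;

	(void)arg;
	while ((n = grab_batch(batch)) != 0) {
		for (i = 0; i < n; i++) {
			if (batch[i]->id % 2 == 0) {
				free(batch[i]);	/* the "split" succeeded */
				continue;
			}
			/* "split" failed: requeue under the same lock */
			pthread_mutex_lock(&queue_lock);
			batch[i]->next = requeue_head;
			requeue_head = batch[i];
			pthread_mutex_unlock(&queue_lock);
		}
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	struct item *it;
	int i, left = 0;

	for (i = 0; i < NITEMS; i++) {
		it = malloc(sizeof(*it));
		it->id = i;
		it->next = queue_head;
		queue_head = it;
	}
	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	for (it = requeue_head; it; it = it->next)
		left++;
	printf("requeued %d of %d items\n", left, NITEMS);	/* 50 of 100 */
	return 0;
}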