From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 02/22] mm/swap: Add folio_batch_move_lru()
Date: Fri, 17 Jun 2022 18:50:00 +0100
Message-Id: <20220617175020.717127-3-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20220617175020.717127-1-willy@infradead.org>
References: <20220617175020.717127-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Start converting the LRU from pagevecs to folio_batches.

Combine the functionality of pagevec_add_and_need_flush() with
pagevec_lru_move_fn() in the new folio_batch_add_and_move().  Convert
the lru_rotate pagevec to a folio_batch.

Adds 223 bytes total to kernel text, because we're duplicating
infrastructure.  This will be more than made up for in future patches.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/swap.c | 78 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 56 insertions(+), 22 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 275a4ea1bc66..a983a1b93e73 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -46,10 +46,10 @@
 /* How many pages do we try to swap or page in/out together? */
 int page_cluster;
 
-/* Protecting only lru_rotate.pvec which requires disabling interrupts */
+/* Protecting only lru_rotate.fbatch which requires disabling interrupts */
 struct lru_rotate {
 	local_lock_t lock;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 };
 static DEFINE_PER_CPU(struct lru_rotate, lru_rotate) = {
 	.lock = INIT_LOCAL_LOCK(lock),
@@ -214,18 +214,6 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 	pagevec_reinit(pvec);
 }
 
-static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
-{
-	struct folio *folio = page_folio(page);
-
-	if (!folio_test_unevictable(folio)) {
-		lruvec_del_folio(lruvec, folio);
-		folio_clear_active(folio);
-		lruvec_add_folio_tail(lruvec, folio);
-		__count_vm_events(PGROTATED, folio_nr_pages(folio));
-	}
-}
-
 /* return true if pagevec needs to drain */
 static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page)
 {
@@ -238,6 +226,52 @@ static bool pagevec_add_and_need_flush(struct pagevec *pvec, struct page *page)
 	return ret;
 }
 
+typedef void (*move_fn_t)(struct lruvec *lruvec, struct folio *folio);
+
+static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
+{
+	int i;
+	struct lruvec *lruvec = NULL;
+	unsigned long flags = 0;
+
+	for (i = 0; i < folio_batch_count(fbatch); i++) {
+		struct folio *folio = fbatch->folios[i];
+
+		/* block memcg migration while the folio moves between lru */
+		if (!folio_test_clear_lru(folio))
+			continue;
+
+		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
+		move_fn(lruvec, folio);
+
+		folio_set_lru(folio);
+	}
+
+	if (lruvec)
+		unlock_page_lruvec_irqrestore(lruvec, flags);
+	folios_put(fbatch->folios, folio_batch_count(fbatch));
+	folio_batch_init(fbatch);
+}
+
+static void folio_batch_add_and_move(struct folio_batch *fbatch,
+		struct folio *folio, move_fn_t move_fn)
+{
+	if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) &&
+	    !lru_cache_disabled())
+		return;
+	folio_batch_move_lru(fbatch, move_fn);
+}
+
+static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
+{
+	if (!folio_test_unevictable(folio)) {
+		lruvec_del_folio(lruvec, folio);
+		folio_clear_active(folio);
+		lruvec_add_folio_tail(lruvec, folio);
+		__count_vm_events(PGROTATED, folio_nr_pages(folio));
+	}
+}
+
 /*
  * Writeback is about to end against a folio which has been marked for
  * immediate reclaim.  If it still appears to be reclaimable, move it
  */
@@ -249,14 +283,13 @@ void folio_rotate_reclaimable(struct folio *folio)
 {
 	if (!folio_test_locked(folio) && !folio_test_dirty(folio) &&
 	    !folio_test_unevictable(folio) && folio_test_lru(folio)) {
-		struct pagevec *pvec;
+		struct folio_batch *fbatch;
 		unsigned long flags;
 
 		folio_get(folio);
 		local_lock_irqsave(&lru_rotate.lock, flags);
-		pvec = this_cpu_ptr(&lru_rotate.pvec);
-		if (pagevec_add_and_need_flush(pvec, &folio->page))
-			pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
+		fbatch = this_cpu_ptr(&lru_rotate.fbatch);
+		folio_batch_add_and_move(fbatch, folio, lru_move_tail_fn);
 		local_unlock_irqrestore(&lru_rotate.lock, flags);
 	}
 }
@@ -595,19 +628,20 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
  */
 void lru_add_drain_cpu(int cpu)
 {
+	struct folio_batch *fbatch;
 	struct pagevec *pvec = &per_cpu(lru_pvecs.lru_add, cpu);
 
 	if (pagevec_count(pvec))
 		__pagevec_lru_add(pvec);
 
-	pvec = &per_cpu(lru_rotate.pvec, cpu);
+	fbatch = &per_cpu(lru_rotate.fbatch, cpu);
 	/* Disabling interrupts below acts as a compiler barrier. */
-	if (data_race(pagevec_count(pvec))) {
+	if (data_race(folio_batch_count(fbatch))) {
 		unsigned long flags;
 
 		/* No harm done if a racing interrupt already did this */
 		local_lock_irqsave(&lru_rotate.lock, flags);
-		pagevec_lru_move_fn(pvec, pagevec_move_tail_fn);
+		folio_batch_move_lru(fbatch, lru_move_tail_fn);
 		local_unlock_irqrestore(&lru_rotate.lock, flags);
 	}
 
@@ -824,7 +858,7 @@ static inline void __lru_add_drain_all(bool force_all_cpus)
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
 		if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
-		    data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
+		    data_race(folio_batch_count(&per_cpu(lru_rotate.fbatch, cpu))) ||
 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
 		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
 		    pagevec_count(&per_cpu(lru_pvecs.lru_lazyfree, cpu)) ||
-- 
2.35.1
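
For readers following the conversion: the core of folio_batch_add_and_move()
is a generic batch-and-drain pattern.  Work items are buffered in a small
fixed-size array, and the expensive step (here, taking the lruvec lock and
moving folios) runs once per batch instead of once per item.  Below is a
minimal stand-alone C sketch of that pattern, for illustration only.  Every
name in it (item_batch, batch_add, batch_add_and_move, BATCH_SIZE, and so
on) is hypothetical rather than kernel API, and it deliberately omits the
large-folio and lru_cache_disabled() early-flush checks, the locking, and
the folio reference counting that the real code needs.

#include <stdio.h>

#define BATCH_SIZE 15	/* small on purpose, like the kernel's pagevec */

struct item_batch {
	unsigned int nr;
	int items[BATCH_SIZE];
};

typedef void (*item_move_fn_t)(int item);

/* Add an item; return how many free slots remain (0 means "full"). */
static unsigned int batch_add(struct item_batch *batch, int item)
{
	batch->items[batch->nr++] = item;
	return BATCH_SIZE - batch->nr;
}

/* Drain the batch: apply move_fn to each buffered item, then reset. */
static void batch_move(struct item_batch *batch, item_move_fn_t move_fn)
{
	unsigned int i;

	for (i = 0; i < batch->nr; i++)
		move_fn(batch->items[i]);
	batch->nr = 0;
}

/* Buffer the item; drain only when the batch fills up. */
static void batch_add_and_move(struct item_batch *batch, int item,
			       item_move_fn_t move_fn)
{
	if (batch_add(batch, item))
		return;		/* still room: defer the expensive work */
	batch_move(batch, move_fn);
}

static void print_item(int item)
{
	printf("moving %d\n", item);
}

int main(void)
{
	struct item_batch batch = { 0 };
	int i;

	for (i = 0; i < 40; i++)
		batch_add_and_move(&batch, i, print_item);
	/* Drain the final partial batch, analogous to lru_add_drain(). */
	batch_move(&batch, print_item);
	return 0;
}

The kernel version has the same shape: folio_batch_add() returns the number
of free slots, and folio_batch_move_lru() is the drain.  The extra checks
for large folios and lru_cache_disabled() simply force the drain to happen
more often than "batch full" alone would.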