From: Muchun Song <songmuchun@bytedance.com>
To: hannes@cmpxchg.org, mhocko@kernel.org, roman.gushchin@linux.dev, shakeelb@google.com
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, duanxiongchun@bytedance.com, longman@redhat.com, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v4 02/11] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
Date: Tue, 24 May 2022 14:05:42 +0800
Message-Id: <20220524060551.80037-3-songmuchun@bytedance.com>
In-Reply-To: <20220524060551.80037-1-songmuchun@bytedance.com>
References: <20220524060551.80037-1-songmuchun@bytedance.com>

If we reuse the objcg APIs to charge LRU pages, the result of
folio_memcg() can change while the LRU pages are being reparented. In
that case, we need to acquire the new lruvec lock.

    lruvec = folio_lruvec(folio);

    // The page is reparented.

    compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);

    // We acquired the wrong lruvec lock and need to retry.

But compact_lock_irqsave() only takes the lruvec lock as a parameter, so
it cannot detect this change. If it took the folio as a parameter and
derived the lruvec lock from it, then whenever the folio's memcg changed
we could use folio_memcg() to detect whether the new lruvec lock needs
to be reacquired. So compact_lock_irqsave() is not suitable here.
Similar to folio_lruvec_lock_irqsave(), introduce
compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in the
compaction routine.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
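A minimal, hypothetical sketch (for illustration only, not part of this
patch) of how a caller can use the folio-based helper to detect a
reparent: recheck folio_lruvec() after the lock is taken and retry if
the folio has meanwhile moved to a different lruvec.

	struct lruvec *lruvec;

retry:
	lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc);
	/*
	 * If the folio was reparented after the helper looked up its
	 * lruvec, we now hold a stale lock; drop it and retry against
	 * the new lruvec.
	 */
	if (unlikely(lruvec != folio_lruvec(folio))) {
		unlock_page_lruvec_irqrestore(lruvec, flags);
		goto retry;
	}
	/* Here lruvec->lru_lock is held and matches the folio's memcg. */
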
 mm/compaction.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index fe915db6149b..817098817302 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+static struct lruvec *
+compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
+				  struct compact_control *cc)
+{
+	struct lruvec *lruvec;
+
+	lruvec = folio_lruvec(folio);
+
+	/* Track if the lock is contended in async mode */
+	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
+		if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
+			goto out;
+
+		cc->contended = true;
+	}
+
+	spin_lock_irqsave(&lruvec->lru_lock, *flags);
+out:
+	lruvec_memcg_debug(lruvec, folio);
+
+	return lruvec;
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -844,6 +867,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 	/* Time to isolate some pages for migration */
 	for (; low_pfn < end_pfn; low_pfn++) {
+		struct folio *folio;
 
 		if (skip_on_failure && low_pfn >= next_skip_pfn) {
 			/*
@@ -1065,18 +1089,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!TestClearPageLRU(page))
 			goto isolate_fail_put;
 
-		lruvec = folio_lruvec(page_folio(page));
+		folio = page_folio(page);
+		lruvec = folio_lruvec(folio);
 
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
 				unlock_page_lruvec_irqrestore(locked, flags);
 
-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+			lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc);
 			locked = lruvec;
 
-			lruvec_memcg_debug(lruvec, page_folio(page));
-
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
 				skip_updated = true;
-- 
2.11.0