From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID:
Date: Wed, 3 Jul 2024 19:15:32 +0800
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH V2] mm/gup: Clear the LRU flag of a page before adding to LRU batch
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, stable@vger.kernel.org, david@redhat.com,
 baolin.wang@linux.alibaba.com, liuzixing@hygon.cn
References: <1719038884-1903-1-git-send-email-yangge1116@126.com>
From: Ge Yang <yangge1116@126.com>
In-Reply-To:
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 2024/7/3 17:46, Barry Song wrote:
> On Sat, Jun 22, 2024 at 6:48 PM wrote:
>>
>> From: yangge
>>
>> If a large amount of CMA memory is configured in the system (for
>> example, CMA memory accounts for 50% of system memory), starting a
>> virtual machine will call pin_user_pages_remote(..., FOLL_LONGTERM,
>> ...) to pin memory. Normally, if a page is present and in the CMA
>> area, pin_user_pages_remote() will migrate the page from the CMA area
>> to a non-CMA area because of the FOLL_LONGTERM flag. But the current
>> code causes the migration to fail due to an unexpected page refcount,
>> and eventually causes the virtual machine to fail to start.
>>
>> Adding a page to an LRU batch increases its refcount by one, and
>> removing it from the batch decreases it by one. Page migration
>> requires that the page not be referenced by anything other than its
>> page mapping.
>> Before migrating a page, we should try to drain it from the LRU batch
>> in case it is there; however, folio_test_lru() is not sufficient to
>> tell whether the page is in an LRU batch or not, and if the page is
>> still in an LRU batch, the migration will fail.
>>
>> To solve the problem above, we modify the logic of adding a page to an
>> LRU batch: before adding the page, we clear its LRU flag so that we
>> can check whether the page is in an LRU batch with
>> folio_test_lru(page). Keeping the LRU flag of the page invisible for a
>> longer time seems to be no problem, because when a new page is
>> allocated from buddy and added to an LRU batch, its LRU flag is also
>> not visible for a long time.
>>
>> Cc:
>
> you have Cced stable, what is the fixes tag?

Thanks, I will add it in the next version.

>
>> Signed-off-by: yangge
>> ---
>>  mm/swap.c | 43 +++++++++++++++++++++++++++++++------------
>>  1 file changed, 31 insertions(+), 12 deletions(-)
>>
>> diff --git a/mm/swap.c b/mm/swap.c
>> index dc205bd..9caf6b0 100644
>> --- a/mm/swap.c
>> +++ b/mm/swap.c
>> @@ -211,10 +211,6 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
>>          for (i = 0; i < folio_batch_count(fbatch); i++) {
>>                  struct folio *folio = fbatch->folios[i];
>>
>> -                /* block memcg migration while the folio moves between lru */
>> -                if (move_fn != lru_add_fn && !folio_test_clear_lru(folio))
>> -                        continue;
>> -
>>                  folio_lruvec_relock_irqsave(folio, &lruvec, &flags);
>>                  move_fn(lruvec, folio);
>>
>> @@ -255,11 +251,16 @@ static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
>>  void folio_rotate_reclaimable(struct folio *folio)
>>  {
>>          if (!folio_test_locked(folio) && !folio_test_dirty(folio) &&
>> -            !folio_test_unevictable(folio) && folio_test_lru(folio)) {
>> +            !folio_test_unevictable(folio)) {
>>                  struct folio_batch *fbatch;
>>                  unsigned long flags;
>>
>>                  folio_get(folio);
>> +                if (!folio_test_clear_lru(folio)) {
>> +                        folio_put(folio);
>> +                        return;
>> +                }
>> +
>>                  local_lock_irqsave(&lru_rotate.lock, flags);
>>                  fbatch = this_cpu_ptr(&lru_rotate.fbatch);
>>                  folio_batch_add_and_move(fbatch, folio, lru_move_tail_fn);
>> @@ -352,11 +353,15 @@ static void folio_activate_drain(int cpu)
>>
>>  void folio_activate(struct folio *folio)
>>  {
>> -        if (folio_test_lru(folio) && !folio_test_active(folio) &&
>> -            !folio_test_unevictable(folio)) {
>> +        if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
>>                  struct folio_batch *fbatch;
>>
>>                  folio_get(folio);
>> +                if (!folio_test_clear_lru(folio)) {
>> +                        folio_put(folio);
>> +                        return;
>> +                }
>> +
>>                  local_lock(&cpu_fbatches.lock);
>>                  fbatch = this_cpu_ptr(&cpu_fbatches.activate);
>>                  folio_batch_add_and_move(fbatch, folio, folio_activate_fn);
>> @@ -700,6 +705,11 @@ void deactivate_file_folio(struct folio *folio)
>>                  return;
>>
>>          folio_get(folio);
>> +        if (!folio_test_clear_lru(folio)) {
>> +                folio_put(folio);
>> +                return;
>> +        }
>> +
>>          local_lock(&cpu_fbatches.lock);
>>          fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate_file);
>>          folio_batch_add_and_move(fbatch, folio, lru_deactivate_file_fn);
>> @@ -716,11 +726,16 @@ void deactivate_file_folio(struct folio *folio)
>>   */
>>  void folio_deactivate(struct folio *folio)
>>  {
>> -        if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
>> -            (folio_test_active(folio) || lru_gen_enabled())) {
>> +        if (!folio_test_unevictable(folio) && (folio_test_active(folio) ||
>> +            lru_gen_enabled())) {
>>                  struct folio_batch *fbatch;
>>
>>                  folio_get(folio);
>> +                if (!folio_test_clear_lru(folio)) {
>> +                        folio_put(folio);
>> +                        return;
>> +                }
>> +
>>                  local_lock(&cpu_fbatches.lock);
>>                  fbatch = this_cpu_ptr(&cpu_fbatches.lru_deactivate);
>>                  folio_batch_add_and_move(fbatch, folio, lru_deactivate_fn);
>> @@ -737,12 +752,16 @@ void folio_deactivate(struct folio *folio)
>>   */
>>  void folio_mark_lazyfree(struct folio *folio)
>>  {
>> -        if (folio_test_lru(folio) && folio_test_anon(folio) &&
>> -            folio_test_swapbacked(folio) && !folio_test_swapcache(folio) &&
>> -            !folio_test_unevictable(folio)) {
>> +        if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
>> +            !folio_test_swapcache(folio) && !folio_test_unevictable(folio)) {
>>                  struct folio_batch *fbatch;
>>
>>                  folio_get(folio);
>> +                if (!folio_test_clear_lru(folio)) {
>> +                        folio_put(folio);
>> +                        return;
>> +                }
>> +
>>                  local_lock(&cpu_fbatches.lock);
>>                  fbatch = this_cpu_ptr(&cpu_fbatches.lru_lazyfree);
>>                  folio_batch_add_and_move(fbatch, folio, lru_lazyfree_fn);
>> --
>> 2.7.4
>>
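
For anyone following the thread, below is a small stand-alone sketch (not
kernel code; all names are invented for illustration) of the pattern the
patch relies on: clear the "visible on list" flag before an item enters a
pending batch, so that testing the flag alone tells you whether the item is
really on the list or is parked in some CPU's batch. This is roughly what
folio_test_clear_lru()/folio_test_lru() provide in mm/swap.c.

/*
 * Stand-alone sketch (user-space C, names invented) of the idea in this
 * patch: clear a "visible on list" flag *before* the item goes into a
 * pending batch, so testing the flag is enough to tell whether the item
 * is on the list or sitting in somebody's batch.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct item {
	atomic_bool on_list;            /* analogue of the folio LRU flag */
};

/* Analogue of folio_test_clear_lru(): atomically test and clear. */
static bool test_clear_on_list(struct item *it)
{
	return atomic_exchange(&it->on_list, false);
}

/* Add to a batch only if we managed to clear the flag first. */
static bool batch_add(struct item *it)
{
	if (!test_clear_on_list(it))
		return false;           /* already batched or off the list */
	/* ... the item would be queued into a per-CPU batch here ... */
	return true;
}

int main(void)
{
	struct item it = { .on_list = true };

	printf("first add:  %s\n", batch_add(&it) ? "queued" : "skipped");
	printf("second add: %s\n", batch_add(&it) ? "queued" : "skipped");
	return 0;
}

The patch above does the analogous thing with folio_test_clear_lru() before
folio_batch_add_and_move(), and drops the extra reference with folio_put()
when the flag was already clear.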