From: Baolin Wang <baolin.wang@linux.alibaba.com>
Date: Fri, 20 Oct 2023 10:45:45 +0800
Subject: Re: [PATCH] mm: migrate: record the mlocked page status to remove unnecessary lru drain
To: "Yin, Fengwei", "Huang, Ying", Zi Yan
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, hughd@google.com, vbabka@suse.cz, linux-mm@kvack.org, linux-kernel@vger.kernel.org

On 10/20/2023 10:30 AM, Yin, Fengwei wrote:
>
> On 10/20/2023 10:09 AM, Baolin Wang wrote:
>>
>> On 10/19/2023 8:07 PM, Yin, Fengwei wrote:
>>>
>>> On 10/19/2023 4:51 PM, Baolin Wang wrote:
>>>>
>>>> On 10/19/2023 4:22 PM, Yin Fengwei wrote:
>>>>> Hi Baolin,
>>>>>
>>>>> On 10/19/23 15:25, Baolin Wang wrote:
>>>>>>
>>>>>> On 10/19/2023 2:09 PM, Huang, Ying wrote:
>>>>>>> Zi Yan writes:
>>>>>>>
>>>>>>>> On 18 Oct 2023, at 9:04, Baolin Wang wrote:
>>>>>>>>
>>>>>>>>> When doing compaction, I found that lru_add_drain() is an obvious
>>>>>>>>> hotspot when migrating pages. The distribution of this hotspot is
>>>>>>>>> as follows:
>>>>>>>>>       - 18.75% compact_zone
>>>>>>>>>          - 17.39% migrate_pages
>>>>>>>>>             - 13.79% migrate_pages_batch
>>>>>>>>>                - 11.66% migrate_folio_move
>>>>>>>>>                   - 7.02% lru_add_drain
>>>>>>>>>                      + 7.02% lru_add_drain_cpu
>>>>>>>>>                   + 3.00% move_to_new_folio
>>>>>>>>>                     1.23% rmap_walk
>>>>>>>>>                + 1.92% migrate_folio_unmap
>>>>>>>>>             + 3.20% migrate_pages_sync
>>>>>>>>>          + 0.90% isolate_migratepages
>>>>>>>>>
>>>>>>>>> The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
>>>>>>>>> __unmap_and_move() push good newpage to LRU") to drain the newpage
>>>>>>>>> to LRU immediately, to help build up the correct
>>>>>>>>> newpage->mlock_count in remove_migration_ptes() for mlocked pages.
>>>>>>>>> However, if no mlocked pages are migrating, we can avoid this lru
>>>>>>>>> drain operation, especially in heavy concurrent scenarios.
>>>>>>>>
>>>>>>>> lru_add_drain() is also used to drain pages out of folio_batch.
>>>>>>>> Pages in folio_batch have an additional pin to prevent migration.
>>>>>>>> See folio_get(folio); in folio_add_lru().
>>>>>>>
>>>>>>> lru_add_drain() is called after the page reference count checking
>>>>>>> in move_to_new_folio(). So, I don't think this is an issue.
>>>>>>
>>>>>> Agree. The purpose of adding lru_add_drain() is to address the
>>>>>> 'mlock_count' issue for mlocked pages. Please see commit
>>>>>> c3096e6782b7 and the related comments. Moreover, I haven't seen an
>>>>>> increase in the number of page migration failures due to the page
>>>>>> reference count checking after this patch.
>>>>>
>>>>> I agree with you. My understanding also is that the lru_add_drain()
>>>>> is only needed for mlocked folios to correct the mlock_count. I
>>>>> would like to hear confirmation from Hugh.
>>>>>
>>>>> But I have a question: why do we need to use page_was_mlocked
>>>>> instead of checking folio_test_mlocked(src)? Does page migration
>>>>> clear the mlock flag? Thanks.
>>>>
>>>> Yes, please see the call trace: try_to_migrate_one() --->
>>>> page_remove_rmap() ---> munlock_vma_folio().
>>>
>>> Yes. This will clear the mlock bit.
>>>
>>> What about setting the dst folio mlocked if the source is mlocked
>>> before try_to_migrate_one()? And then checking whether the dst folio
>>> is mlocked afterwards? We would also need to clear mlocked if the
>>> migration fails. I suppose the change is minor. Just a thought.
>>> Thanks.
>>
>> IMO, this will break the mlock related statistics in mlock_folio()
>> when remove_migration_pte() rebuilds the mlock status and mlock count.
>>
>> Another concern I can see is that, during the page migration, a
>> concurrent munlock() can be called to clear the VM_LOCKED flags on the
>> VMAs, in which case remove_migration_pte() should not rebuild the
>> mlock status and mlock count. But the dst folio's mlocked status would
>> still remain, which is wrong.
>>
>> So your suggested approach seems not easy, and I think my patch is
>> simpler, re-using the existing __migrate_folio_record() and
>> __migrate_folio_extract() :)
>
> Can these concerns be addressed by clearing dst mlocked after
> lru_add_drain() but before remove_migration_pte()?

IMHO, that seems too hacky. I still prefer to rely on the migration
process of the mlocked pages; a rough sketch of what I mean is below.
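
For reference, a minimal sketch of the recording approach, loosely
following the mm/migrate.c structure (condensed and simplified; the
PAGE_WAS_MLOCKED flag name and the exact plumbing are illustrative, not
necessarily what the final patch does):

#define PAGE_WAS_MAPPED		BIT(0)
#define PAGE_WAS_MLOCKED	BIT(1)	/* illustrative flag name */

	/*
	 * In migrate_folio_unmap(): sample the mlocked state before
	 * try_to_migrate() -> munlock_vma_folio() clears it, and stash
	 * it alongside the existing "page was mapped" state.
	 */
	old_page_state = 0;
	if (folio_test_mlocked(src))
		old_page_state |= PAGE_WAS_MLOCKED;
	if (folio_mapped(src)) {
		old_page_state |= PAGE_WAS_MAPPED;
		try_to_migrate(src,
			       mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
	}
	__migrate_folio_record(dst, old_page_state, anon_vma);

	/*
	 * In migrate_folio_move(): drain only when the source page was
	 * mlocked, so that remove_migration_ptes() finds dst already on
	 * the LRU and can build up the correct dst->mlock_count.
	 */
	__migrate_folio_extract(dst, &old_page_state, &anon_vma);
	/* ... move_to_new_folio() and friends run here ... */
	folio_add_lru(dst);
	if (old_page_state & PAGE_WAS_MLOCKED)
		lru_add_drain();
	if (old_page_state & PAGE_WAS_MAPPED)
		remove_migration_ptes(src, dst, false);

The point is that the drain then only runs for the (presumably rare)
mlocked pages, instead of once per migrated folio.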