Message-ID: <05d596f3-c59c-76c3-495e-09f8573cf438@linux.alibaba.com>
Date: Fri, 20 Oct 2023 10:09:35 +0800
Subject: Re: [PATCH] mm: migrate: record the mlocked page status to remove unnecessary lru drain
To: "Yin, Fengwei", "Huang, Ying", Zi Yan
Cc: akpm@linux-foundation.org, mgorman@techsingularity.net, hughd@google.com, vbabka@suse.cz,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <64899ad0bb78cde88b52abed1a5a5abbc9919998.1697632761.git.baolin.wang@linux.alibaba.com>
 <1F80D8DA-8BB5-4C7E-BC2F-030BF52931F7@nvidia.com>
 <87il73uos1.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <2ad721be-b81e-d279-0055-f995a8cfe180@linux.alibaba.com>
 <27f40fc2-806a-52a9-3697-4ed9cd7081d4@intel.com>
From: Baolin Wang <baolin.wang@linux.alibaba.com>

On 10/19/2023 8:07 PM, Yin, Fengwei wrote:
>
>
> On 10/19/2023 4:51 PM, Baolin Wang wrote:
>>
>>
>> On 10/19/2023 4:22 PM, Yin Fengwei wrote:
>>> Hi Baolin,
>>>
>>> On 10/19/23 15:25, Baolin Wang wrote:
>>>>
>>>>
>>>> On 10/19/2023 2:09 PM, Huang, Ying wrote:
>>>>> Zi Yan writes:
>>>>>
>>>>>> On 18 Oct 2023, at 9:04, Baolin Wang wrote:
>>>>>>
>>>>>>> When doing compaction, I found that lru_add_drain() is an obvious hotspot
>>>>>>> when migrating pages. The distribution of this hotspot is as follows:
>>>>>>>    - 18.75% compact_zone
>>>>>>>       - 17.39% migrate_pages
>>>>>>>          - 13.79% migrate_pages_batch
>>>>>>>             - 11.66% migrate_folio_move
>>>>>>>                - 7.02% lru_add_drain
>>>>>>>                   + 7.02% lru_add_drain_cpu
>>>>>>>                + 3.00% move_to_new_folio
>>>>>>>                  1.23% rmap_walk
>>>>>>>             + 1.92% migrate_folio_unmap
>>>>>>>          + 3.20% migrate_pages_sync
>>>>>>>       + 0.90% isolate_migratepages
>>>>>>>
>>>>>>> The lru_add_drain() was added by commit c3096e6782b7 ("mm/migrate:
>>>>>>> __unmap_and_move() push good newpage to LRU") to drain the newpage to the LRU
>>>>>>> immediately, to help build up the correct newpage->mlock_count in
>>>>>>> remove_migration_ptes() for mlocked pages. However, if no mlocked pages
>>>>>>> are being migrated, we can avoid this lru drain operation, especially
>>>>>>> in heavy concurrent scenarios.
>>>>>>
>>>>>> lru_add_drain() is also used to drain pages out of folio_batch. Pages in folio_batch
>>>>>> have an additional pin to prevent migration. See folio_get(folio); in folio_add_lru().
>>>>>
>>>>> lru_add_drain() is called after the page reference count checking in
>>>>> move_to_new_folio().  So, I don't think this is an issue.
>>>>
>>>> Agree. The purpose of adding lru_add_drain() is to address the 'mlock_count'
>>>> issue for mlocked pages. Please see commit c3096e6782b7 and the related comments.
>>>> Moreover, I haven't seen an increase in the number of page migration failures
>>>> due to the page reference count checking after this patch.
>>>
>>> I agree with you. My understanding also is that the lru_add_drain() is only needed
>>> for mlocked folios to correct the mlock_count. I'd like to hear confirmation from Hugh.
>>>
>>>
>>> But I have a question: why do we need to use page_was_mlocked instead of checking
>>> folio_test_mlocked(src)? Does page migration clear the mlock flag? Thanks.
>>
>> Yes, please see the call trace: try_to_migrate_one() ---> page_remove_rmap() ---> munlock_vma_folio().
>
> Yes. This will clear the mlock bit.
>
> What about setting the dst folio mlocked, if the source is, before try_to_migrate_one()?
> And then check whether the dst folio is mlocked afterwards? We would need to clear
> mlocked if the migration fails. I suppose the change is minor. Just a thought. Thanks.

IMO, this will break the mlock related statistics in mlock_folio() when
remove_migration_pte() rebuilds the mlock status and mlock count.

Another concern I can see is that, during the page migration, a concurrent
munlock() can be called to clear the VM_LOCKED flags on the VMAs, in which case
remove_migration_pte() should not rebuild the mlock status and mlock count. But
the dst folio's mlocked status would still remain, which is wrong.

So your suggested approach seems not easy, and I think my patch is simpler,
re-using the existing __migrate_folio_record() and __migrate_folio_extract() :)
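For reference, the idea is that __migrate_folio_record()/__migrate_folio_extract()
already stash the anon_vma pointer (plus a "was mapped" bit) in dst->private across
the unmap/move split of batched migration, so a second low bit of that pointer can
carry the "was mlocked" state as well. A rough sketch of the packing, following the
style of the existing helpers in mm/migrate.c; the macro names and exact layout here
are illustrative, not necessarily what the final patch uses:

/*
 * Sketch only: struct anon_vma is word-aligned, so the low bits of the
 * pointer stored in dst->private are free to carry per-folio state
 * (was-mapped, was-mlocked) from migrate_folio_unmap() to
 * migrate_folio_move().
 */
#define PAGE_WAS_MAPPED		BIT(0)
#define PAGE_WAS_MLOCKED	BIT(1)
#define PAGE_OLD_STATES		(PAGE_WAS_MAPPED | PAGE_WAS_MLOCKED)

static void __migrate_folio_record(struct folio *dst,
				   int old_page_state,
				   struct anon_vma *anon_vma)
{
	/* Pack the old mapped/mlocked bits into the anon_vma pointer. */
	dst->private = (void *)anon_vma + old_page_state;
}

static void __migrate_folio_extract(struct folio *dst,
				    int *old_page_state,
				    struct anon_vma **anon_vmap)
{
	unsigned long private = (unsigned long)dst->private;

	/* Separate the pointer from the state bits again. */
	*anon_vmap = (struct anon_vma *)(private & ~PAGE_OLD_STATES);
	*old_page_state = private & PAGE_OLD_STATES;
	dst->private = NULL;
}

With that in place, migrate_folio_move() can test the extracted PAGE_WAS_MLOCKED bit
and only call lru_add_drain() before remove_migration_ptes() when the source folio
was actually mlocked, which removes the drain from the common path seen in the
profile above.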