From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <391d4f4f-e642-4c11-a36b-190874963f8a@arm.com>
Date: Tue, 13 Aug 2024 12:52:55 +0530
Subject: Re: [PATCH 1/2] mm: Retry migration earlier upon refcount mismatch
From: Dev Jain <dev.jain@arm.com>
To: "Huang, Ying"
Cc: akpm@linux-foundation.org, shuah@kernel.org, david@redhat.com,
 willy@infradead.org, ryan.roberts@arm.com, anshuman.khandual@arm.com,
 catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com,
 apopple@nvidia.com, osalvador@suse.de, baolin.wang@linux.alibaba.com,
 dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org,
 ioworker0@gmail.com, gshan@redhat.com, mark.rutland@arm.com,
 kirill.shutemov@linux.intel.com, hughd@google.com, aneesh.kumar@kernel.org,
 yang@os.amperecomputing.com, peterx@redhat.com, broonie@kernel.org,
 mgorman@techsingularity.net, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org
References: <20240809103129.365029-1-dev.jain@arm.com>
 <20240809103129.365029-2-dev.jain@arm.com>
 <87frrauwwv.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <15dbe4ac-a036-4029-ba08-e12a236f448a@arm.com>
 <87bk1yuuzu.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <95b72817-5444-4ced-998a-1cb90f42bf49@arm.com>
 <8734naurhm.fsf@yhuang6-desk2.ccr.corp.intel.com>
 <9d84e4e8-ac54-4eb1-a113-3f32aea864c9@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
On 8/13/24 10:30, Dev Jain wrote:
>
> On 8/12/24 17:38, Dev Jain wrote:
>>
>> On 8/12/24 13:01, Huang, Ying wrote:
>>> Dev Jain writes:
>>>
>>>> On 8/12/24 11:45, Huang, Ying wrote:
>>>>> Dev Jain writes:
>>>>>
>>>>>> On 8/12/24 11:04, Huang, Ying wrote:
>>>>>>> Hi, Dev,
>>>>>>>
>>>>>>> Dev Jain writes:
>>>>>>>
>>>>>>>> As is already done in __migrate_folio(), where we back off if the
>>>>>>>> folio refcount is wrong, make this check during the unmapping
>>>>>>>> phase as well. Upon its failure, the original state of the PTEs
>>>>>>>> will be restored and the folio lock will be dropped via
>>>>>>>> migrate_folio_undo_src(); any racing thread will make progress
>>>>>>>> and migration will be retried.
>>>>>>>>
>>>>>>>> Signed-off-by: Dev Jain
>>>>>>>> ---
>>>>>>>>  mm/migrate.c | 9 +++++++++
>>>>>>>>  1 file changed, 9 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>>>>>> index e7296c0fb5d5..477acf996951 100644
>>>>>>>> --- a/mm/migrate.c
>>>>>>>> +++ b/mm/migrate.c
>>>>>>>> @@ -1250,6 +1250,15 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>>>>>>>>  	}
>>>>>>>>  
>>>>>>>>  	if (!folio_mapped(src)) {
>>>>>>>> +		/*
>>>>>>>> +		 * Someone may have changed the refcount and may be
>>>>>>>> +		 * sleeping on the folio lock. In case of refcount
>>>>>>>> +		 * mismatch, bail out, let the system make progress
>>>>>>>> +		 * and retry.
>>>>>>>> +		 */
>>>>>>>> +		struct address_space *mapping = folio_mapping(src);
>>>>>>>> +
>>>>>>>> +		if (folio_ref_count(src) != folio_expected_refs(mapping, src))
>>>>>>>> +			goto out;
>>>>>>>>  		__migrate_folio_record(dst, old_page_state, anon_vma);
>>>>>>>>  		return MIGRATEPAGE_UNMAP;
>>>>>>>>  	}
>>>>>>> Do you have some test results for this? For example, after
>>>>>>> applying the patch, the migration success rate increased XX%, etc.
>>>>>> I'll get back to you on this.
>>>>>>
>>>>>>> My understanding of this issue is that the migration success rate
>>>>>>> can increase if we undo all changes before retrying. This is the
>>>>>>> current behavior for sync migration, but not for async migration.
>>>>>>> If so, can we use migrate_pages_sync() for async migration too, to
>>>>>>> increase the success rate? Of course, we would need to change the
>>>>>>> function name and comments.
>>>>>> As per my understanding, this is not the current behaviour for sync
>>>>>> migration. After successful unmapping, we fail in
>>>>>> migrate_folio_move() with -EAGAIN; we do not undo src+dst
>>>>>> (rendering the loop around migrate_folio_move() futile), and we do
>>>>>> not push the failed folio onto the ret_folios list. Therefore, in
>>>>>> _sync(), _batch() is never tried again.
>>>>> In migrate_pages_sync(), migrate_pages_batch(,MIGRATE_ASYNC) is
>>>>> called first; if it fails, the folio is restored to its original
>>>>> state (unlocked). Then migrate_pages_batch(,_SYNC*) is called
>>>>> again. So, we unlock once. If necessary, we can unlock more times
>>>>> via another level of loop.
>>>> Yes, that's my point. We need to undo src+dst and retry.
>>> For sync migration, we undo src+dst and retry now, but only once. You
>>> have shown that more retrying increases the success rate.
>>>
>>>> We will have to decide where we want this retrying to be: do we want
>>>> to change the return value, end up in the while loop wrapped around
>>>> _sync(), and retry there by adding another level of loop, or do we
>>>> want to make use of the existing retry loops, one of which is
>>>> wrapped around _unmap()? The latter is my approach. The utility I
>>>> see in the former approach is that, in the case of a large number of
>>>> page migrations (which should usually be the case), we give the
>>>> folio more time to be retried. The latter does not give much time
>>>> and discards the folio if it does not succeed within 7 tries.
>>> Because it's a race, I guess that most folios will be migrated
>>> successfully in the first pass.
>>>
>>> My concern with your method is that it handles just one case
>>> specially, while retrying after undoing everything appears more
>>> general.
>>
>> Makes sense. Also, please ignore my "change the return value" idea; I
>> got confused between unmap_folios, ret_folios, etc. Now I think I
>> understand what the lists are doing :)
>>
>>> If it's really important to retry after undoing everything, we can
>>> either convert the two retry loops of migrate_pages_batch() into one
>>> loop, or remove the retry loop in migrate_pages_batch() and retry in
>>> its caller instead.
>>
>> And if I implemented this correctly, the following makes the test pass
>> always:
>> https://www.codedump.xyz/diff/Zrn7EdxzNXmXyNXe
>
> Okay, I messed up the implementation, leading to a false positive. Let
> me try again :)

Hopefully this should do the job:
https://www.codedump.xyz/diff/ZrsIV8JSOPYx5V_u

But the result is worse than with the proposed patch; I rarely hit a
three-digit number of successes of move_pages(). Yet, on a base kernel
without any changes, when I apply David's suggestion to change the test,
if I choose 7 as the number of retries (= NR_MAX_MIGRATE_SYNC_RETRY) in
the test, I can touch even four digits. I am puzzled. We can also try
merging the for loops of unmap and move...

>>> --
>>> Best Regards,
>>> Huang, Ying