From: Dev Jain <dev.jain@arm.com>
Date: Mon, 12 Aug 2024 17:38:35 +0530
Subject: Re: [PATCH 1/2] mm: Retry migration earlier upon refcount mismatch
To: "Huang, Ying"
Cc: akpm@linux-foundation.org, shuah@kernel.org, david@redhat.com, willy@infradead.org, ryan.roberts@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com, cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com, osalvador@suse.de, baolin.wang@linux.alibaba.com, dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org, ioworker0@gmail.com, gshan@redhat.com, mark.rutland@arm.com, kirill.shutemov@linux.intel.com, hughd@google.com, aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com, broonie@kernel.org, mgorman@techsingularity.net, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org
Message-ID: <9d84e4e8-ac54-4eb1-a113-3f32aea864c9@arm.com>
In-Reply-To: <8734naurhm.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20240809103129.365029-1-dev.jain@arm.com> <20240809103129.365029-2-dev.jain@arm.com> <87frrauwwv.fsf@yhuang6-desk2.ccr.corp.intel.com> <15dbe4ac-a036-4029-ba08-e12a236f448a@arm.com> <87bk1yuuzu.fsf@yhuang6-desk2.ccr.corp.intel.com> <95b72817-5444-4ced-998a-1cb90f42bf49@arm.com> <8734naurhm.fsf@yhuang6-desk2.ccr.corp.intel.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 8/12/24 13:01, Huang, Ying wrote:
> Dev Jain writes:
>
>> On 8/12/24 11:45, Huang, Ying wrote:
>>> Dev Jain writes:
>>>
>>>> On 8/12/24 11:04, Huang, Ying wrote:
>>>>> Hi, Dev,
>>>>>
>>>>> Dev Jain writes:
>>>>>
>>>>>> As already being done in __migrate_folio(), wherein we backoff if the
>>>>>> folio refcount is wrong, make this check during the unmapping phase, upon
>>>>>> the failure of which, the original state of the PTEs will be restored and
>>>>>> the folio lock will be dropped via migrate_folio_undo_src(), any racing
>>>>>> thread will make progress and migration will be retried.
>>>>>>
>>>>>> Signed-off-by: Dev Jain
>>>>>> ---
>>>>>>  mm/migrate.c | 9 +++++++++
>>>>>>  1 file changed, 9 insertions(+)
>>>>>>
>>>>>> diff --git a/mm/migrate.c b/mm/migrate.c
>>>>>> index e7296c0fb5d5..477acf996951 100644
>>>>>> --- a/mm/migrate.c
>>>>>> +++ b/mm/migrate.c
>>>>>> @@ -1250,6 +1250,15 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>>>>>>  	}
>>>>>>
>>>>>>  	if (!folio_mapped(src)) {
>>>>>> +		/*
>>>>>> +		 * Someone may have changed the refcount and maybe sleeping
>>>>>> +		 * on the folio lock. In case of refcount mismatch, bail out,
>>>>>> +		 * let the system make progress and retry.
>>>>>> +		 */
>>>>>> +		struct address_space *mapping = folio_mapping(src);
>>>>>> +
>>>>>> +		if (folio_ref_count(src) != folio_expected_refs(mapping, src))
>>>>>> +			goto out;
>>>>>>  		__migrate_folio_record(dst, old_page_state, anon_vma);
>>>>>>  		return MIGRATEPAGE_UNMAP;
>>>>>>  	}
>>>>> Do you have some test results for this? For example, after applying the
>>>>> patch, the migration success rate increased XX%, etc.
>>>> I'll get back to you on this.
>>>>
>>>>> My understanding for this issue is that the migration success rate can
>>>>> increase if we undo all changes before retrying. This is the current
>>>>> behavior for sync migration, but not for async migration. If so, we can
>>>>> use migrate_pages_sync() for async migration too to increase success
>>>>> rate? Of course, we need to change the function name and comments.
>>>> As per my understanding, this is not the current behaviour for sync
>>>> migration. After successful unmapping, we fail in migrate_folio_move()
>>>> with -EAGAIN, we do not call undo src+dst (rendering the loop around
>>>> migrate_folio_move() futile), we do not push the failed folio onto the
>>>> ret_folios list, therefore, in _sync(), _batch() is never tried again.
>>> In migrate_pages_sync(), migrate_pages_batch(,MIGRATE_ASYNC) will be
>>> called first; if it fails, the folio will be restored to the original
>>> state (unlocked). Then migrate_pages_batch(,_SYNC*) is called again.
>>> So, we unlock once. If it's necessary, we can unlock more times via
>>> another level of loop.
>> Yes, that's my point. We need to undo src+dst and retry.
> For sync migration, we undo src+dst and retry now, but only once. You
> have shown that more retrying increases success rate.
>
>> We will have
>> to decide where we want this retrying to be; do we want to change the
>> return value, end up in the while loop wrapped around _sync(), and retry
>> there by adding another level of loop, or do we want to make use of the
>> existing retry loops, one of which is wrapped around _unmap(); the latter
>> is my approach. The utility I see for the former approach is that, in case
>> of a large number of page migrations (which should usually be the case),
>> we are giving more time for the folio to get retried. The latter does not
>> give much time, and discards the folio if it did not succeed within 7 tries.
> Because it's a race, I guess that most folios will be migrated
> successfully in the first pass.
>
> My concern with your method is that it deals with just one case
> specially, while retrying after undoing all appears more general.

Makes sense. Also, please ignore my "change the return value" thing; I got
confused between unmap_folios, ret_folios, etc. Now I think I understand
what the lists are doing :)

> If it's really important to retry after undoing all, we can either
> convert the two retry loops of migrate_pages_batch() into one loop, or
> remove the retry loop in migrate_pages_batch() and retry in its caller
> instead.

And if I implemented this correctly, the following makes the test pass
always: https://www.codedump.xyz/diff/Zrn7EdxzNXmXyNXe

> --
> Best Regards,
> Huang, Ying
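[For readers following the thread: below is a minimal user-space sketch of the two ideas under discussion — bailing out of the unmap phase on a refcount mismatch, and retrying after undoing all changes so a racing thread can make progress. This is not kernel code; the names (folio_model, unmap_phase, undo_src, migrate_with_retries) are illustrative stand-ins for migrate_folio_unmap(), migrate_folio_undo_src(), and the retry loop in migrate_pages_batch(), and the "racer drops its reference" step is modeled directly rather than happening on another thread.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a folio under migration (illustrative, not the kernel API). */
struct folio_model {
	int refcount;       /* current reference count */
	int expected_refs;  /* refs the migration code expects to hold */
	bool locked;
	bool ptes_replaced; /* PTEs swapped for migration entries */
};

/*
 * Models the patch's idea: after unmapping, check the refcount and bail
 * out early on a mismatch, instead of proceeding to the move phase only
 * to fail there with -EAGAIN.
 */
static bool unmap_phase(struct folio_model *f)
{
	f->locked = true;
	f->ptes_replaced = true;
	return f->refcount == f->expected_refs;
}

/*
 * Models migrate_folio_undo_src(): restore the original state and drop
 * the lock, so whoever is sleeping on it can make progress.
 */
static void undo_src(struct folio_model *f)
{
	f->ptes_replaced = false;
	f->locked = false;
}

/*
 * Retry loop in the spirit of migrate_pages_batch(): on each failure,
 * undo everything, let the racer run, and try again up to a budget.
 */
static bool migrate_with_retries(struct folio_model *f, int max_retries)
{
	for (int i = 0; i < max_retries; i++) {
		if (unmap_phase(f))
			return true; /* would proceed to the move phase */
		undo_src(f);
		if (f->refcount > f->expected_refs)
			f->refcount--; /* model: racer drops its extra ref */
	}
	return false; /* retry budget exhausted; folio is given up on */
}
```

With a single transient extra reference, the first attempt fails, the undo lets the (modeled) racer drop its reference, and the second attempt succeeds — which is the "most folios migrate in the first pass or two" intuition above.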