From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 20 Mar 2024 16:02:47 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Zi Yan
Cc: linux-mm@kvack.org, Andrew Morton, Yang Shi, Huang Ying, "Kirill A.
	Shutemov", Ryan Roberts, Baolin Wang, "Yin, Fengwei", SeongJae Park,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4] mm/migrate: split source folio if it is on deferred split list
References: <20240320014511.306128-1-zi.yan@sent.com>
In-Reply-To: <20240320014511.306128-1-zi.yan@sent.com>

On Tue, Mar 19, 2024 at 09:45:11PM -0400, Zi Yan wrote:
> +++ b/mm/migrate.c
> @@ -1654,25 +1654,65 @@ static int migrate_pages_batch(struct list_head *from,
> 
>  		/*
>  		 * Large folio migration might be unsupported or
> -		 * the allocation might be failed so we should retry
> -		 * on the same folio with the large folio split
> +		 * the folio is on deferred split list so we should
> +		 * retry on the same folio with the large folio split
>  		 * to normal folios.
>  		 *
>  		 * Split folios are put in split_folios, and
>  		 * we will migrate them after the rest of the
>  		 * list is processed.
>  		 */
> -		if (!thp_migration_supported() && is_thp) {
> -			nr_failed++;
> -			stats->nr_thp_failed++;
> -			if (!try_split_folio(folio, split_folios)) {
> -				stats->nr_thp_split++;
> -				stats->nr_split++;
> +		if (is_thp) {
> +			bool is_on_deferred_list = false;
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +			/*
> +			 * Check without taking split_queue_lock to
> +			 * reduce locking overheads. The worst case is
> +			 * that if the folio is put on the deferred
> +			 * split list after the check, it will be
> +			 * migrated and not put back on the list.
> +			 * The migrated folio will not be split
> +			 * via shrinker during memory pressure.
> +			 */
> +			if (!data_race(list_empty(&folio->_deferred_list))) {
> +				struct deferred_split *ds_queue;
> +				unsigned long flags;
> +
> +				ds_queue =
> +					get_deferred_split_queue(folio);
> +				spin_lock_irqsave(&ds_queue->split_queue_lock,
> +						  flags);
> +				/*
> +				 * Only check if the folio is on
> +				 * deferred split list without removing
> +				 * it. Since the folio can be on
> +				 * deferred_split_scan() local list and
> +				 * removing it can cause the local list
> +				 * corruption. Folio split process
> +				 * below can handle it with the help of
> +				 * folio_ref_freeze().
> +				 */
> +				is_on_deferred_list =
> +					!list_empty(&folio->_deferred_list);
> +				spin_unlock_irqrestore(&ds_queue->split_queue_lock,
> +						       flags);
> +			}
> +#endif
> +			if (!thp_migration_supported() ||
> +			    is_on_deferred_list) {
> +				nr_failed++;
> +				stats->nr_thp_failed++;
> +				if (!try_split_folio(folio,
> +						     split_folios)) {
> +					stats->nr_thp_split++;
> +					stats->nr_split++;
> +					continue;
> +				}
> +				stats->nr_failed_pages += nr_pages;
> +				list_move_tail(&folio->lru, ret_folios);
> 				continue;
> 			}
> -			stats->nr_failed_pages += nr_pages;
> -			list_move_tail(&folio->lru, ret_folios);
> -			continue;
> 		}

I don't think we need to try quite this hard.  I don't think we need to
take the lock to be certain whether it's on the deferred list -- is there
anything preventing the folio from being added to the deferred list after
we drop the lock?

I also don't think we should account this as a thp split, since those are
treated by callers as failures.  So maybe this?

+++ b/mm/migrate.c
@@ -1652,6 +1652,17 @@ static int migrate_pages_batch(struct list_head *from,
 
 			cond_resched();
 
+			/*
+			 * The rare folio on the deferred split list should
+			 * be split now.  It should not count as a failure.
+			 */
+			if (nr_pages > 2 &&
+			    !list_empty(&folio->_deferred_list)) {
+				if (try_split_folio(folio, from) == 0) {
+					is_large = is_thp = false;
+					nr_pages = 1;
+				}
+			}
 			/*
 			 * Large folio migration might be unsupported or
 			 * the allocation might be failed so we should retry