Date: Thu, 13 Jun 2024 10:45:48 +0200
Subject: Re: [PATCH v7 3/4] mm/rmap: integrate PMD-mapped folio splitting into
 pagewalk loop
From: David Hildenbrand <david@redhat.com>
To: Lance Yang
Cc: 21cnbao@gmail.com, akpm@linux-foundation.org, baolin.wang@linux.alibaba.com,
 fengwei.yin@intel.com, libang.li@antgroup.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, maskray@google.com, mhocko@suse.com, minchan@kernel.org,
 peterx@redhat.com, ryan.roberts@arm.com, shy828301@gmail.com, sj@kernel.org,
 songmuchun@bytedance.com, wangkefeng.wang@huawei.com, willy@infradead.org,
 xiehuan09@gmail.com, ziy@nvidia.com, zokeefe@google.com
References: <20240610120209.66311-1-ioworker0@gmail.com>
 <20240610120618.66520-1-ioworker0@gmail.com>
 <933c7339-2dbd-464b-b342-e4cff7ad75a3@redhat.com>
In-Reply-To: <933c7339-2dbd-464b-b342-e4cff7ad75a3@redhat.com>
Organization: Red Hat

On 13.06.24 10:34, David Hildenbrand wrote:
> On 10.06.24 14:06, Lance Yang wrote:
>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
>> folios, start the pagewalk first, then call split_huge_pmd_address() to
>> split the folio.
>>
>> Suggested-by: David Hildenbrand
>> Suggested-by: Baolin Wang
>> Signed-off-by: Lance Yang
>> ---
>>   include/linux/huge_mm.h |  6 ++++++
>>   mm/huge_memory.c        | 42 +++++++++++++++++++++--------------------
>>   mm/rmap.c               | 21 +++++++++++++++------
>>   3 files changed, 43 insertions(+), 26 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index 088d66a54643..4670c6ee118b 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -415,6 +415,9 @@ static inline bool thp_migration_supported(void)
>>   	return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
>>   }
>>   
>> +void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>> +			   pmd_t *pmd, bool freeze, struct folio *folio);
>> +
>>   #else /* CONFIG_TRANSPARENT_HUGEPAGE */
>>   
>>   static inline bool folio_test_pmd_mappable(struct folio *folio)
>> @@ -477,6 +480,9 @@ static inline void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>>   		unsigned long address, bool freeze, struct folio *folio) {}
>>   static inline void split_huge_pmd_address(struct vm_area_struct *vma,
>>   		unsigned long address, bool freeze, struct folio *folio) {}
>> +static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
>> +					 unsigned long address, pmd_t *pmd,
>> +					 bool freeze, struct folio *folio) {}
>>   
>>   #define split_huge_pud(__vma, __pmd, __address)	\
>>   	do { } while (0)
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index e6d26c2eb670..d2697cc8f9d4 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2581,6 +2581,27 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>>   	pmd_populate(mm, pmd, pgtable);
>>   }
>>   
>> +void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
>> +			   pmd_t *pmd, bool freeze, struct folio *folio)
>> +{
>> +	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
>> +	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
>> +	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
>> +	VM_BUG_ON(freeze && !folio);
> 
> Curious: could we actually end up here without a folio right now? That
> would mean that try_to_unmap_one() would be called with folio==NULL.
> 
>> +
>> +	/*
>> +	 * When the caller requests to set up a migration entry, we
>> +	 * require a folio to check the PMD against. Otherwise, there
>> +	 * is a risk of replacing the wrong folio.
>> +	 */
>> +	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
>> +	    is_pmd_migration_entry(*pmd)) {
>> +		if (folio && folio != pmd_folio(*pmd))
>> +			return;
>> +		__split_huge_pmd_locked(vma, pmd, address, freeze);
>> +	}
>> +}
>> +
>>   void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>>   		unsigned long address, bool freeze, struct folio *folio)
>>   {
>> @@ -2592,26 +2613,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>>   				(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
>>   	mmu_notifier_invalidate_range_start(&range);
>>   	ptl = pmd_lock(vma->vm_mm, pmd);
>> -
>> -	/*
>> -	 * If caller asks to setup a migration entry, we need a folio to check
>> -	 * pmd against. Otherwise we can end up replacing wrong folio.
>> -	 */
>> -	VM_BUG_ON(freeze && !folio);
>> -	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
>> -
>> -	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
>> -	    is_pmd_migration_entry(*pmd)) {
>> -		/*
>> -		 * It's safe to call pmd_page when folio is set because it's
>> -		 * guaranteed that pmd is present.
>> -		 */
>> -		if (folio && folio != pmd_folio(*pmd))
>> -			goto out;
>> -		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
>> -	}
>> -
>> -out:
>> +	split_huge_pmd_locked(vma, range.start, pmd, freeze, folio);
>>   	spin_unlock(ptl);
>>   	mmu_notifier_invalidate_range_end(&range);
>>   }
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index ddffa30c79fb..b77f88695588 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>   	if (flags & TTU_SYNC)
>>   		pvmw.flags = PVMW_SYNC;
>>   
>> -	if (flags & TTU_SPLIT_HUGE_PMD)
>> -		split_huge_pmd_address(vma, address, false, folio);
>> -
>>   	/*
>>   	 * For THP, we have to assume the worse case ie pmd for invalidation.
>>   	 * For hugetlb, it could be much worse if we need to do pud
>> @@ -1668,9 +1665,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>   	mmu_notifier_invalidate_range_start(&range);
>>   
>>   	while (page_vma_mapped_walk(&pvmw)) {
>> -		/* Unexpected PMD-mapped THP? */
>> -		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>> -
>>   		/*
>>   		 * If the folio is in an mlock()d vma, we must not swap it out.
>>   		 */
>> @@ -1682,6 +1676,21 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>   			goto walk_done_err;
>>   		}
>>   
>> +		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
>> +			/*
>> +			 * We temporarily have to drop the PTL and start once
>> +			 * again from that now-PTE-mapped page table.
>> +			 */
>> +			split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
>> +					      false, folio);
>> +			flags &= ~TTU_SPLIT_HUGE_PMD;
>> +			page_vma_mapped_walk_restart(&pvmw);
> 
> If, for some reason, split_huge_pmd_locked() were to fail, we would keep
> looping and never hit the VM_BUG_ON_FOLIO() below. Maybe we'd want to
> let split_huge_pmd_locked() return whether splitting succeeded, and
> handle that case differently?

I assume it could fail if we race with a concurrent split? Or isn't that
possible?

-- 
Cheers,

David / dhildenb
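Purely as an illustration of that last suggestion (not part of the posted
patch): if split_huge_pmd_locked() reported whether it actually split the
PMD, the caller in try_to_unmap_one() could react to a failed split instead
of silently restarting the walk. A rough sketch, where the bool return, the
bail-out via walk_done_err and the trailing continue are all assumptions
chosen only to make the idea concrete:

/*
 * Hypothetical variant: report whether the PMD was actually split, which
 * would not be the case if, e.g., the PMD no longer maps the given folio.
 */
bool split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
			   pmd_t *pmd, bool freeze, struct folio *folio)
{
	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
	VM_BUG_ON(freeze && !folio);

	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
	    is_pmd_migration_entry(*pmd)) {
		/* The PMD no longer maps the folio we were asked to split. */
		if (folio && folio != pmd_folio(*pmd))
			return false;
		__split_huge_pmd_locked(vma, pmd, address, freeze);
		return true;
	}
	return false;
}

	/* Caller side in try_to_unmap_one(); rest of the loop body elided. */
	if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
		if (!split_huge_pmd_locked(vma, pvmw.address, pvmw.pmd,
					   false, folio))
			goto walk_done_err;
		flags &= ~TTU_SPLIT_HUGE_PMD;
		page_vma_mapped_walk_restart(&pvmw);
		continue;
	}

Whether a failed split should really end the walk for this VMA, or be handled
in some other way, is exactly the open question raised above.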