From: David Hildenbrand <david@redhat.com>
Date: Wed, 11 Jun 2025 16:00:41 +0200
Subject: Re: [PATCH v4 2/2] mm: Optimize mremap() by PTE batching
To: Dev Jain, akpm@linux-foundation.org
Cc: Liam.Howlett@oracle.com, lorenzo.stoakes@oracle.com, vbabka@suse.cz,
 jannh@google.com, pfalcato@suse.de, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, peterx@redhat.com, ryan.roberts@arm.com,
 mingo@kernel.org, libang.li@antgroup.com, maobibo@loongson.cn,
 zhengqi.arch@bytedance.com, baohua@kernel.org, anshuman.khandual@arm.com,
 willy@infradead.org, ioworker0@gmail.com, yang@os.amperecomputing.com,
 baolin.wang@linux.alibaba.com, ziy@nvidia.com, hughd@google.com
Message-ID: <43d9cb6e-1b8f-47b9-8c19-58fc7c74a71e@redhat.com>
In-Reply-To: <20250610035043.75448-3-dev.jain@arm.com>
References: <20250610035043.75448-1-dev.jain@arm.com> <20250610035043.75448-3-dev.jain@arm.com>
Organization: Red Hat

On 10.06.25 05:50, Dev Jain wrote:
> Use folio_pte_batch() to optimize move_ptes(). On arm64, if the ptes
> are painted with the contig bit, then ptep_get() will iterate through
> all 16 entries to collect a/d bits. Hence this optimization will result
> in a 16x reduction in the number of ptep_get() calls. Next,
> ptep_get_and_clear() will eventually call contpte_try_unfold() on every
> contig block, thus flushing the TLB for the complete large folio range.
> Instead, use get_and_clear_full_ptes() so as to elide TLBIs on each
> contig block, and only do them on the starting and ending contig block.
>
> For split folios, there will be no pte batching; nr_ptes will be 1. For
> pagetable splitting, the ptes will still point to the same large folio;
> for arm64, this results in the optimization described above, and for
> other arches (including the general case), a minor improvement is
> expected due to a reduction in the number of function calls.
>
> Signed-off-by: Dev Jain
> ---
>  mm/mremap.c | 39 ++++++++++++++++++++++++++++++++-------
>  1 file changed, 32 insertions(+), 7 deletions(-)
>
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 180b12225368..18b215521ada 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -170,6 +170,23 @@ static pte_t move_soft_dirty_pte(pte_t pte)
>  	return pte;
>  }
>
> +static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
> +		pte_t *ptep, pte_t pte, int max_nr)
> +{
> +	const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +	struct folio *folio;
> +
> +	if (max_nr == 1)
> +		return 1;
> +
> +	folio = vm_normal_folio(vma, addr, pte);
> +	if (!folio || !folio_test_large(folio))
> +		return 1;
> +
> +	return folio_pte_batch(folio, addr, ptep, pte, max_nr, flags, NULL,
> +			       NULL, NULL);
> +}
> +
>  static int move_ptes(struct pagetable_move_control *pmc,
>  		unsigned long extent, pmd_t *old_pmd, pmd_t *new_pmd)
>  {
> @@ -177,7 +194,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
>  	bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
>  	struct mm_struct *mm = vma->vm_mm;
>  	pte_t *old_ptep, *new_ptep;
> -	pte_t pte;
> +	pte_t old_pte, pte;
>  	pmd_t dummy_pmdval;
>  	spinlock_t *old_ptl, *new_ptl;
>  	bool force_flush = false;
> @@ -185,6 +202,8 @@ static int move_ptes(struct pagetable_move_control *pmc,
>  	unsigned long new_addr = pmc->new_addr;
>  	unsigned long old_end = old_addr + extent;
>  	unsigned long len = old_end - old_addr;
> +	int max_nr_ptes;
> +	int nr_ptes;
>  	int err = 0;
>
>  	/*
> @@ -236,14 +255,16 @@ static int move_ptes(struct pagetable_move_control *pmc,
>  	flush_tlb_batched_pending(vma->vm_mm);
>  	arch_enter_lazy_mmu_mode();
>
> -	for (; old_addr < old_end; old_ptep++, old_addr += PAGE_SIZE,
> -		new_ptep++, new_addr += PAGE_SIZE) {
> +	for (; old_addr < old_end; old_ptep += nr_ptes, old_addr += nr_ptes * PAGE_SIZE,
> +		new_ptep += nr_ptes, new_addr += nr_ptes * PAGE_SIZE) {
>  		VM_WARN_ON_ONCE(!pte_none(*new_ptep));
>
> -		if (pte_none(ptep_get(old_ptep)))
> +		nr_ptes = 1;
> +		max_nr_ptes = (old_end - old_addr) >> PAGE_SHIFT;
> +		old_pte = ptep_get(old_ptep);
> +		if (pte_none(old_pte))
>  			continue;
>
> -		pte = ptep_get_and_clear(mm, old_addr, old_ptep);
>  		/*
>  		 * If we are remapping a valid PTE, make sure
>  		 * to flush TLB before we drop the PTL for the
> @@ -255,8 +276,12 @@ static int move_ptes(struct pagetable_move_control *pmc,
>  		 * the TLB entry for the old mapping has been
>  		 * flushed.
>  		 */
> -		if (pte_present(pte))
> +		if (pte_present(old_pte)) {
> +			nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep,
> +							 old_pte, max_nr_ptes);
>  			force_flush = true;
> +		}
> +		pte = get_and_clear_full_ptes(mm, old_addr, old_ptep, nr_ptes, 0);
>  		pte = move_pte(pte, old_addr, new_addr);
>  		pte = move_soft_dirty_pte(pte);
>
> @@ -269,7 +294,7 @@ static int move_ptes(struct pagetable_move_control *pmc,
>  			else if (is_swap_pte(pte))
>  				pte = pte_swp_clear_uffd_wp(pte);
>  		}
> -		set_pte_at(mm, new_addr, new_ptep, pte);
> +		set_ptes(mm, new_addr, new_ptep, pte, nr_ptes);

What I dislike is that some paths work on a single PTE, and we implicitly
have to know that they don't apply for !pte_present. Like

	if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))

will not get batched yet. And that is hidden inside the
pte_marker_uffd_wp check ...

Should we properly separate both paths (present vs. !present), and while
at it, do some more cleanups? I'm thinking of the following on top (only
compile-tested)

diff --git a/mm/mremap.c b/mm/mremap.c
index 18b215521adae..b88abf02b34e0 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -155,21 +155,6 @@ static void drop_rmap_locks(struct vm_area_struct *vma)
 		i_mmap_unlock_write(vma->vm_file->f_mapping);
 }
 
-static pte_t move_soft_dirty_pte(pte_t pte)
-{
-	/*
-	 * Set soft dirty bit so we can notice
-	 * in userspace the ptes were moved.
-	 */
-#ifdef CONFIG_MEM_SOFT_DIRTY
-	if (pte_present(pte))
-		pte = pte_mksoft_dirty(pte);
-	else if (is_swap_pte(pte))
-		pte = pte_swp_mksoft_dirty(pte);
-#endif
-	return pte;
-}
-
 static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
 		pte_t *ptep, pte_t pte, int max_nr)
 {
@@ -260,7 +245,6 @@ static int move_ptes(struct pagetable_move_control *pmc,
 		VM_WARN_ON_ONCE(!pte_none(*new_ptep));
 
 		nr_ptes = 1;
-		max_nr_ptes = (old_end - old_addr) >> PAGE_SHIFT;
 		old_pte = ptep_get(old_ptep);
 		if (pte_none(old_pte))
 			continue;
@@ -277,24 +261,34 @@ static int move_ptes(struct pagetable_move_control *pmc,
 		 * flushed.
 		 */
 		if (pte_present(old_pte)) {
+			max_nr_ptes = (old_end - old_addr) >> PAGE_SHIFT;
 			nr_ptes = mremap_folio_pte_batch(vma, old_addr, old_ptep,
 							 old_pte, max_nr_ptes);
 			force_flush = true;
-		}
-		pte = get_and_clear_full_ptes(mm, old_addr, old_ptep, nr_ptes, 0);
-		pte = move_pte(pte, old_addr, new_addr);
-		pte = move_soft_dirty_pte(pte);
-
-		if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
-			pte_clear(mm, new_addr, new_ptep);
-		else {
-			if (need_clear_uffd_wp) {
-				if (pte_present(pte))
-					pte = pte_clear_uffd_wp(pte);
-				else if (is_swap_pte(pte))
+
+			pte = get_and_clear_full_ptes(mm, old_addr, old_ptep,
+						      nr_ptes, 0);
+			/*
+			 * Moving present PTEs requires special care on some
+			 * archs.
+			 */
+			pte = move_pte(pte, old_addr, new_addr);
+			/* make userspace aware that this pte moved. */
+			pte = pte_mksoft_dirty(pte);
+			if (need_clear_uffd_wp)
+				pte = pte_clear_uffd_wp(pte);
+			set_ptes(mm, new_addr, new_ptep, pte, nr_ptes);
+		} else if (need_clear_uffd_wp && pte_marker_uffd_wp(pte)) {
+			pte_clear(mm, old_addr, old_ptep);
+		} else {
+			pte_clear(mm, old_addr, old_ptep);
+			if (is_swap_pte(pte)) {
+				if (need_clear_uffd_wp)
 					pte = pte_swp_clear_uffd_wp(pte);
+				/* make userspace aware that this pte moved. */
+				pte = pte_swp_mksoft_dirty(pte);
 			}
-			set_ptes(mm, new_addr, new_ptep, pte, nr_ptes);
+			set_pte_at(mm, new_addr, new_ptep, pte);
 		}
 	}

Note that I don't know why we had the existing

-		if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
-			pte_clear(mm, new_addr, new_ptep);

I thought we would always expect that the destination pte is already
pte_none() ?

-- 
Cheers,

David / dhildenb