From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 2 Apr 2024 16:26:28 +0200
Subject: Re: [PATCH v12 2/8] mm/gup: Introduce check_and_migrate_movable_folios()
From: David Hildenbrand <david@redhat.com>
To: Vivek Kasireddy, dri-devel@lists.freedesktop.org, linux-mm@kvack.org
Cc: Matthew Wilcox, Christoph Hellwig, Jason Gunthorpe, Peter Xu
References: <20240225080008.1019653-1-vivek.kasireddy@intel.com> <20240225080008.1019653-3-vivek.kasireddy@intel.com>
In-Reply-To: <20240225080008.1019653-3-vivek.kasireddy@intel.com>
Organization: Red Hat
On 25.02.24 08:56, Vivek Kasireddy wrote:
> This helper is the folio equivalent of check_and_migrate_movable_pages().
> Therefore, all the rules that apply to check_and_migrate_movable_pages()
> also apply to this one as well. Currently, this helper is only used by
> memfd_pin_folios().
>
> This patch also includes changes to rename and convert the internal
> functions collect_longterm_unpinnable_pages() and
> migrate_longterm_unpinnable_pages() to work on folios. Since they
> are also used by check_and_migrate_movable_pages(), a temporary
> array is used to collect and share the folios with these functions.
>
> Cc: David Hildenbrand
> Cc: Matthew Wilcox
> Cc: Christoph Hellwig
> Cc: Jason Gunthorpe
> Cc: Peter Xu
> Suggested-by: David Hildenbrand
> Signed-off-by: Vivek Kasireddy
> ---
>   mm/gup.c | 129 +++++++++++++++++++++++++++++++++++++++----------
>   1 file changed, 92 insertions(+), 37 deletions(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 0a45eda6aaeb..1410af954a4e 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2099,20 +2099,24 @@ struct page *get_dump_page(unsigned long addr)
>
>   #ifdef CONFIG_MIGRATION
>   /*
> - * Returns the number of collected pages. Return value is always >= 0.
> + * Returns the number of collected folios. Return value is always >= 0.
>   */
> -static unsigned long collect_longterm_unpinnable_pages(
> -					struct list_head *movable_page_list,
> -					unsigned long nr_pages,
> +static unsigned long collect_longterm_unpinnable_folios(
> +					struct list_head *movable_folio_list,
> +					unsigned long nr_folios,
> +					struct folio **folios,
> +					struct page **pages)

This function really shouldn't consume both folios and pages.
Either use "folios" and handle the conversion from pages->folios in the
caller, or handle it similar to release_pages() where we can pass either
and simply always do page_folio() on the given pointer, using essentially
an abstracted pointer type and always calling page_folio() on that thing.

The easiest is likely to just do the page->folio conversion in the caller
by looping over the arrays once more. See below.

Temporary memory allocation can be avoided by using an abstracted pointer
type.

[...]

>
> +		folio = folios[i];
>   		if (folio == prev_folio)
>   			continue;
>   		prev_folio = folio;
> @@ -2126,7 +2130,7 @@ static unsigned long collect_longterm_unpinnable_pages(
>   			continue;
>
>   		if (folio_test_hugetlb(folio)) {
> -			isolate_hugetlb(folio, movable_page_list);
> +			isolate_hugetlb(folio, movable_folio_list);
>   			continue;
>   		}
>
> @@ -2138,7 +2142,7 @@
>   		if (!folio_isolate_lru(folio))
>   			continue;
>
> -		list_add_tail(&folio->lru, movable_page_list);
> +		list_add_tail(&folio->lru, movable_folio_list);
>   		node_stat_mod_folio(folio,
>   				    NR_ISOLATED_ANON + folio_is_file_lru(folio),
>   				    folio_nr_pages(folio));
> @@ -2148,27 +2152,28 @@
>   }
>
>   /*
> - * Unpins all pages and migrates device coherent pages and movable_page_list.
> - * Returns -EAGAIN if all pages were successfully migrated or -errno for failure
> - * (or partial success).
> + * Unpins all folios and migrates device coherent folios and movable_folio_list.
> + * Returns -EAGAIN if all folios were successfully migrated or -errno for
> + * failure (or partial success).
>   */
> -static int migrate_longterm_unpinnable_pages(
> -					struct list_head *movable_page_list,
> -					unsigned long nr_pages,
> -					struct page **pages)
> +static int migrate_longterm_unpinnable_folios(
> +					struct list_head *movable_folio_list,
> +					unsigned long nr_folios,
> +					struct folio **folios)
>   {
>   	int ret;
>   	unsigned long i;
>
> -	for (i = 0; i < nr_pages; i++) {
> -		struct folio *folio = page_folio(pages[i]);
> +	for (i = 0; i < nr_folios; i++) {
> +		struct folio *folio = folios[i];
>
>   		if (folio_is_device_coherent(folio)) {
>   			/*
> -			 * Migration will fail if the page is pinned, so convert
> -			 * the pin on the source page to a normal reference.
> +			 * Migration will fail if the folio is pinned, so
> +			 * convert the pin on the source folio to a normal
> +			 * reference.
>   			 */
> -			pages[i] = NULL;
> +			folios[i] = NULL;
>   			folio_get(folio);
>   			gup_put_folio(folio, 1, FOLL_PIN);
>
> @@ -2181,23 +2186,23 @@ static int migrate_longterm_unpinnable_pages(
>   		}
>
>   		/*
> -		 * We can't migrate pages with unexpected references, so drop
> +		 * We can't migrate folios with unexpected references, so drop
>   		 * the reference obtained by __get_user_pages_locked().
> -		 * Migrating pages have been added to movable_page_list after
> +		 * Migrating folios have been added to movable_folio_list after
>   		 * calling folio_isolate_lru() which takes a reference so the
> -		 * page won't be freed if it's migrating.
> +		 * folio won't be freed if it's migrating.
>   		 */
> -		unpin_user_page(pages[i]);
> -		pages[i] = NULL;
> +		unpin_folio(folios[i]);

Aha, that's where you call unpin_folio() on an anon folio. Then simply drop
the sanity check from inside unpin_folio() in patch #1.
> +		folios[i] = NULL;
>   	}
>
> -	if (!list_empty(movable_page_list)) {
> +	if (!list_empty(movable_folio_list)) {
>   		struct migration_target_control mtc = {
>   			.nid = NUMA_NO_NODE,
>   			.gfp_mask = GFP_USER | __GFP_NOWARN,
>   		};
>
> -		if (migrate_pages(movable_page_list, alloc_migration_target,
> +		if (migrate_pages(movable_folio_list, alloc_migration_target,
>   				  NULL, (unsigned long)&mtc, MIGRATE_SYNC,
>   				  MR_LONGTERM_PIN, NULL)) {
>   			ret = -ENOMEM;
> @@ -2205,15 +2210,15 @@ static int migrate_longterm_unpinnable_pages(
>   		}
>   	}
>
> -	putback_movable_pages(movable_page_list);
> +	putback_movable_pages(movable_folio_list);

This really needs a cleanup (independent of your work). We should rename it
to putback_movable_folios: it only operates on folios.

>
>   	return -EAGAIN;
>
>   err:
> -	for (i = 0; i < nr_pages; i++)
> -		if (pages[i])
> -			unpin_user_page(pages[i]);
> -	putback_movable_pages(movable_page_list);
> +	for (i = 0; i < nr_folios; i++)
> +		if (folios[i])
> +			unpin_folio(folios[i]);

Can unpin_folios() be used?
> +	putback_movable_pages(movable_folio_list);
>
>   	return ret;
>   }
> @@ -2237,16 +2242,60 @@ static int migrate_longterm_unpinnable_pages(
>   static long check_and_migrate_movable_pages(unsigned long nr_pages,
>   					    struct page **pages)
>   {
> +	unsigned long nr_folios = nr_pages;
>   	unsigned long collected;
> -	LIST_HEAD(movable_page_list);
> +	LIST_HEAD(movable_folio_list);
> +	struct folio **folios;
> +	long ret;
>
> -	collected = collect_longterm_unpinnable_pages(&movable_page_list,
> -						nr_pages, pages);
> +	folios = kmalloc_array(nr_folios, sizeof(*folios), GFP_KERNEL);
> +	if (!folios)
> +		return -ENOMEM;
> +
> +	collected = collect_longterm_unpinnable_folios(&movable_folio_list,
> +						       nr_folios, folios,
> +						       pages);
> +	if (!collected) {
> +		kfree(folios);
> +		return 0;
> +	}
> +
> +	ret = migrate_longterm_unpinnable_folios(&movable_folio_list,
> +						 nr_folios, folios);
> +	kfree(folios);
> +	return ret;

This function should likely be a pure wrapper around
check_and_migrate_movable_folios(). For example:

static long check_and_migrate_movable_pages(unsigned long nr_pages,
					    struct page **pages)
{
	struct folio **folios;
	long ret;

	folios = kmalloc_array(nr_pages, sizeof(*folios), GFP_KERNEL);
	if (!folios)
		return -ENOMEM;

	/* TODO, convert all pages to folios. */

	ret = check_and_migrate_movable_folios(nr_pages, folios);
	kfree(folios);
	return ret;
}

> +}
> +
> +/*
> + * Check whether all folios are *allowed* to be pinned. Rather confusingly, all

... "to be pinned possibly forever ("longterm")".

> + * folios in the range are required to be pinned via FOLL_PIN, before calling
> + * this routine.
> + *
> + * If any folios in the range are not allowed to be pinned, then this routine
> + * will migrate those folios away, unpin all the folios in the range and return
> + * -EAGAIN. The caller should re-pin the entire range with FOLL_PIN and then
> + * call this routine again.
> + *

[...]

-- 
Cheers,

David / dhildenb