From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 14 Feb 2025 22:59:50 +0100
From: David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH v7 2/8] mm/huge_memory: add two new (not yet used) functions for folio_split()
To: Zi Yan, linux-mm@kvack.org, Andrew Morton, "Kirill A . Shutemov", "Matthew Wilcox (Oracle)"
Cc: Ryan Roberts, Hugh Dickins, Yang Shi, Miaohe Lin, Kefeng Wang, Yu Zhao, John Hubbard, Baolin Wang, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org
References: <20250211155034.268962-1-ziy@nvidia.com> <20250211155034.268962-3-ziy@nvidia.com>
In-Reply-To: <20250211155034.268962-3-ziy@nvidia.com>
Organization: Red Hat
User-Agent: Mozilla Thunderbird
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 11.02.25 16:50, Zi Yan wrote:
> This is a preparation patch, both added functions are not used yet.
>
> The added __split_unmapped_folio() is able to split a folio with
> its mapping removed in two manners: 1) uniform split (the existing way),
> and 2) buddy allocator like split.
>
> The added __split_folio_to_order() can split a folio into any lower order.
> For uniform split, __split_unmapped_folio() calls it once to split
> the given folio to the new order. For buddy allocator split,
> __split_unmapped_folio() calls it (folio_order - new_order) times
> and each time splits the folio containing the given page to one lower
> order.
>
> Signed-off-by: Zi Yan
> ---
>   mm/huge_memory.c | 349 ++++++++++++++++++++++++++++++++++++++++++++++-
>   1 file changed, 348 insertions(+), 1 deletion(-)
>
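Just to restate the two modes with concrete numbers before I get to my
question further down -- a userspace toy of the bookkeeping (my own
sketch with made-up example values, not kernel code):

/* Toy model of the two split modes; plain C, no kernel dependencies. */
#include <stdio.h>

int main(void)
{
	int order = 9, new_order = 0;	/* hypothetical example values */

	/* Uniform split: one call, every resulting folio has new_order. */
	printf("uniform: 1 call -> %d folios of order %d\n",
	       1 << (order - new_order), new_order);

	/* Buddy-allocator-like split: (order - new_order) calls, each one
	 * splitting the folio that contains the target page in half. */
	printf("non-uniform: %d calls:", order - new_order);
	for (int o = order; o > new_order; o--)
		printf(" %d->%d", o, o - 1);
	printf("\n");
	return 0;
}

That is, one __split_folio_to_order() call producing 512 order-0 folios
in the uniform case, versus nine calls going 9->8, 8->7, ..., 1->0 in
the buddy-like case.
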
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index a0277f4154c2..12d3f515c408 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3262,7 +3262,6 @@ static void remap_page(struct folio *folio, unsigned long nr, int flags)
>   static void lru_add_page_tail(struct folio *folio, struct page *tail,
>   		struct lruvec *lruvec, struct list_head *list)
>   {
> -	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>   	VM_BUG_ON_FOLIO(PageLRU(tail), folio);
>   	lockdep_assert_held(&lruvec->lru_lock);
>
> @@ -3506,6 +3505,354 @@ bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
>   			caller_pins;
>   }
>
> +/*
> + * It splits @folio into @new_order folios and copies the @folio metadata to
> + * all the resulting folios.
> + */
> +static int __split_folio_to_order(struct folio *folio, int new_order)
> +{
> +	int curr_order = folio_order(folio);
> +	long nr_pages = folio_nr_pages(folio);
> +	long new_nr_pages = 1 << new_order;
> +	long index;
> +
> +	if (curr_order <= new_order)
> +		return -EINVAL;
> +
> +	/*
> +	 * Skip the first new_nr_pages, since the new folio from them have all
> +	 * the flags from the original folio.
> +	 */
> +	for (index = new_nr_pages; index < nr_pages; index += new_nr_pages) {
> +		struct page *head = &folio->page;
> +		struct page *new_head = head + index;
> +
> +		/*
> +		 * Careful: new_folio is not a "real" folio before we cleared PageTail.
> +		 * Don't pass it around before clear_compound_head().
> +		 */
> +		struct folio *new_folio = (struct folio *)new_head;
> +
> +		VM_BUG_ON_PAGE(atomic_read(&new_head->_mapcount) != -1, new_head);
> +
> +		/*
> +		 * Clone page flags before unfreezing refcount.
> +		 *
> +		 * After successful get_page_unless_zero() might follow flags change,
> +		 * for example lock_page() which set PG_waiters.
> +		 *
> +		 * Note that for mapped sub-pages of an anonymous THP,
> +		 * PG_anon_exclusive has been cleared in unmap_folio() and is stored in
> +		 * the migration entry instead from where remap_page() will restore it.
> +		 * We can still have PG_anon_exclusive set on effectively unmapped and
> +		 * unreferenced sub-pages of an anonymous THP: we can simply drop
> +		 * PG_anon_exclusive (-> PG_mappedtodisk) for these here.
> +		 */
> +		new_head->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
> +		new_head->flags |= (head->flags &
> +				((1L << PG_referenced) |
> +				 (1L << PG_swapbacked) |
> +				 (1L << PG_swapcache) |
> +				 (1L << PG_mlocked) |
> +				 (1L << PG_uptodate) |
> +				 (1L << PG_active) |
> +				 (1L << PG_workingset) |
> +				 (1L << PG_locked) |
> +				 (1L << PG_unevictable) |
> +#ifdef CONFIG_ARCH_USES_PG_ARCH_2
> +				 (1L << PG_arch_2) |
> +#endif
> +#ifdef CONFIG_ARCH_USES_PG_ARCH_3
> +				 (1L << PG_arch_3) |
> +#endif
> +				 (1L << PG_dirty) |
> +				 LRU_GEN_MASK | LRU_REFS_MASK));
> +
> +		/* ->mapping in first and second tail page is replaced by other uses */
> +		VM_BUG_ON_PAGE(new_nr_pages > 2 && new_head->mapping != TAIL_MAPPING,
> +			       new_head);
> +		new_head->mapping = head->mapping;
> +		new_head->index = head->index + index;
> +
> +		/*
> +		 * page->private should not be set in tail pages. Fix up and warn once
> +		 * if private is unexpectedly set.
> +		 */
> +		if (unlikely(new_head->private)) {
> +			VM_WARN_ON_ONCE_PAGE(true, new_head);
> +			new_head->private = 0;
> +		}
> +
> +		if (folio_test_swapcache(folio))
> +			new_folio->swap.val = folio->swap.val + index;
> +
> +		/* Page flags must be visible before we make the page non-compound. */
> +		smp_wmb();
> +
> +		/*
> +		 * Clear PageTail before unfreezing page refcount.
> +		 *
> +		 * After successful get_page_unless_zero() might follow put_page()
> +		 * which needs correct compound_head().
> +		 */
> +		clear_compound_head(new_head);
> +		if (new_order) {
> +			prep_compound_page(new_head, new_order);
> +			folio_set_large_rmappable(new_folio);
> +
> +			folio_set_order(folio, new_order);
> +		}
> +
> +		if (folio_test_young(folio))
> +			folio_set_young(new_folio);
> +		if (folio_test_idle(folio))
> +			folio_set_idle(new_folio);
> +
> +		folio_xchg_last_cpupid(new_folio, folio_last_cpupid(folio));
> +	}
> +
> +	if (!new_order)
> +		ClearPageCompound(&folio->page);
> +
> +	return 0;
> +}
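
The loop above only visits heads at multiples of new_nr_pages and
deliberately skips offset 0, since the first chunk reuses the original
head page and thus already carries the right flags. The offset/index
arithmetic in isolation, again as a userspace toy (hypothetical numbers,
not kernel code):

/* Split a hypothetical order-4 folio at file index 100 into order-2 chunks. */
#include <stdio.h>

int main(void)
{
	int curr_order = 4, new_order = 2;
	long nr_pages = 1L << curr_order;	/* 16 pages */
	long new_nr_pages = 1L << new_order;	/* 4 pages per new folio */
	long head_index = 100;			/* made-up folio->index */

	/* offset 0 is skipped: the original head stays in place */
	printf("offset 0: original head kept, index %ld\n", head_index);
	for (long index = new_nr_pages; index < nr_pages; index += new_nr_pages)
		printf("offset %ld: new head, index %ld\n",
		       index, head_index + index);
	return 0;
}

which yields new heads at page offsets 4, 8 and 12 with file indices
104, 108 and 112, matching the new_head->index = head->index + index
assignment above.
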
> +
> +/*
> + * It splits an unmapped @folio to lower order smaller folios in two ways.
> + * @folio: the to-be-split folio
> + * @new_order: the smallest order of the after split folios (since buddy
> + *             allocator like split generates folios with orders from @folio's
> + *             order - 1 to new_order).
> + * @page: in buddy allocator like split, the folio containing @page will be
> + *        split until its order becomes @new_order.
> + * @list: the after split folios will be added to @list if it is not NULL,
> + *        otherwise to LRU lists.
> + * @end: the end of the file @folio maps to. -1 if @folio is anonymous memory.
> + * @xas: xa_state pointing to folio->mapping->i_pages and locked by caller
> + * @mapping: @folio->mapping
> + * @uniform_split: if the split is uniform or not (buddy allocator like split)
> + *
> + *
> + * 1. uniform split: the given @folio into multiple @new_order small folios,
> + *    where all small folios have the same order. This is done when
> + *    uniform_split is true.
> + * 2. buddy allocator like (non-uniform) split: the given @folio is split into
> + *    half and one of the half (containing the given page) is split into half
> + *    until the given @page's order becomes @new_order. This is done when
> + *    uniform_split is false.
> + *
> + * The high level flow for these two methods are:
> + * 1. uniform split: a single __split_folio_to_order() is called to split the
> + *    @folio into @new_order, then we traverse all the resulting folios one by
> + *    one in PFN ascending order and perform stats, unfreeze, adding to list,
> + *    and file mapping index operations.
> + * 2. non-uniform split: in general, folio_order - @new_order calls to
> + *    __split_folio_to_order() are made in a for loop to split the @folio
> + *    to one lower order at a time. The resulting small folios are processed
> + *    like what is done during the traversal in 1, except the one containing
> + *    @page, which is split in next for loop.
> + *
> + * After splitting, the caller's folio reference will be transferred to the
> + * folio containing @page. The other folios may be freed if they are not mapped.
> + *
> + * In terms of locking, after splitting,
> + * 1. uniform split leaves @page (or the folio contains it) locked;
> + * 2. buddy allocator like (non-uniform) split leaves @folio locked.
> + *
> + *
> + * For !uniform_split, when -ENOMEM is returned, the original folio might be
> + * split. The caller needs to check the input folio.
> + */
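
The "orders from @folio's order - 1 to new_order" wording becomes
clearer once you enumerate the leftovers of a non-uniform split.
Working it through for an order-9 folio split to order-0 around page 3,
ignoring the anon order-1 restriction handled in the real code (again a
userspace toy, not kernel code):

/* Enumerate the folios left behind by a buddy-allocator-like split. */
#include <stdio.h>

int main(void)
{
	int order = 9, new_order = 0, target = 3; /* hypothetical inputs */
	long lo = 0; /* page offset of the folio still being split */

	for (int split_order = order - 1; split_order >= new_order; split_order--) {
		long half = 1L << split_order;

		/* Each step halves the current folio; the half without
		 * @target is final, the other half is split further. */
		if (target < lo + half)
			printf("final: order %d at [%ld, %ld)\n",
			       split_order, lo + half, lo + 2 * half);
		else {
			printf("final: order %d at [%ld, %ld)\n",
			       split_order, lo, lo + half);
			lo += half;
		}
	}
	printf("keeps caller's ref: order %d at [%ld, %ld) (contains page %d)\n",
	       new_order, lo, lo + (1L << new_order), target);
	return 0;
}

That prints exactly one leftover folio of each order from 8 down to 0,
plus the order-0 folio containing page 3, which ends up with the
caller's reference as described above.
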
> +static int __split_unmapped_folio(struct folio *folio, int new_order,
> +		struct page *page, struct list_head *list, pgoff_t end,
> +		struct xa_state *xas, struct address_space *mapping,
> +		bool uniform_split)
> +{
> +	struct lruvec *lruvec;
> +	struct address_space *swap_cache = NULL;
> +	struct folio *origin_folio = folio;
> +	struct folio *next_folio = folio_next(folio);
> +	struct folio *new_folio;
> +	struct folio *next;
> +	int order = folio_order(folio);
> +	int split_order;
> +	int start_order = uniform_split ? new_order : order - 1;
> +	int nr_dropped = 0;
> +	int ret = 0;
> +	bool stop_split = false;
> +
> +	if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
> +		/* a swapcache folio can only be uniformly split to order-0 */
> +		if (!uniform_split || new_order != 0)
> +			return -EINVAL;
> +
> +		swap_cache = swap_address_space(folio->swap);
> +		xa_lock(&swap_cache->i_pages);
> +	}
> +
> +	if (folio_test_anon(folio))
> +		mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
> +
> +	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> +	lruvec = folio_lruvec_lock(folio);
> +
> +	folio_clear_has_hwpoisoned(folio);
> +
> +	/*
> +	 * split to new_order one order at a time. For uniform split,
> +	 * folio is split to new_order directly.
> +	 */
> +	for (split_order = start_order;
> +	     split_order >= new_order && !stop_split;
> +	     split_order--) {
> +		int old_order = folio_order(folio);
> +		struct folio *release;
> +		struct folio *end_folio = folio_next(folio);
> +		int status;
> +
> +		/* order-1 anonymous folio is not supported */
> +		if (folio_test_anon(folio) && split_order == 1)
> +			continue;
> +		if (uniform_split && split_order != new_order)
> +			continue;
> +
> +		if (mapping) {
> +			/*
> +			 * uniform split has xas_split_alloc() called before
> +			 * irq is disabled to allocate enough memory, whereas
> +			 * non-uniform split can handle ENOMEM.
> +			 */
> +			if (uniform_split)
> +				xas_split(xas, folio, old_order);
> +			else {
> +				xas_set_order(xas, folio->index, split_order);
> +				xas_try_split(xas, folio, old_order,
> +					      GFP_NOWAIT);
> +				if (xas_error(xas)) {
> +					ret = xas_error(xas);
> +					stop_split = true;
> +					goto after_split;
> +				}
> +			}
> +		}
> +
> +		/* complete memcg works before add pages to LRU */
> +		split_page_memcg(&folio->page, old_order, split_order);
> +		split_page_owner(&folio->page, old_order, split_order);
> +		pgalloc_tag_split(folio, old_order, split_order);
> +
> +		status = __split_folio_to_order(folio, split_order);
> +

Stumbling over that code (sorry for the late reply ... ). That looks weird.

We split memcg/owner/pgalloc ... and then figure out in
__split_folio_to_order() that we don't want to ... split?

Should that all be moved into __split_folio_to_order() and performed only
when we really want to split?

-- 
Cheers,

David / dhildenb