From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 7 May 2025 13:10:21 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Baolin Wang
Cc: akpm@linux-foundation.org, david@redhat.com, hannes@cmpxchg.org,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com, npache@redhat.com,
	ryan.roberts@arm.com, dev.jain@arm.com, ziy@nvidia.com, vbabka@suse.cz,
	rppt@kernel.org, surenb@google.com, mhocko@suse.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] mm: convert do_set_pmd() to take a folio
Message-ID:
References: <8e33c8a65b46170dfd8ba6715d2115856a55b8f6.1746609191.git.baolin.wang@linux.alibaba.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:

On Wed, May 07, 2025 at 05:26:13PM +0800, Baolin Wang wrote:
> In do_set_pmd(), we always use the folio->page to build PMD mappings for
> the entire folio. Since all callers of do_set_pmd() already hold a stable
> folio, converting do_set_pmd() to take a folio is safe and more
> straightforward.

What testing did you do of this?

> -vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
> +vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio)
>  {
> -	struct folio *folio = page_folio(page);
>  	struct vm_area_struct *vma = vmf->vma;
>  	bool write = vmf->flags & FAULT_FLAG_WRITE;
>  	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>  	pmd_t entry;
>  	vm_fault_t ret = VM_FAULT_FALLBACK;
> +	struct page *page;

Because I see nowhere in this patch that you initialise 'page'.  And
that's really the important part.  You seem to be assuming that a folio
will never be larger than PMD size, and I'm not comfortable with that
assumption.  It's a limitation I put in place a few years ago so we
didn't have to find and fix all those assumptions immediately, but I
imagine that some day we'll want to have larger folios.

So unless you can derive _which_ page in the folio we want to map from
the vmf, NACK this patch.
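
For the avoidance of doubt, what I mean is something along these lines
(untested sketch only; it assumes the folio actually covers the whole
PMD range at haddr, and that folio->index and linear_page_index() refer
to the same mapping here):

	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	pgoff_t pgoff = linear_page_index(vmf->vma, haddr);
	struct page *page;

	/* Fall back unless the folio spans the entire PMD range */
	if (pgoff < folio->index ||
	    pgoff + HPAGE_PMD_NR > folio->index + folio_nr_pages(folio))
		return VM_FAULT_FALLBACK;

	/* Offset from the folio's first page gives the page backing haddr */
	page = folio_page(folio, pgoff - folio->index);

Something of that shape inside do_set_pmd() is the kind of derivation
I'm asking for; the exact helpers are just what we happen to have today.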