Date: Fri, 25 Apr 2025 19:59:38 -0400
From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org,
	intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	linux-trace-kernel@vger.kernel.org, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	"H. Peter Anvin", Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Simona Vetter, Andrew Morton,
	Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
	"Liam R. Howlett", Lorenzo Stoakes, Vlastimil Babka, Jann Horn,
	Pedro Falcato
Subject: Re: [PATCH v1 02/11] mm: convert track_pfn_insert() to pfnmap_sanitize_pgprot()
References: <20250425081715.1341199-1-david@redhat.com>
 <20250425081715.1341199-3-david@redhat.com>
 <78f88303-6b00-42cf-8977-bf7541fa45a9@redhat.com>
In-Reply-To: <78f88303-6b00-42cf-8977-bf7541fa45a9@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Fri, Apr 25, 2025 at 09:48:50PM +0200, David Hildenbrand wrote:
> On 25.04.25 21:31, Peter Xu wrote:
> > On Fri, Apr 25, 2025 at 10:17:06AM +0200, David Hildenbrand wrote:
> > > ... by factoring it out from track_pfn_remap().
> > > 
> > > For PMDs/PUDs, actually check the full range, and trigger a fallback
> > > if we run into this "different memory types / cachemodes" scenario.
> > 
> > The current patch looks like it still passes PAGE_SIZE into the new
> > helper at all track_pfn_insert() call sites, so it seems this comment
> > does not 100% match the code?  Or I may have misread somewhere.
> 
> No, you're right, while reshuffling the patches I forgot to add the actual
> PMD/PUD size.
> 
> > 
> > Maybe it's still easier to keep the single-pfn lookup as never
> > failing.. more below.
> > 
> > [...]
> > > 
> > >   /*
> > > @@ -1556,8 +1553,23 @@ static inline void untrack_pfn_clear(struct vm_area_struct *vma)
> > >   extern int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
> > >   			    unsigned long pfn, unsigned long addr,
> > >   			    unsigned long size);
> > > -extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
> > > -			     pfn_t pfn);
> > > +
> > > +/**
> > > + * pfnmap_sanitize_pgprot - sanitize the pgprot for a pfn range
> > 
> > Nit: s/sanitize/update|setup|.../?
> > 
> > But maybe you have good reason to use sanitize.  No strong opinions.
> 
> What it does on PAT (the only implementation so far ...) is looking up the
> memory type to select the caching mode that can be used.
> 
> "sanitize" was IMHO a good fit, because we must make sure that we don't
> use the wrong caching mode.
> 
> update/setup/... don't make that quite clear. Any other suggestions?

I'm very poor at naming.. :(

So far anything seems slightly better than "sanitize" to me, as the word
"sanitize" is also used in memtype.c for another purpose.. see
sanitize_phys().

> > 
> > > + * @pfn: the start of the pfn range
> > > + * @size: the size of the pfn range
> > > + * @prot: the pgprot to sanitize
> > > + *
> > > + * Sanitize the given pgprot for a pfn range, for example, adjusting the
> > > + * cachemode.
> > > + *
> > > + * This function cannot fail for a single page, but can fail for multiple
> > > + * pages.
> > > + *
> > > + * Returns 0 on success and -EINVAL on error.
> > > + */
> > > +int pfnmap_sanitize_pgprot(unsigned long pfn, unsigned long size,
> > > +			   pgprot_t *prot);
> > >   extern int track_pfn_copy(struct vm_area_struct *dst_vma,
> > >   			  struct vm_area_struct *src_vma, unsigned long *pfn);
> > >   extern void untrack_pfn_copy(struct vm_area_struct *dst_vma,
> > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > index fdcf0a6049b9f..b8ae5e1493315 100644
> > > --- a/mm/huge_memory.c
> > > +++ b/mm/huge_memory.c
> > > @@ -1455,7 +1455,9 @@ vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn, bool write)
> > >   		return VM_FAULT_OOM;
> > >   	}
> > > -	track_pfn_insert(vma, &pgprot, pfn);
> > > +	if (pfnmap_sanitize_pgprot(pfn_t_to_pfn(pfn), PAGE_SIZE, &pgprot))
> > > +		return VM_FAULT_FALLBACK;
> > 
> > Would "pgtable" leak if it fails?  If it's PAGE_SIZE, IIUC it won't
> > ever trigger, though.
> > 
> > Maybe we could have a "void pfnmap_sanitize_pgprot_pfn(&pgprot, pfn)" to
> > replace track_pfn_insert() and never fail?  Dropping the vma ref is
> > definitely a win already in all cases.
> 
> It could be a simple wrapper around pfnmap_sanitize_pgprot(), yes. That's
> certainly helpful for the single-page case.
> 
> Regarding never failing here: we should check the whole range. We have to
> make sure that none of the pages has a memory type / caching mode that is
> incompatible with what we set up.

Would that happen in the real world?  IIUC the per-vma registration needs
to happen first, which checks for memtype conflicts in the first place, or
reserve_pfn_range() could already have failed.

Here it's the fault path looking up the memtype, so I would expect it to be
guaranteed that all pfns under the same vma follow the verified (and same)
memtype?

Thanks,

-- 
Peter Xu
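
[Illustrative sketch, not from the thread: one way the fallback in the
quoted vmf_insert_pfn_pmd() hunk might avoid the "pgtable" leak raised
above.  It assumes pgtable was preallocated with pte_alloc_one() earlier in
the function (the VM_FAULT_OOM context in the hunk suggests the
arch_needs_pgtable_deposit() path) and is initialized to NULL otherwise;
this is a guess at the shape of a fix, not the posted patch.]

	if (pfnmap_sanitize_pgprot(pfn_t_to_pfn(pfn), PAGE_SIZE, &pgprot)) {
		/* Give back the page table preallocated for the deposit. */
		if (pgtable)
			pte_free(vma->vm_mm, pgtable);
		return VM_FAULT_FALLBACK;
	}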
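
[Illustrative sketch of the never-failing single-pfn wrapper discussed
above.  Only pfnmap_sanitize_pgprot() and its signature come from the
quoted hunk; the wrapper name, argument order, and the WARN_ON_ONCE() are
assumptions based on the suggestion in this thread, not posted code.]

/*
 * Hypothetical single-pfn wrapper: a single page cannot span multiple
 * memory types, so the range variant is not expected to fail here and
 * callers need no error handling (cf. track_pfn_insert()).
 */
static inline void pfnmap_sanitize_pgprot_pfn(pgprot_t *prot, unsigned long pfn)
{
	/* Warn if the single-page lookup unexpectedly fails. */
	WARN_ON_ONCE(pfnmap_sanitize_pgprot(pfn, PAGE_SIZE, prot));
}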